
UNIT III GRAPHS 9

Elementary Graph Algorithms: Representations of Graphs – Breadth-First Search – Depth-First


Search – Topological Sort – Strongly Connected Components- Minimum Spanning Trees:
Growing a Minimum Spanning Tree – Kruskal and Prim- Single-Source Shortest Paths: The
Bellman-Ford algorithm – Single-Source Shortest paths in Directed Acyclic Graphs –
Dijkstra‘s Algorithm; Dynamic Programming - All-Pairs Shortest Paths: Shortest Paths and
Matrix Multiplication – The Floyd-Warshall Algorithm

Elementary Graph Algorithms


Graph

A graph can be defined as a group of vertices together with the edges that connect those vertices.
Unlike a tree, a graph may contain cycles, and its vertices (nodes) can stand in arbitrary
relationships to one another instead of a strict parent-child relationship.

Definition

A graph G can be defined as an ordered set G(V, E) where V(G) represents the set of vertices
and E(G) represents the set of edges which are used to connect these vertices.

A Graph G(V, E) with 5 vertices (A, B, C, D, E) and six edges ((A,B), (B,C), (C,E), (E,D),
(D,B), (D,A)) is shown in the following figure.

Directed and Undirected Graph

A graph can be directed or undirected. In an undirected graph, edges have no direction
associated with them. The graph in the figure above is undirected, since none of its edges
carries a direction: if an edge exists between vertex A
and B, then it can be traversed from B to A as well as from A to B.


In a directed graph, edges form ordered pairs. An edge represents a specific path from some
vertex A to another vertex B, where node A is called the initial node and node B the terminal
node.

A directed graph is shown in the following figure.

Graph Terminology

Path

A path can be defined as the sequence of nodes followed in order to reach some terminal
node V from an initial node U.

Closed Path

A path is called a closed path if its initial node is the same as its terminal node, i.e., if
V0 = VN.

Simple Path

A path P is called a simple path if all of its nodes are distinct. If all nodes are distinct
with the exception that V0 = VN, the path is called a closed simple path.

Cycle

A cycle can be defined as the path which has no repeated edges or vertices except the first and
last vertices.

Connected Graph
A connected graph is one in which a path exists between every pair of vertices (u, v) in
V. There are no isolated nodes in a connected graph.

Complete Graph

A complete graph is one in which every node is connected to every other node. A complete
graph contains n(n-1)/2 edges, where n is the number of nodes in the graph.

Weighted Graph

In a weighted graph, each edge is assigned a value such as a length or weight. The weight
of an edge e, written w(e), indicates the cost of traversing that edge.

Digraph

A digraph is a directed graph in which each edge of the graph is associated with some direction
and the traversing can be done only in the specified direction.

Loop

An edge whose two end points are the same vertex is called a loop.

Adjacent Nodes

If two nodes u and v are connected via an edge e, then the nodes u and v are called as neighbours
or adjacent nodes.

Degree of the Node

The degree of a node is the number of edges connected to that node. A node with
degree 0 is called an isolated node.
Graph Representation

By graph representation, we simply mean the technique used to store a graph in the
computer's memory.

There are two ways to store a graph in the computer's memory. In this part of the tutorial, we
discuss each of them in detail.

1. Sequential Representation

In sequential representation, we use an adjacency matrix to store the mapping represented by
vertices and edges. In the adjacency matrix, the rows and columns are indexed by the graph's
vertices; a graph having n vertices will have an n x n matrix.

An entry Mij in the adjacency matrix representation of an undirected graph G will be 1 if there
exists an edge between Vi and Vj.

An undirected graph and its adjacency matrix representation is shown in the following
figure.

In the above figure, we can see that the mapping among the vertices (A, B, C, D, E) is
represented by the adjacency matrix, which is also shown in the figure.

The adjacency matrices for directed and undirected graphs differ. In a
directed graph, an entry Aij will be 1 only when there is an edge directed from Vi to Vj.

A directed graph and its adjacency matrix representation is shown in the following
figure.
The representation of a weighted directed graph is different. Instead of filling an entry
with 1, the non-zero entries of the adjacency matrix hold the weights of the
respective edges.

The weighted directed graph along with the adjacency matrix representation is shown
in the following figure.
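The adjacency matrix described above can be sketched in Java. This is a minimal illustration; the 4-vertex graph, its edge list, and the helper name build are assumptions made for the example, not taken from the figures:

```java
public class AdjacencyMatrixDemo {
    // builds an n x n adjacency matrix for an undirected graph from an edge list
    static int[][] build(int n, int[][] edges) {
        int[][] m = new int[n][n];          // entries default to 0 (no edge)
        for (int[] e : edges) {
            m[e[0]][e[1]] = 1;              // undirected: set both symmetric entries
            m[e[1]][e[0]] = 1;
        }
        return m;
    }

    public static void main(String[] args) {
        // hypothetical 4-vertex graph with edges (0,1), (0,2), (1,3)
        int[][] m = build(4, new int[][]{ {0, 1}, {0, 2}, {1, 3} });
        System.out.println(m[1][0]);        // 1 : edge (0,1) is stored symmetrically
        System.out.println(m[2][3]);        // 0 : no edge between 2 and 3
    }
}
```

For a weighted graph, the same method could store w(e) instead of 1 in each non-zero entry.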

2. Linked Representation
In the linked representation, an adjacency list is used to store the Graph into the
computer's memory.
Consider the undirected graph shown in the following figure and check the adjacency
list representation.

An adjacency list is maintained for each node present in the graph; each cell stores a
node value and a pointer to the next adjacent node of the respective node. Once all the
adjacent nodes have been traversed, NULL is stored in the pointer field of the last node
of the list. In an undirected graph, the sum of the lengths of the adjacency lists is
equal to twice the number of edges.

Consider the directed graph shown in the following figure and check the adjacency list
representation of the graph.

In a directed graph, the sum of lengths of all the adjacency lists is equal to the number
of edges present in the graph.
In the case of a weighted directed graph, each list cell contains an extra field holding
the weight of the edge. The adjacency list representation of a weighted directed graph
is shown in the following figure.
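The linked representation can be sketched in Java using lists of lists. This is a minimal illustration under the same hypothetical 4-vertex, 3-edge graph as assumed before (not the graph in the figures); note how the sum of list lengths comes out to twice the number of edges:

```java
import java.util.ArrayList;
import java.util.List;

public class AdjacencyListDemo {
    // builds the adjacency lists of an undirected graph from an edge list
    static List<List<Integer>> build(int n, int[][] edges) {
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) {              // undirected: record both directions
            adj.get(e[0]).add(e[1]);
            adj.get(e[1]).add(e[0]);
        }
        return adj;
    }

    public static void main(String[] args) {
        // hypothetical 4-vertex graph with 3 edges
        List<List<Integer>> adj = build(4, new int[][]{ {0, 1}, {0, 2}, {1, 3} });
        int total = 0;
        for (List<Integer> l : adj) total += l.size();
        System.out.println(total);           // 6 = 2 * 3, twice the number of edges
    }
}
```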
BFS algorithm
In this article, we will discuss the BFS algorithm in the data structure. Breadth-first search is a
graph traversal algorithm that starts traversing the graph from the root node and explores all
the neighboring nodes. Then, it selects the nearest node and explores all the unexplored nodes.
While using BFS for traversal, any node in the graph can be considered as the root node.

There are many ways to traverse a graph, but among them, BFS is the most commonly used
approach. It is an algorithm for searching all the vertices of a tree or graph data structure.
BFS puts every vertex of the graph into one of two categories - visited and non-visited. It
selects a single node in the graph and, after that, visits all the nodes adjacent to the
selected node.

Applications of BFS algorithm


The applications of breadth-first-algorithm are given as follows -

o BFS can be used to find the neighboring locations from a given source location.
o In a peer-to-peer network, BFS algorithm can be used as a traversal method to find all
the neighboring nodes. Most torrent clients, such as BitTorrent, uTorrent, etc. employ
this process to find "seeds" and "peers" in the network.
o BFS can be used in web crawlers to create web page indexes. It is one of the main
algorithms that can be used to index web pages. It starts traversing from the source page
and follows the links associated with the page. Here, every web page is considered as a
node in the graph.
o BFS is used to determine shortest paths (in unweighted graphs) and spanning trees.
o BFS is also used in Cheney's algorithm for copying garbage collection.
o It can be used in the Ford-Fulkerson method to compute the maximum flow in a flow
network.

Algorithm
The steps involved in the BFS algorithm to explore a graph are given as follows -

Step 1: SET STATUS = 1 (ready state) for each node in G


Step 2: Enqueue the starting node A and set its STATUS = 2 (waiting state)
Step 3: Repeat Steps 4 and 5 until QUEUE is empty
Step 4: Dequeue a node N. Process it and set its STATUS = 3 (processed state).
Step 5: Enqueue all the neighbours of N that are in the ready state (whose STATUS = 1) and
set their STATUS = 2 (waiting state)
[END OF LOOP]
Step 6: EXIT
Example of BFS algorithm
Now, let's understand the working of BFS algorithm by using an example. In the example given
below, there is a directed graph having 7 vertices.

In the above graph, minimum path 'P' can be found by using the BFS that will start from Node
A and end at Node E. The algorithm uses two queues, namely QUEUE1 and QUEUE2.
QUEUE1 holds all the nodes that are to be processed, while QUEUE2 holds all the nodes that
are processed and deleted from QUEUE1.
Now, let's start examining the graph starting from Node A.
Step 1 - First, add A to queue1 and NULL to queue2.
1. QUEUE1 = {A}
2. QUEUE2 = {NULL}
Step 2 - Now, delete node A from queue1 and add it into queue2. Insert all neighbors of node
A to queue1.
1. QUEUE1 = {B, D}
2. QUEUE2 = {A}
Step 3 - Now, delete node B from queue1 and add it into queue2. Insert all neighbors of node
B to queue1.
1. QUEUE1 = {D, C, F}
2. QUEUE2 = {A, B}
Step 4 - Now, delete node D from queue1 and add it into queue2. The only neighbour of node
D is F, which is already inserted, so it will not be inserted again.
1. QUEUE1 = {C, F}
2. QUEUE2 = {A, B, D}
Step 5 - Delete node C from queue1 and add it into queue2. Insert all neighbors of node C to
queue1.
1. QUEUE1 = {F, E, G}
2. QUEUE2 = {A, B, D, C}
Step 6 - Delete node F from queue1 and add it into queue2. Since all the neighbours of node F
are already present, we will not insert them again.
1. QUEUE1 = {E, G}
2. QUEUE2 = {A, B, D, C, F}
Step 7 - Delete node E from queue1. Since all of its neighbours have already been added, we
will not insert them again. Now all reachable nodes have been visited, and the target node E
has been moved into queue2.
1. QUEUE1 = {G}
2. QUEUE2 = {A, B, D, C, F, E}
Complexity of BFS algorithm
The time complexity of BFS depends upon the data structure used to represent the graph. With
an adjacency list, it is O(V + E), since in the worst case the algorithm explores every vertex
and edge; here V is the number of vertices and E the number of edges.
The space complexity of BFS can be expressed as O(V), where V is the number of vertices.
Implementation of BFS algorithm
Now, let's see the implementation of BFS algorithm in java.
In this code, we are using an adjacency list to represent our graph. The adjacency list fits
the Breadth-First Search algorithm in Java nicely, because once a node is dequeued from the
head (or start) of the queue, we only have to travel through its list of attached nodes once.

In this example, the graph that we are using to demonstrate the code is given
as follows -
import java.io.*;
import java.util.*;

public class BFSTraversal
{
    private int vertex;                     /* total number of vertices in the graph */
    private LinkedList<Integer> adj[];      /* adjacency list */
    private Queue<Integer> que;             /* maintaining a queue */

    BFSTraversal(int v)
    {
        vertex = v;
        adj = new LinkedList[vertex];
        for (int i = 0; i < v; i++)
        {
            adj[i] = new LinkedList<>();
        }
        que = new LinkedList<Integer>();
    }

    void insertEdge(int v, int w)
    {
        adj[v].add(w);      /* adding a directed edge to the adjacency list */
    }

    void BFS(int n)
    {
        boolean nodes[] = new boolean[vertex];  /* marks the nodes already explored */
        int a = 0;
        nodes[n] = true;
        que.add(n);                             /* the start node is added to the queue */
        while (que.size() != 0)
        {
            n = que.poll();                     /* remove the head element of the queue */
            System.out.print(n + " ");          /* print it */
            /* iterate through the linked list and push all unexplored neighbours */
            for (int i = 0; i < adj[n].size(); i++)
            {
                a = adj[n].get(i);
                if (!nodes[a])                  /* only enqueue unexplored nodes */
                {
                    nodes[a] = true;
                    que.add(a);
                }
            }
        }
    }

    public static void main(String args[])
    {
        BFSTraversal graph = new BFSTraversal(10);
        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);
        graph.insertEdge(6, 5);
        graph.insertEdge(7, 5);
        graph.insertEdge(7, 8);
        System.out.println("Breadth First Traversal for the graph is:");
        graph.BFS(2);
    }
}
Output

Breadth First Traversal for the graph is:
2 4 7 5 8
Depth First Search (DFS) Algorithm
The depth first search (DFS) algorithm starts at an initial node of the graph G and then goes
deeper and deeper until it finds the goal node or a node that has no unvisited children. The
algorithm then backtracks from the dead end towards the most recent node that has not yet
been completely explored.
The data structure used in DFS is a stack, and the process is otherwise similar to the BFS
algorithm. In DFS, the edges that lead to an unvisited node are called discovery edges, while
the edges that lead to an already visited node are called back edges.
Algorithm
o Step 1: SET STATUS = 1 (ready state) for each node in G
o Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
o Step 3: Repeat Steps 4 and 5 until STACK is empty
o Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
o Step 5: Push on the stack all the neighbours of N that are in the ready state (whose
STATUS = 1) and set their
STATUS = 2 (waiting state)
[END OF LOOP]
o Step 6: EXIT
Example :
Consider the graph G along with its adjacency list, given in the figure below. Calculate the
order to print all the nodes of the graph starting from node H, by using depth first search
(DFS) algorithm.
Solution :
Push H onto the stack
1. STACK : H
Pop the top element of the stack, i.e. H, print it and push all the neighbours of H onto the
stack that are in the ready state.
1. Print H
2. STACK : A
Pop the top element of the stack i.e. A, print it and push all the neighbours of A onto the stack
that are in ready state.
1. Print A
2. Stack : B, D
Pop the top element of the stack i.e. D, print it and push all the neighbours of D onto the stack
that are in ready state.
1. Print D
2. Stack : B, F
Pop the top element of the stack i.e. F, print it and push all the neighbours of F onto the stack
that are in ready state.
1. Print F
2. Stack : B
Pop the top of the stack i.e. B and push all the neighbours
1. Print B
2. Stack : C
Pop the top of the stack i.e. C and push all the neighbours.
1. Print C
2. Stack : E, G
Pop the top of the stack i.e. G and push all its neighbours.
1. Print G
2. Stack : E
Pop the top of the stack i.e. E and push all its neighbours.
1. Print E
2. Stack :
Hence, the stack now becomes empty and all the nodes of the graph have been traversed.
The printing sequence of the graph will be :
1. H → A → D → F → B → C → G → E
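The stack-based procedure traced above can be sketched in Java. This is a minimal iterative illustration; the 5-vertex undirected graph below is a hypothetical example, not the graph G from the figure:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class DFSTraversal {
    // returns the order in which nodes are printed by a stack-based DFS from start
    static List<Integer> dfs(List<List<Integer>> adj, int start) {
        boolean[] visited = new boolean[adj.size()];
        Deque<Integer> stack = new ArrayDeque<>();
        List<Integer> order = new ArrayList<>();
        stack.push(start);
        while (!stack.isEmpty()) {
            int n = stack.pop();                 // pop the top node
            if (visited[n]) continue;            // skip nodes processed earlier
            visited[n] = true;
            order.add(n);                        // "print" the node
            for (int nb : adj.get(n))            // push all ready-state neighbours
                if (!visited[nb]) stack.push(nb);
        }
        return order;
    }

    public static void main(String[] args) {
        // hypothetical 5-vertex undirected graph: 0-1, 0-2, 1-3, 2-4
        List<List<Integer>> adj = new ArrayList<>();
        for (int i = 0; i < 5; i++) adj.add(new ArrayList<>());
        int[][] edges = { {0, 1}, {0, 2}, {1, 3}, {2, 4} };
        for (int[] e : edges) { adj.get(e[0]).add(e[1]); adj.get(e[1]).add(e[0]); }
        System.out.println(dfs(adj, 0));         // prints [0, 2, 4, 1, 3]
    }
}
```

As in the worked example, the neighbour pushed last is popped first, which is why the traversal goes deep along one branch before backtracking.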
Spanning Tree
In this article, we will discuss the spanning tree and the minimum spanning tree. But
before moving directly towards the spanning tree, let's first see a brief description of
the graph and its types.

Graph
A graph can be defined as a group of vertices and edges to connect these vertices. The
types of graphs are given as follows -

o Undirected graph: An undirected graph is a graph in which all the edges do


not point to any particular direction, i.e., they are not unidirectional; they are
bidirectional. It can also be defined as a graph with a set of V vertices and a set
of E edges, each edge connecting two different vertices.
o Connected graph: A connected graph is a graph in which a path always exists
from a vertex to any other vertex. A graph is connected if we can reach any
vertex from any other vertex by following edges in either direction.
o Directed graph: Directed graphs are also known as digraphs. A graph is a
directed graph (or digraph) if all the edges present between any vertices or
nodes of the graph are directed or have a defined direction.

Now, let's move towards the topic spanning tree.

What is a spanning tree?


A spanning tree is a subgraph of an undirected connected graph that includes all of the
graph's vertices using the least possible number of edges. If any vertex is missed, it is
not a spanning tree. A spanning tree is a subset of the graph that has no cycles and
cannot be disconnected.


A spanning tree consists of (n-1) edges, where 'n' is the number of vertices (or nodes).
Edges of the spanning tree may or may not have weights assigned to them. All the
possible spanning trees created from the given graph G would have the same number
of vertices, but the number of edges in the spanning tree would be equal to the
number of vertices in the given graph minus 1.
A complete undirected graph on n vertices has n^(n-2) spanning trees (Cayley's formula).
For example, if n = 5, the number of possible spanning trees is 5^(5-2) = 5^3 = 125.
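The count above can be checked with a few lines of Java. This is a small illustrative sketch of Cayley's formula; the helper name count is an assumption:

```java
public class SpanningTreeCount {
    // Cayley's formula: a complete graph on n vertices has n^(n-2) spanning trees
    static long count(int n) {
        long result = 1;
        for (int i = 0; i < n - 2; i++) result *= n;
        return result;
    }

    public static void main(String[] args) {
        System.out.println(count(5));   // 125, matching the n = 5 example above
        System.out.println(count(4));   // 16
    }
}
```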

Applications of the spanning tree


Basically, a spanning tree is used to find a minimum path to connect all nodes of the
graph. Some of the common applications of the spanning tree are listed as follows -

o Cluster Analysis
o Civil network planning
o Computer network routing protocol

Now, let's understand the spanning tree with the help of an example.

Example of Spanning tree


Suppose the graph be -

As discussed above, a spanning tree contains the same number of vertices as the
graph, the number of vertices in the above graph is 5; therefore, the spanning tree will
contain 5 vertices. The edges in the spanning tree will be equal to the number of
vertices in the graph minus 1. So, there will be 4 edges in the spanning tree.
Some of the possible spanning trees that will be created from the above graph are
given as follows -

Properties of spanning-tree
Some of the properties of the spanning tree are given as follows -

o There can be more than one spanning tree of a connected graph G.


o A spanning tree does not have any cycles or loops.
o A spanning tree is minimally connected, so removing one edge from the tree
will make the graph disconnected.
o A spanning tree is maximally acyclic, so adding one edge to the tree will create
a loop.
o A maximum of n^(n-2) spanning trees can be created from a complete graph.
o A spanning tree has n-1 edges, where 'n' is the number of nodes.
o For a connected graph, a spanning tree can be constructed by removing a maximum of
(e-n+1) edges, where 'e' is the number of edges and 'n' is the number of vertices.

So, a spanning tree is a subset of connected graph G, and there is no spanning tree of
a disconnected graph.

Minimum Spanning tree


A minimum spanning tree can be defined as the spanning tree in which the sum of the
weights of the edge is minimum. The weight of the spanning tree is the sum of the
weights given to the edges of the spanning tree. In the real world, this weight can be
considered as the distance, traffic load, congestion, or any random value.

Example of minimum spanning tree


Let's understand the minimum spanning tree with the help of an example.

The sum of the edges of the above graph is 16. Now, some of the possible spanning
trees created from the above graph are -

So, the minimum spanning tree that is selected from the above spanning trees for the
given weighted graph is -
Applications of minimum spanning tree
The applications of the minimum spanning tree are given as follows -

o Minimum spanning tree can be used to design water-supply networks,


telecommunication networks, and electrical grids.
o It can be used to find paths in the map.

Algorithms for Minimum spanning tree


A minimum spanning tree can be found from a weighted graph by using the
algorithms given below -

o Prim's Algorithm
o Kruskal's Algorithm

Let's see a brief description of both of the algorithms listed above.

Prim's algorithm - It is a greedy algorithm that starts with an empty spanning tree. It
is used to find the minimum spanning tree from the graph. This algorithm finds the
subset of edges that includes every vertex of the graph such that the sum of the
weights of the edges can be minimized.

Kruskal's algorithm - This algorithm is also used to find the minimum spanning tree
for a connected weighted graph. Kruskal's algorithm also follows greedy approach,
which finds an optimum solution at every stage instead of focusing on a global
optimum.

Here, we have discussed the spanning tree and the minimum spanning tree along with their
properties, examples, and applications.
Prim’s Algorithm-

• Prim’s Algorithm is a famous greedy algorithm.


• It is used for finding the Minimum Spanning Tree (MST) of a given graph.
• To apply Prim’s algorithm, the given graph must be weighted, connected and
undirected.

Prim’s Algorithm Implementation-

The implementation of Prim’s Algorithm is explained in the following steps-

Step-01:

• Randomly choose any vertex as the starting vertex.


• In practice, a vertex incident to the edge of least weight is usually selected.

Step-02:

• Find all the edges that connect the tree to new vertices.
• Find the least weight edge among those edges and include it in the existing tree.
• If including that edge creates a cycle, then reject that edge and look for the next least
weight edge.

Step-03:

• Keep repeating step-02 until all the vertices are included and Minimum Spanning Tree
(MST) is obtained.

Prim’s Algorithm Time Complexity-

Worst case time complexity of Prim’s Algorithm is-


• O(ElogV) using binary heap
• O(E + VlogV) using Fibonacci heap
Time Complexity Analysis

• If an adjacency list is used to represent the graph, all the vertices can be traversed in
O(V + E) time.
• We traverse all the vertices of the graph and use a min heap for storing the vertices not
yet included in the MST.
• To get the minimum weight edge, we use the min heap as a priority queue.
• Min heap operations like extracting the minimum element and decreasing a key value take
O(logV) time.

So, overall time complexity


= O(E + V) x O(logV)
= O((E + V)logV)
= O(ElogV)

This time complexity can be improved and reduced to O(E + VlogV) using Fibonacci heap.

PRACTICE PROBLEMS BASED ON PRIM’S


ALGORITHM-

Problem-01:

Construct the minimum spanning tree (MST) for the given graph using Prim’s Algorithm-
Solution-

The above discussed steps are followed to find the minimum cost spanning tree using
Prim’s Algorithm-

Step-01:

Step-02:

Step-03:
Step-04:

Step-05:

Step-06:
Since all the vertices have been included in the MST, so we stop.

Now, Cost of Minimum Spanning Tree


= Sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14
= 99 units

Problem-02:

Using Prim’s Algorithm, find the cost of minimum spanning tree (MST) of the given
graph-
Solution-

The minimum spanning tree obtained by the application of Prim’s Algorithm on the given
graph is as shown below-

Now, Cost of Minimum Spanning Tree


= Sum of all edge weights
= 1 + 4 + 2 + 6 + 3 + 10
= 26 units

Pseudocode for Prim’s Algorithm


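The steps above can be sketched in Java. This is a minimal "lazy deletion" version that uses java.util.PriorityQueue as the binary min heap; the sample graph, the int[]{vertex, weight} pair encoding, and the helper addEdge are assumptions made for the example:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.PriorityQueue;

public class PrimMST {
    // adj.get(u) holds int[]{v, w} pairs; returns the total MST weight
    // (the graph is assumed to be weighted, connected and undirected)
    static int prim(List<List<int[]>> adj) {
        int n = adj.size();
        boolean[] inMST = new boolean[n];
        // heap of int[]{weight, vertex}, ordered by weight
        PriorityQueue<int[]> pq = new PriorityQueue<>((a, b) -> a[0] - b[0]);
        pq.add(new int[]{0, 0});                  // Step-01: start from vertex 0
        int total = 0;
        while (!pq.isEmpty()) {
            int[] top = pq.poll();
            int u = top[1];
            if (inMST[u]) continue;               // stale entry: would create a cycle
            inMST[u] = true;
            total += top[0];                      // include the least-weight crossing edge
            for (int[] e : adj.get(u))            // Step-02: edges to new vertices
                if (!inMST[e[0]]) pq.add(new int[]{e[1], e[0]});
        }
        return total;
    }

    static void addEdge(List<List<int[]>> adj, int u, int v, int w) {
        adj.get(u).add(new int[]{v, w});
        adj.get(v).add(new int[]{u, w});
    }

    public static void main(String[] args) {
        // hypothetical weighted graph: 0-1 (4), 0-2 (1), 1-2 (2), 1-3 (5), 2-3 (8)
        List<List<int[]>> adj = new ArrayList<>();
        for (int i = 0; i < 4; i++) adj.add(new ArrayList<>());
        addEdge(adj, 0, 1, 4); addEdge(adj, 0, 2, 1);
        addEdge(adj, 1, 2, 2); addEdge(adj, 1, 3, 5);
        addEdge(adj, 2, 3, 8);
        System.out.println(prim(adj));            // MST edges 0-2, 1-2, 1-3 : 1 + 2 + 5 = 8
    }
}
```

With the binary heap each edge is pushed at most once, giving the O(ElogV) bound discussed above.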
Kruskal’s Algorithm-

• Kruskal’s Algorithm is a famous greedy algorithm.


• It is used for finding the Minimum Spanning Tree (MST) of a given graph.
• To apply Kruskal’s algorithm, the given graph must be weighted, connected and
undirected.

Kruskal’s Algorithm Implementation-

The implementation of Kruskal’s Algorithm is explained in the following steps-

Step-01:

• Sort all the edges from low weight to high weight.

Step-02:

• Take the edge with the lowest weight and use it to connect the vertices of graph.
• If adding an edge creates a cycle, then reject that edge and go for the next least
weight edge.

Step-03:

• Keep adding edges until all the vertices are connected and a Minimum Spanning Tree
(MST) is obtained.

Thumb Rule to Remember

The above steps may be reduced to the following thumb rule-


• Simply draw all the vertices on the paper.
• Connect these vertices using edges with minimum weights such that no cycle gets formed.
Kruskal’s Algorithm Time Complexity-

Worst case time complexity of Kruskal’s Algorithm


= O(ElogV) or O(ElogE)

Analysis-

• The edges are maintained as min heap.


• The next edge can be obtained in O(logE) time if graph has E edges.
• Reconstruction of heap takes O(E) time.
• So, Kruskal’s Algorithm takes O(ElogE) time.
• The value of E can be at most O(V²).
• So, O(logV) and O(logE) are the same.

Special Case-

• If the edges are already sorted, then there is no need to construct min heap.
• So, deletion from min heap time is saved.
• In this case, time complexity of Kruskal’s Algorithm = O(E + V)

PRACTICE PROBLEMS BASED ON KRUSKAL’S


ALGORITHM-

Problem-01:

Construct the minimum spanning tree (MST) for the given graph using Kruskal’s
Algorithm-
Solution-

To construct MST using Kruskal’s Algorithm,


• Simply draw all the vertices on the paper.
• Connect these vertices using edges with minimum weights such that no cycle gets
formed.

Step-01:

Step-02:
Step-03:

Step-04:

Step-05:
Step-06:

Step-07:

Since all the vertices have been connected / included in the MST, so we stop.
Weight of the MST
= Sum of all edge weights
= 10 + 25 + 22 + 12 + 16 + 14
= 99 units

Pseudocode for Kruskal’s Algorithm


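The steps above can be sketched in Java. This is a minimal illustration that sorts the edges and uses a union-find structure (with path compression) to reject cycle-forming edges; the sample graph and the int[]{u, v, w} edge encoding are assumptions made for the example:

```java
import java.util.Arrays;

public class KruskalMST {
    static int[] parent;

    static int find(int x) {                       // union-find with path compression
        return parent[x] == x ? x : (parent[x] = find(parent[x]));
    }

    // edges are int[]{u, v, w}; returns the total MST weight for a connected graph
    static int kruskal(int n, int[][] edges) {
        parent = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
        Arrays.sort(edges, (a, b) -> a[2] - b[2]); // Step-01: sort edges by weight
        int total = 0;
        for (int[] e : edges) {
            int ru = find(e[0]), rv = find(e[1]);
            if (ru == rv) continue;                // Step-02: reject edges forming a cycle
            parent[ru] = rv;                       // otherwise merge the two components
            total += e[2];
        }
        return total;
    }

    public static void main(String[] args) {
        // hypothetical weighted graph: 0-1 (4), 0-2 (1), 1-2 (2), 1-3 (5), 2-3 (8)
        int[][] edges = { {0, 1, 4}, {0, 2, 1}, {1, 2, 2}, {1, 3, 5}, {2, 3, 8} };
        System.out.println(kruskal(4, edges));     // 1 + 2 + 5 = 8
    }
}
```

The sort dominates the running time, which matches the O(ElogE) = O(ElogV) bound discussed above.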
Single-Source Shortest Paths in Directed
Acyclic Graphs
By relaxing the edges of a weighted DAG (Directed Acyclic Graph) G = (V, E) according
to a topological sort of its vertices, we can compute shortest paths from a single
source in Θ(V + E) time. Shortest paths are always well defined in a DAG, since even if
there are negative-weight edges, no negative-weight cycles can exist.

DAG - SHORTEST - PATHS (G, w, s)


1. Topologically sort the vertices of G.
2. INITIALIZE - SINGLE- SOURCE (G, s)
3. for each vertex u taken in topologically sorted order
4. do for each vertex v ∈ Adj [u]
5. do RELAX (u, v, w)

The running time of this algorithm is determined by line 1 and by the for loop of lines 3 - 5.
The topological sort can be implemented in Θ(V + E) time. In the for loop of lines 3 -
5, as in Dijkstra's algorithm, there is one iteration per vertex. For each vertex, the
edges that leave the vertex are each examined exactly once. Unlike Dijkstra's algorithm,
we use only O(1) time per edge. The running time is thus Θ(V + E), which is linear in
the size of an adjacency list representation of the graph.
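The procedure above can be sketched in Java. This is a minimal illustration that uses a Kahn-style topological sort and then relaxes the outgoing edges of each vertex in that order; the sample DAG and the int[]{u, v, w} edge encoding are assumptions, not the graph from the example below:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;

public class DagShortestPaths {
    // edges are int[]{u, v, w}; returns d[] of shortest distances from source s
    static int[] dagShortestPaths(int n, int[][] edges, int s) {
        List<List<int[]>> adj = new ArrayList<>();
        int[] indeg = new int[n];
        for (int i = 0; i < n; i++) adj.add(new ArrayList<>());
        for (int[] e : edges) { adj.get(e[0]).add(new int[]{e[1], e[2]}); indeg[e[1]]++; }

        // Kahn's algorithm: vertices leave the queue in topological order
        Deque<Integer> q = new ArrayDeque<>();
        for (int v = 0; v < n; v++) if (indeg[v] == 0) q.add(v);
        int[] d = new int[n];
        Arrays.fill(d, Integer.MAX_VALUE);
        d[s] = 0;                                  // INITIALIZE-SINGLE-SOURCE
        while (!q.isEmpty()) {
            int u = q.poll();
            for (int[] e : adj.get(u)) {
                if (d[u] != Integer.MAX_VALUE && d[u] + e[1] < d[e[0]])
                    d[e[0]] = d[u] + e[1];         // RELAX(u, v, w)
                if (--indeg[e[0]] == 0) q.add(e[0]);
            }
        }
        return d;
    }

    public static void main(String[] args) {
        // hypothetical DAG with a negative edge: 0->1 (3), 0->2 (2), 2->3 (-3), 1->3 (1)
        int[][] edges = { {0, 1, 3}, {0, 2, 2}, {2, 3, -3}, {1, 3, 1} };
        System.out.println(Arrays.toString(dagShortestPaths(4, edges, 0)));
        // prints [0, 3, 2, -1]: the negative edge poses no problem in a DAG
    }
}
```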

Example:

Step1: To topologically sort vertices apply DFS (Depth First Search) and then arrange
vertices in linear order by decreasing order of finish time.

Now, take each vertex in topologically sorted order and relax each edge.

1. adj [s] → t, x
   0 + 3 < ∞, so d[t] ← 3
   0 + 2 < ∞, so d[x] ← 2

2. adj [t] → r, x
   3 + 1 < ∞, so d[r] ← 4
   3 + 5 > 2, so d[x] is not updated

3. adj [x] → y
   2 - 3 < ∞, so d[y] ← -1

4. adj [y] → r
   -1 + 4 = 3 < 4, so d[r] ← 3

Thus the shortest paths are:

1. s to x is 2
2. s to y is -1
3. s to t is 3
4. s to r is 3
Bellman Ford Algorithm
Bellman ford algorithm is a single-source shortest path algorithm. This algorithm is used to
find the shortest distance from the single vertex to all the other vertices of a weighted graph.
There are various other algorithms used to find the shortest path, like Dijkstra's algorithm. If
the weighted graph contains negative weight values, Dijkstra's algorithm is not guaranteed to
produce the correct answer. In contrast, the Bellman-Ford algorithm guarantees the correct
answer even if the weighted graph contains negative weight values, provided there is no
negative-weight cycle reachable from the source.
Rule of this algorithm
We will go on relaxing all the edges (n - 1) times, where n is the number of vertices.
Consider the below graph:

As we can observe, the above graph contains some negative weights. The graph has 6 vertices,
so we will go on relaxing the edges (6 - 1) = 5 times. The loop will iterate 5 times to get
the correct answer. If the loop is iterated more than 5 times, the answer will still be the
same, i.e., there would be no further change in the distances between the vertices.
Relaxing means:
if (d(u) + c(u, v) < d(v))
    d(v) = d(u) + c(u, v)
To find the shortest paths in the above graph, the first step is to note down all the edges,
which are given below:
(A, B), (A, C), (A, D), (B, E), (C, E), (D, C), (D, F), (E, F), (C, B)
Let's consider the source vertex as 'A'; therefore, the distance value at vertex A is 0 and the
distance value at all the other vertices is infinity, as shown below:

Since the graph has six vertices, it will have five iterations.
First iteration
Consider the edge (A, B). Denote vertex 'A' as 'u' and vertex 'B' as 'v'. Now use the relaxing
formula:
d(u) = 0
d(v) = ∞
c(u , v) = 6
Since (0 + 6) is less than ∞, so update
1. d(v) = d(u) + c(u , v)
d(v) = 0 + 6 = 6
Therefore, the distance of vertex B is 6.
Consider the edge (A, C). Denote vertex 'A' as 'u' and vertex 'C' as 'v'. Now use the relaxing
formula:
d(u) = 0
d(v) = ∞
c(u , v) = 4
Since (0 + 4) is less than ∞, so update

1. d(v) = d(u) + c(u , v)


d(v) = 0 + 4 = 4
Therefore, the distance of vertex C is 4.
Consider the edge (A, D). Denote vertex 'A' as 'u' and vertex 'D' as 'v'. Now use the relaxing
formula:
d(u) = 0
d(v) = ∞
c(u , v) = 5
Since (0 + 5) is less than ∞, so update

1. d(v) = d(u) + c(u , v)


d(v) = 0 + 5 = 5
Therefore, the distance of vertex D is 5.
Consider the edge (B, E). Denote vertex 'B' as 'u' and vertex 'E' as 'v'. Now use the relaxing
formula:
d(u) = 6
d(v) = ∞
c(u , v) = -1
Since (6 - 1) is less than ∞, so update

1. d(v) = d(u) + c(u , v)


d(v) = 6 - 1= 5
Therefore, the distance of vertex E is 5.
Consider the edge (C, E). Denote vertex 'C' as 'u' and vertex 'E' as 'v'. Now use the relaxing
formula:
d(u) = 4
d(v) = 5
c(u , v) = 3
Since (4 + 3) is greater than 5, so there will be no updation. The value at vertex E is 5.
Consider the edge (D, C). Denote vertex 'D' as 'u' and vertex 'C' as 'v'. Now use the relaxing
formula:
d(u) = 5
d(v) = 4
c(u , v) = -2
Since (5 -2) is less than 4, so update

1. d(v) = d(u) + c(u , v)


d(v) = 5 - 2 = 3
Therefore, the distance of vertex C is 3.
Consider the edge (D, F). Denote vertex 'D' as 'u' and vertex 'F' as 'v'. Now use the relaxing
formula:
d(u) = 5
d(v) = ∞
c(u , v) = -1
Since (5 -1) is less than ∞, so update

1. d(v) = d(u) + c(u , v)


d(v) = 5 - 1 = 4
Therefore, the distance of vertex F is 4.
Consider the edge (E, F). Denote vertex 'E' as 'u' and vertex 'F' as 'v'. Now use the relaxing
formula:
d(u) = 5
d(v) = ∞
c(u , v) = 3
Since (5 + 3) is greater than 4, so there would be no updation on the distance value of vertex
F.
Consider the edge (C, B). Denote vertex 'C' as 'u' and vertex 'B' as 'v'. Now use the relaxing
formula:
d(u) = 3
d(v) = 6
c(u , v) = -2
Since (3 - 2) is less than 6, so update

1. d(v) = d(u) + c(u , v)


d(v) = 3 - 2 = 1
Therefore, the distance of vertex B is 1.
Now the first iteration is completed. We move to the second iteration.
Second iteration:
In the second iteration, we again check all the edges. The first edge is (A, B). Since (0 + 6) is
greater than 1 so there would be no updation in the vertex B.
The next edge is (A, C). Since (0 + 4) is greater than 3 so there would be no updation in the
vertex C.
The next edge is (A, D). Since (0 + 5) equals to 5 so there would be no updation in the vertex
D.
The next edge is (B, E). Since (1 - 1) equals 0, which is less than 5, we update:
d(E) = d(B) + c(B, E) = 1 - 1 = 0
The next edge is (C, E). Since (3 + 3) equals to 6 which is greater than 5 so there would be no
updation in the vertex E.
The next edge is (D, C). Since (5 - 2) equals to 3 so there would be no updation in the vertex
C.
The next edge is (D, F). Since (5 - 1) equals to 4 so there would be no updation in the vertex
F.
The next edge is (E, F). Since (5 + 3) equals to 8 which is greater than 4 so there would be no
updation in the vertex F.
The next edge is (C, B). Since (3 - 2) equals 1, which is not less than 1, there is no update at vertex B.
Third iteration
We will perform the same steps as we did in the previous iterations. We will observe that
there will be no updation in the distance of vertices.

The final distances of the vertices are:

A: 0
B: 1
C: 3
D: 5
E: 0
F: 3
Time Complexity
The time complexity of the Bellman-Ford algorithm is O(|V| · |E|), since each of the |V| - 1 iterations relaxes every edge once.
1. function bellmanFord(G, S)
2. for each vertex V in G
3. distance[V] <- infinite
4. previous[V] <- NULL
5. distance[S] <- 0
6.
7. repeat |V| - 1 times
8. for each edge (U,V) in G
9. tempDistance <- distance[U] + edge_weight(U, V)
10. if tempDistance < distance[V]
11. distance[V] <- tempDistance
12. previous[V] <- U
13.
14. for each edge (U,V) in G
15. if distance[U] + edge_weight(U, V) < distance[V]
16. Error: Negative Cycle Exists
17.
18. return distance[], previous[]
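The pseudocode above can be turned into a short runnable sketch (Python here for illustration, since the notes use pseudocode); the edge list is taken from the worked example, with the vertex names A-F matching the figures:

```python
def bellman_ford(vertices, edges, source):
    # Distance to every vertex starts at infinity, except the source.
    distance = {v: float('inf') for v in vertices}
    distance[source] = 0

    # Relax every edge |V| - 1 times.
    for _ in range(len(vertices) - 1):
        for u, v, w in edges:
            if distance[u] + w < distance[v]:
                distance[v] = distance[u] + w

    # A further improvement here would mean a negative-weight cycle.
    for u, v, w in edges:
        if distance[u] + w < distance[v]:
            raise ValueError("negative cycle exists")
    return distance

# Edge list (u, v, weight) taken from the worked example above.
edges = [('A', 'B', 6), ('A', 'C', 4), ('A', 'D', 5),
         ('B', 'E', -1), ('C', 'E', 3), ('D', 'C', -2),
         ('D', 'F', -1), ('E', 'F', 3), ('C', 'B', -2)]
dist = bellman_ford('ABCDEF', edges, 'A')
```

Running this reproduces the final distances derived step by step above (A: 0, B: 1, C: 3, D: 5, E: 0, F: 3).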
Drawbacks of the Bellman-Ford algorithm
o The Bellman-Ford algorithm does not produce a correct answer if the sum of the edge
weights of a cycle is negative. Let's understand this property through an example. Consider
the below graph.

o In the above graph, we consider vertex 1 as the source vertex and assign it the value 0.
We assign the value infinity to the other vertices, as shown below:

The edges can be written as:

(1, 3), (1, 2), (2, 4), (3, 2), (4, 3)
First iteration
Consider the edge (1, 3). Denote vertex '1' as 'u' and vertex '3' as 'v'. Now use the
relaxing formula:
d(u) = 0
d(v) = ∞
c(u , v) = 5
Since (0 + 5) is less than ∞, we update:
d(v) = d(u) + c(u, v) = 0 + 5 = 5
Therefore, the distance of vertex 3 is 5.
Consider the edge (1, 2). Denote vertex '1' as 'u' and vertex '2' as 'v'. Now use the
relaxing formula:
d(u) = 0
d(v) = ∞
c(u , v) = 4
Since (0 + 4) is less than ∞, we update:
d(v) = d(u) + c(u, v) = 0 + 4 = 4
Therefore, the distance of vertex 2 is 4.
Consider the edge (3, 2). Denote vertex '3' as 'u' and vertex '2' as 'v'. Now use the
relaxing formula:
d(u) = 5
d(v) = 4
c(u , v) = 7
Since (5 + 7) is greater than 4, so there would be no updation in the vertex 2.
Consider the edge (2, 4). Denote vertex '2' as 'u' and vertex '4' as 'v'. Now use the
relaxing formula:
d(u) = 4
d(v) = ∞
c(u , v) = 7
Since (4 + 7) equals 11, which is less than ∞, we update:
d(v) = d(u) + c(u, v) = 4 + 7 = 11
Therefore, the distance of vertex 4 is 11.
Consider the edge (4, 3). Denote vertex '4' as 'u' and vertex '3' as 'v'. Now use the
relaxing formula:
d(u) = 11
d(v) = 5
c(u , v) = -15
Since (11 - 15) equals -4, which is less than 5, we update:
d(v) = d(u) + c(u, v) = 11 - 15 = -4
Therefore, the distance of vertex 3 is -4.
Now we move to the second iteration.
Second iteration
Now, again we will check all the edges. The first edge is (1, 3). Since (0 + 5) equals to 5
which is greater than -4 so there would be no updation in the vertex 3.
The next edge is (1, 2). Since (0 + 4) equals to 4 so there would be no updation in the vertex
2.
The next edge is (3, 2). Since (-4 + 7) equals 3, which is less than 4, we update:
d(2) = d(3) + c(3, 2) = -4 + 7 = 3
Therefore, the value at vertex 2 is 3.
The next edge is (2, 4). Since (3 + 7) equals 10, which is less than 11, we update:
d(4) = d(2) + c(2, 4) = 3 + 7 = 10
Therefore, the value at vertex 4 is 10.
The next edge is (4, 3). Since (10 - 15) equals -5, which is less than -4, we update:
d(3) = d(4) + c(4, 3) = 10 - 15 = -5
Therefore, the value at vertex 3 is -5.
Now we move to the third iteration.
Third iteration
Now again we will check all the edges. The first edge is (1, 3). Since (0 + 5) equals to 5
which is greater than -5 so there would be no updation in the vertex 3.
The next edge is (1, 2). Since (0 + 4) equals to 4 which is greater than 3 so there would be no
updation in the vertex 2.
The next edge is (3, 2). Since (-5 + 7) equals 2, which is less than 3, we update:
d(2) = d(3) + c(3, 2) = -5 + 7 = 2
Therefore, the value at vertex 2 is 2.
The next edge is (2, 4). Since (2 + 7) equals 9, which is less than 10, we update:
d(4) = d(2) + c(2, 4) = 2 + 7 = 9
Therefore, the value at vertex 4 is 9.
The next edge is (4, 3). Since (9 - 15) equals -6, which is less than -5, we update:
d(3) = d(4) + c(4, 3) = 9 - 15 = -6
Therefore, the value at vertex 3 is -6.

Since the graph contains 4 vertices, the Bellman-Ford algorithm performs only 3 iterations. If we perform a 4th iteration on the graph, the distances of the vertices from the source should not change. If any distance still changes, the Bellman-Ford algorithm is not providing a correct answer.
4th iteration
The first edge is (1, 3). Since (0 +5) equals to 5 which is greater than -6 so there would be no
change in the vertex 3.
The next edge is (1, 2). Since (0 + 4) is greater than 2 so there would be no updation.
The next edge is (3, 2). Since (-6 + 7) equals 1, which is less than 2, we update:
d(2) = d(3) + c(3, 2) = -6 + 7 = 1
Therefore, the value at vertex 2 becomes 1. The distance of a vertex changed in the 4th
iteration, so we conclude that the Bellman-Ford algorithm does not work when the graph
contains a negative-weight cycle.
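A sketch of the same routine with the extra detection pass, run on this 4-vertex graph; the edge weights come from the example, and the cycle 2 → 4 → 3 → 2 has total weight 7 - 15 + 7 = -1:

```python
def bellman_ford_detect(n, edges, source):
    # Distances indexed 1..n; index 0 is unused for readability.
    dist = [float('inf')] * (n + 1)
    dist[source] = 0
    for _ in range(n - 1):              # n - 1 relaxation rounds
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Extra pass: if any edge can still be relaxed, a negative
    # cycle reachable from the source must exist.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None                 # negative cycle detected
    return dist

edges = [(1, 3, 5), (1, 2, 4), (2, 4, 7), (3, 2, 7), (4, 3, -15)]
result = bellman_ford_detect(4, edges, 1)  # None: negative cycle found
```

Because the cycle weight is negative, some edge can always be relaxed again after the n - 1 rounds, so the detection pass fires and the routine reports the cycle instead of returning the (meaningless) distances.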
Dijkstra Algorithm
Dijkstra's algorithm is a single-source shortest-path algorithm. Here, single-source
means that only one source is given, and we have to find the shortest path from the
source to all the nodes. The algorithm assumes that all edge weights are non-negative.

Let's understand the working of Dijkstra's algorithm. Consider the below graph.

First, we have to consider any vertex as a source vertex. Suppose we consider vertex 0
as a source vertex.

Here we assume that 0 as a source vertex, and distance to all the other vertices is
infinity. Initially, we do not know the distances. First, we will find out the vertices which
are directly connected to the vertex 0. As we can observe in the above graph that two
vertices are directly connected to vertex 0.
Let's assume that the vertex 0 is represented by 'x' and the vertex 1 is represented by
'y'. The distance of vertex y is updated by checking the relaxation condition:

d(x) + c(x, y) < d(y)

= (0 + 4) < ∞

= 4 < ∞

Since 4 < ∞, we will update d(1) from ∞ to 4.

Therefore, we come to the conclusion that the general rule for relaxing an edge (u, v) is:

if (d(u) + c(u, v) < d(v))

d(v) = d(u) + c(u, v)

Now we consider vertex 0 same as 'x' and vertex 4 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (0 + 8) < ∞

=8<∞

Therefore, the value of d(y) is 8. We replace the infinity value of vertices 1 and 4 with
the values 4 and 8 respectively. Now, we have found the shortest path from the vertex
0 to 1 and 0 to 4. Therefore, vertex 0 is selected. Now, we will compare all the vertices
except the vertex 0. Since vertex 1 has the lowest value, i.e., 4; therefore, vertex 1 is
selected.

Since vertex 1 is selected, so we consider the path from 1 to 2, and 1 to 4. We will not
consider the path from 1 to 0 as the vertex 0 is already selected.

First, we calculate the distance between the vertex 1 and 2. Consider the vertex 1 as 'x',
and the vertex 2 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (4 + 8) < ∞
= 12 < ∞

Since 12<∞ so we will update d(2) from ∞ to 12.

Now, we calculate the distance between the vertex 1 and vertex 4. Consider the vertex
1 as 'x' and the vertex 4 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (4 + 11) < 8

= 15 < 8

Since 15 is not less than 8, we will not update d(4); it remains 8.

Till now, two nodes have been selected, i.e., 0 and 1. Now we have to compare the
nodes except the node 0 and 1. The node 4 has the minimum distance, i.e., 8. Therefore,
vertex 4 is selected.

Since vertex 4 is selected, so we will consider all the direct paths from the vertex 4. The
direct paths from vertex 4 are 4 to 0, 4 to 1, 4 to 8, and 4 to 5. Since the vertices 0 and
1 have already been selected so we will not consider the vertices 0 and 1. We will
consider only two vertices, i.e., 8 and 5.

First, we consider the vertex 8. First, we calculate the distance between the vertex 4
and 8. Consider the vertex 4 as 'x', and the vertex 8 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (8 + 7) < ∞

= 15 < ∞

Since 15 is less than the infinity so we update d(8) from infinity to 15.

Now, we consider the vertex 5. First, we calculate the distance between the vertex 4
and 5. Consider the vertex 4 as 'x', and the vertex 5 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (8 + 1) < ∞

=9<∞

Since 9 is less than infinity, we update d(5) from infinity to 9.


Till now, three nodes have been selected, i.e., 0, 1, and 4. Now we have to compare the
nodes except the nodes 0, 1 and 4. The node 5 has the minimum value, i.e., 9.
Therefore, vertex 5 is selected.

Since the vertex 5 is selected, so we will consider all the direct paths from vertex 5. The
direct paths from vertex 5 are 5 to 8, and 5 to 6.

First, we consider the vertex 8. First, we calculate the distance between the vertex 5
and 8. Consider the vertex 5 as 'x', and the vertex 8 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (9 + 15) < 15

= 24 < 15

Since 24 is not less than 15 so we will not update the value d(8) from 15 to 24.

Now, we consider the vertex 6. First, we calculate the distance between the vertex 5
and 6. Consider the vertex 5 as 'x', and the vertex 6 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (9 + 2) < ∞

= 11 < ∞

Since 11 is less than infinity, we update d(6) from infinity to 11.

Till now, nodes 0, 1, 4 and 5 have been selected. We will compare the nodes except
the selected nodes. The node 6 has the lowest value as compared to other nodes.
Therefore, vertex 6 is selected.

Since vertex 6 is selected, we consider all the direct paths from vertex 6. The direct
paths from vertex 6 are 6 to 2, 6 to 3, and 6 to 7.

First, we consider the vertex 2. Consider the vertex 6 as 'x', and the vertex 2 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (11 + 4) < 12

= 15 < 12

Since 15 is not less than 12, we will not update d(2); it remains 12.
Now we consider the vertex 3. Consider the vertex 6 as 'x', and the vertex 3 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (11 + 14) < ∞

= 25 < ∞

Since 25 is less than ∞, so we will update d(3) from ∞ to 25.

Now we consider the vertex 7. Consider the vertex 6 as 'x', and the vertex 7 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (11 + 10) < ∞

= 21 < ∞

Since 21 is less than ∞, we will update d(7) from ∞ to 21.

Till now, nodes 0, 1, 4, 5, and 6 have been selected. Now we compare all the
unvisited nodes, i.e., 2, 3, 7, and 8. Node 2 has the minimum value, i.e., 12, among
the unvisited nodes, so node 2 is selected.

Since node 2 is selected, so we consider all the direct paths from node 2. The direct
paths from node 2 are 2 to 8, 2 to 6, and 2 to 3.

First, we consider the vertex 8. Consider the vertex 2 as 'x' and 8 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (12 + 2) < 15

= 14 < 15

Since 14 is less than 15, we will update d(8) from 15 to 14.

Now, we consider the vertex 6. Consider the vertex 2 as 'x' and 6 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (12 + 4) < 11

= 16 < 11

Since 16 is not less than 11 so we will not update d(6) from 11 to 16.
Now, we consider the vertex 3. Consider the vertex 2 as 'x' and 3 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (12 + 7) < 25

= 19 < 25

Since 19 is less than 25, we will update d(3) from 25 to 19.

Till now, nodes 0, 1, 2, 4, 5, and 6 have been selected. We compare all the unvisited
nodes, i.e., 3, 7, and 8. Among nodes 3, 7, and 8, node 8 has the minimum value. The
nodes which are directly connected to node 8 are 2, 4, and 5. Since all the directly
connected nodes are selected so we will not consider any node for the updation.

The unvisited nodes are 3 and 7. Among the nodes 3 and 7, node 3 has the minimum
value, i.e., 19. Therefore, the node 3 is selected. The nodes which are directly connected
to the node 3 are 2, 6, and 7. Since the nodes 2 and 6 have already been selected, we will
not consider them; only vertex 7 remains.

Now, we consider the vertex 7. Consider the vertex 3 as 'x' and 7 as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (19 + 9) < 21

= 28 < 21

Since 28 is not less than 21, we will not update d(7); it remains 21. All vertices have now been selected, so the final shortest distances from vertex 0 to vertices 0 through 8 are 0, 4, 12, 19, 8, 9, 11, 21, and 14 respectively.
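The trace above can be checked with a minimal heap-based sketch (Python, for illustration); the undirected edge weights below are the ones read off the example figure:

```python
import heapq

def dijkstra(adj, source):
    # adj maps each vertex to a list of (neighbor, weight) pairs.
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    pq = [(0, source)]                 # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                # stale heap entry, skip it
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:        # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Undirected edges (u, v, weight) as read from the example figure.
edges = [(0, 1, 4), (0, 4, 8), (1, 2, 8), (1, 4, 11), (4, 8, 7),
         (4, 5, 1), (5, 8, 15), (5, 6, 2), (6, 2, 4), (6, 3, 14),
         (6, 7, 10), (2, 8, 2), (2, 3, 7), (3, 7, 9)]
adj = {v: [] for v in range(9)}
for u, v, w in edges:
    adj[u].append((v, w))
    adj[v].append((u, w))

dist = dijkstra(adj, 0)
```

This yields the same final distances as the hand trace: 0, 4, 12, 19, 8, 9, 11, 21, 14 for vertices 0 through 8.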

Let's consider the directed graph.


Here, we consider A as the source vertex, so its entry is filled with 0 while the other
vertices are filled with ∞. The distance from the source vertex to itself is 0, and the
distance from the source vertex to the other vertices is initially ∞.

We will solve this problem using the below table:

  A B C D E

  0 ∞ ∞ ∞ ∞

Since 0 is the minimum value in the above table, we select vertex A and add it in
the second row, as shown below:

A B C D E

A 0 ∞ ∞ ∞ ∞

As we can observe in the above graph that there are two vertices directly connected
to the vertex A, i.e., B and C. The vertex A is not directly connected to the vertex E, i.e.,
the edge is from E to A. Here we can calculate the two distances, i.e., from A to B and
A to C. The same formula will be used as in the previous problem.

1. If(d(x) + c(x, y) < d(y))


2. Then we update d(y) = d(x) + c(x, y)

  A B C D E

A 0 ∞ ∞ ∞ ∞

    10 5 ∞ ∞

As we can observe in the third row that 5 is the lowest value so vertex C will be added
in the third row.
We have calculated the distance of vertices B and C from A. Now we will compare the
vertices to find the vertex with the lowest value. Since the vertex C has the minimum
value, i.e., 5 so vertex C will be selected.

Since the vertex C is selected, so we consider all the direct paths from the vertex C. The
direct paths from the vertex C are C to B, C to D, and C to E.

First, we consider the vertex B. We calculate the distance from C to B. Consider vertex
C as 'x' and vertex B as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (5 + 3) < ∞

=8<∞

Since 8 is less than the infinity so we update d(B) from ∞ to 8. Now the new row will
be inserted in which value 8 will be added under the B column.

  A B C D E

A 0 ∞ ∞ ∞ ∞

C   10 5 ∞ ∞

    8

We consider the vertex D. We calculate the distance from C to D. Consider vertex C as


'x' and vertex D as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (5 + 9) < ∞

= 14 < ∞

Since 14 is less than the infinity so we update d(D) from ∞ to 14. The value 14 will be
added under the D column.

  A B C D E

A 0 ∞ ∞ ∞ ∞

C   10 5 ∞ ∞

    8    14

We consider the vertex E. We calculate the distance from C to E. Consider vertex C as


'x' and vertex E as 'y'.

d(x, y) = d(x) + c(x, y) < d(y)

= (5 + 2) < ∞

= 7 < ∞

Since 7 is less than infinity, we update d(E) from ∞ to 7. The value 7 will be
added under the E column.

  A B C D E

A 0 ∞ ∞ ∞ ∞

C   10 5 ∞ ∞

    8    14 7

As we can observe in the above table that 7 is the minimum value among 8, 14, and 7.
Therefore, the vertex E is added on the left as shown in the below table:

  A B C D E

A 0 ∞ ∞ ∞ ∞

C   10 5 ∞ ∞

E   8    14 7

The vertex E is selected so we consider all the direct paths from the vertex E. The direct
paths from the vertex E are E to A and E to D. Since the vertex A is selected, so we will
not consider the path from E to A.

Consider the path from E to D.

d(x, y) = d(x) + c(x, y) < d(y)

= (7 + 6) < 14
= 13 < 14

Since 13 is less than 14, we update d(D) from 14 to 13. The value 13 will be
added under the D column.

  A B C D E

A 0 ∞ ∞ ∞ ∞

C   10 5 ∞ ∞

E   8    14 7

B   8    13

The value 8 is minimum among 8 and 13. Therefore, vertex B is selected. The direct
path from B is B to D.

d(x, y) = d(x) + c(x, y) < d(y)

= (8 + 1) < 13

= 9 < 13

Since 9 is less than 13 so we update d(D) from 13 to 9. The value 9 will be added under
the D column.

  A B C D E

A 0 ∞ ∞ ∞ ∞

C   10 5 ∞ ∞

E   8    14 7

B   8    13

D        9
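The same procedure can replay the table-based trace on this directed graph as a Python sketch. The edge from E to A is omitted here since its weight is not given in the text, and it cannot shorten any path that starts at A:

```python
import heapq

def dijkstra(adj, source):
    # adj maps each vertex to a list of (neighbor, weight) pairs.
    dist = {v: float('inf') for v in adj}
    dist[source] = 0
    pq = [(0, source)]                 # (distance, vertex) min-heap
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:                # stale heap entry, skip it
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:        # relax edge (u, v)
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

# Directed edges from the example: A->B:10, A->C:5, C->B:3,
# C->D:9, C->E:2, E->D:6, B->D:1.
adj = {'A': [('B', 10), ('C', 5)],
       'B': [('D', 1)],
       'C': [('B', 3), ('D', 9), ('E', 2)],
       'D': [],
       'E': [('D', 6)]}

dist = dijkstra(adj, 'A')
```

The result matches the final row of the table: A: 0, B: 8, C: 5, D: 9, E: 7.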
Dynamic Programming
Dynamic programming is a technique that breaks a problem into sub-problems
and saves their results for future use so that we do not need to compute them
again. The fact that the overall optimal solution is built from optimal solutions of the
subproblems is known as the optimal substructure property. The main use of dynamic
programming is to solve optimization problems, i.e., problems in which we are trying to
find the minimum or the maximum solution. Dynamic programming guarantees
finding the optimal solution of a problem if such a solution exists.

The definition of dynamic programming says that it is a technique for solving a


complex problem by first breaking it into a collection of simpler subproblems, solving
each subproblem just once, and then storing their solutions to avoid repetitive
computations.

Let's understand this approach through an example.

0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, …

The numbers in the above series are not randomly calculated. Mathematically, we
could write each of the terms using the below formula:

F(n) = F(n-1) + F(n-2),

With the base values F(0) = 0, and F(1) = 1. To calculate the other numbers, we follow
the above relationship. For example, F(2) is the sum of F(0) and F(1), which is equal to 1.

How can we calculate F(20)?


The F(20) term will be calculated using the recurrence above. The below figure shows
how F(20) is calculated.
As we can observe in the above figure that F(20) is calculated as the sum of F(19) and F(18).
In the dynamic programming approach, we try to divide the problem into the similar
subproblems. We are following this approach in the above case where F(20) into the similar
subproblems, i.e., F(19) and F(18). The definition of dynamic programming says that the
same subproblem should not be computed more than once. Still, in the plain recursive
approach above, subproblems are computed repeatedly: F(18) is calculated twice, and
similarly F(17) is calculated twice. The technique only pays off if we store each result the
first time we compute it; otherwise the repeated computation leads to a wastage of
resources.

In the above example, if we recompute F(18) in the right subtree, it leads to tremendous
usage of resources and decreases the overall performance.

The solution to the above problem is to save the computed results in an array. First, we calculate
F(16) and F(17) and save their values in an array. The F(18) is calculated by summing the
values of F(17) and F(16), which are already saved in an array. The computed value of F(18)
is saved in an array. The value of F(19) is calculated using the sum of F(18), and F(17), and
their values are already saved in an array. The computed value of F(19) is stored in an array.
The value of F(20) can be calculated by adding the values of F(19) and F(18), and the values
of both F(19) and F(18) are stored in an array. The final computed value of F(20) is stored in
an array.

How does the dynamic programming approach work?


The following are the steps that the dynamic programming follows:

o It breaks down the complex problem into simpler subproblems.


o It finds the optimal solution to these sub-problems.
o It stores the results of subproblems. The process of storing the
results of subproblems is known as memoization.
o It reuses them so that the same sub-problem is not calculated more than once.
o Finally, calculate the result of the complex problem.

The above five steps are the basic steps of dynamic programming. Dynamic
programming is applicable to problems that have the following properties:

Overlapping subproblems and optimal substructure.
Here, optimal substructure means that the solution of an optimization problem can be
obtained by simply combining the optimal solutions of all the subproblems.

In the case of dynamic programming, the space complexity would be increased as we


are storing the intermediate results, but the time complexity would be decreased.

Approaches of dynamic programming


There are two approaches to dynamic programming:

o Top-down approach
o Bottom-up approach

Top-down approach
The top-down approach follows the memoization technique, while the bottom-up
approach follows the tabulation method. Here memoization is equal to the sum of
recursion and caching. Recursion means calling the function itself, while caching means
storing the intermediate results.

Advantages

o It is very easy to understand and implement.


o It solves the subproblems only when it is required.
o It is easy to debug.

Disadvantages

It uses the recursion technique that occupies more memory in the call stack.
Sometimes when the recursion is too deep, the stack overflow condition will occur.

It occupies more memory that degrades the overall performance.


Let's understand dynamic programming through an example.

1. int fib(int n)
2. {
3. if(n<0)
4. error;
5. if(n==0)
6. return 0;
7. if(n==1)
8. return 1;
9. sum = fib(n-1) + fib(n-2);
10. return sum;
11. }

In the above code, we have used the recursive approach to find out the Fibonacci
series. When the value of 'n' increases, the function calls will also increase, and
computations will also increase. In this case, the time complexity increases
exponentially, and it becomes O(2^n).

One solution to this problem is to use the dynamic programming approach. Rather
than generating the recursive tree again and again, we can reuse the previously
calculated value. If we use the dynamic programming approach, then the time
complexity would be O(n).

When we apply the dynamic programming approach in the implementation of the


Fibonacci series, then the code would look like:

1. int memo[MAX];   // every entry initialized to NULL (not yet computed)

2. int fib(int n)
3. {
4. if(memo[n] != NULL)
5. return memo[n];
6. if(n<0)
7. error;
8. if(n==0)
9. return 0;
10. if(n==1)
11. return 1;
12. sum = fib(n-1) + fib(n-2);
13. memo[n] = sum;
14. return sum;
15. }

In the above code, we have used the memoization technique in which we store the
results in an array so that the values can be reused. This is also known as the top-down
approach, in which we move from the top and break the problem into sub-problems.
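The memoized (top-down) version can be sketched in runnable Python, using a dictionary as the cache:

```python
memo = {}

def fib(n):
    # Return the cached value if this subproblem was already solved.
    if n in memo:
        return memo[n]
    if n < 0:
        raise ValueError("n must be non-negative")
    if n <= 1:
        return n               # base cases F(0) = 0, F(1) = 1
    memo[n] = fib(n - 1) + fib(n - 2)
    return memo[n]
```

Each F(k) is computed once and then looked up, so fib(n) runs in O(n) time instead of the O(2^n) of the plain recursion.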

Bottom-Up approach
The bottom-up approach is also one of the techniques which can be used to
implement the dynamic programming. It uses the tabulation technique to implement
the dynamic programming approach. It solves the same kind of problems but it
removes the recursion. If we remove the recursion, there is no stack overflow issue and
no overhead of the recursive functions. In this tabulation technique, we solve the
problems and store the results in a matrix.

There are two ways of applying dynamic programming:

o Top-Down
o Bottom-Up

The bottom-up is the approach used to avoid the recursion, thus saving the memory
space. The bottom-up is an algorithm that starts from the beginning, whereas the
recursive algorithm starts from the end and works backward. In the bottom-up
approach, we start from the base case to find the answer for the end. As we know, the
base cases in the Fibonacci series are 0 and 1. Since the bottom-up approach starts from
the base cases, we will start from 0 and 1.

Key points

o We solve all the smaller sub-problems that will be needed to solve the larger
sub-problems then move to the larger problems using smaller sub-problems.
o We use for loop to iterate over the sub-problems.
o The bottom-up approach is also known as the tabulation or table filling method.

Let's understand through an example.

Suppose we have an array that has 0 and 1 values at a[0] and a[1] positions,
respectively shown as below:
Since the bottom-up approach starts from the lower values, so the values at a[0] and
a[1] are added to find the value of a[2] shown as below:

The value of a[3] will be calculated by adding a[1] and a[2], and it becomes 2 shown
as below:

The value of a[4] will be calculated by adding a[2] and a[3], and it becomes 3 shown
as below:

The value of a[5] will be calculated by adding the values of a[4] and a[3], and it becomes
5 shown as below:

The code for implementing the Fibonacci series using the bottom-up approach is given
below:

1. int fib(int n)
2. {
3. int A[n+1];
4. A[0] = 0; A[1] = 1;
5. for(i = 2; i <= n; i++)
6. {
7. A[i] = A[i-1] + A[i-2];
8. }
9. return A[n];
10. }

In the above code, base cases are 0 and 1 and then we have used for loop to find other
values of Fibonacci series.

Let's understand through the diagrammatic representation.

Initially, the first two values, i.e., 0 and 1 can be represented as:

When i=2 then the values 0 and 1 are added shown as below:

When i=3, the values 1 and 1 are added as shown below:
When i=4 then the values 2 and 1 are added shown as below:

When i=5, then the values 3 and 2 are added shown as below:
In the above case, we are starting from the bottom and reaching to the top.
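The table filling traced above can be sketched in Python:

```python
def fib(n):
    # Bottom-up (tabulation): fill the table from the base cases.
    if n <= 1:
        return n
    a = [0] * (n + 1)
    a[0], a[1] = 0, 1          # base cases F(0) = 0, F(1) = 1
    for i in range(2, n + 1):  # each entry uses the two below it
        a[i] = a[i - 1] + a[i - 2]
    return a[n]
```

No recursion is involved, so there is no risk of stack overflow for large n, and the running time is O(n).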

Matrix Chain Multiplication


Matrix chain multiplication is a method under dynamic programming in which the output
of one multiplication is taken as input for the next.

Here, "chain" means that the number of columns of each matrix always equals the number
of rows of the next matrix, so every product in the chain is defined.

In general:

If A = ⌊aij⌋ is a p x q matrix
B = ⌊bij⌋ is a q x r matrix
C = ⌊cij⌋ is a p x r matrix

Then C = A x B, where each entry is

cij = ai1 b1j + ai2 b2j + ... + aiq bqj

Given the matrices {A1, A2, A3, ..., An}, we have to perform the matrix
multiplication, which can be accomplished by a series of matrix multiplications

A1 x A2 x A3 x ... x An

Matrix multiplication is associative but not commutative. By this,
we mean that we have to follow the above matrix order for multiplication, but we are
free to parenthesize the multiplication depending upon our need.

In general, for 1 ≤ i ≤ p and 1 ≤ j ≤ r, each entry cij takes O(q) time to compute.
The total number of entries in matrix C is p·r, as the matrix is of dimension p x r,
so the total time to compute all entries of C = A x B is proportional
to the product of the dimensions, p·q·r.

It is also noticed that we can save the number of operations by reordering the
parenthesis.

Example1: Let us have 3 matrices, A1,A2,A3 of order (10 x 100), (100 x 5) and (5 x 50)
respectively.

Three Matrices can be multiplied in two ways:

1. A1 (A2 A3): first multiply A2 and A3, then multiply the result with A1.
2. (A1 A2) A3: first multiply A1 and A2, then multiply the result with A3.

Number of scalar multiplications in Case 1:

1. (100 x 5 x 50) + (10 x 100 x 50) = 25000 + 50000 = 75000

Number of scalar multiplications in Case 2:

1. (10 x 100 x 5) + (10 x 5 x 50) = 5000 + 2500 = 7500
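The two costs can be checked directly; multiplying a p x q matrix by a q x r matrix costs p·q·r scalar multiplications:

```python
def mult_cost(p, q, r):
    # Multiplying a (p x q) matrix by a (q x r) matrix takes p*q*r
    # scalar multiplications and yields a (p x r) matrix.
    return p * q * r

# A1: 10 x 100, A2: 100 x 5, A3: 5 x 50
case1 = mult_cost(100, 5, 50) + mult_cost(10, 100, 50)  # A1 x (A2 x A3)
case2 = mult_cost(10, 100, 5) + mult_cost(10, 5, 50)    # (A1 x A2) x A3
```

case1 evaluates to 75000 and case2 to 7500, confirming that the second parenthesization is ten times cheaper.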

To find the best possible way to calculate the product, we could simply parenthesize the
expression in every possible fashion and count each time how many scalar
multiplications are required.
Matrix Chain Multiplication Problem can be stated as "find the optimal
parenthesization of a chain of matrices to be multiplied such that the number of scalar
multiplication is minimized".

Number of ways for parenthesizing the matrices:

There are a very large number of ways of parenthesizing these matrices. If there are n
items, there are (n-1) ways in which the outermost pair of parentheses can be placed:

(A1) (A2,A3,A4,................An)
Or (A1,A2) (A3,A4 .................An)
Or (A1,A2,A3) (A4 ...............An)
........................

Or(A1,A2,A3.............An-1) (An)

It can be observed that after splitting at the kth matrix, we are left with two
parenthesized sequences of matrices: one consisting of 'k' matrices and the other
consisting of 'n-k' matrices.

Now if there are 'L' ways of parenthesizing the left sublist and 'R' ways of parenthesizing
the right sublist, then the total number of ways for that split is L·R.

Altogether, the number of parenthesizations is P(n) = C(n-1), where C(n) is the nth Catalan
number:

C(n) = (1/(n+1)) · (2n choose n)

On applying Stirling's formula we have

C(n) = Ω(4^n / n^(3/2))

which shows that the number of parenthesizations grows exponentially, since 4^n grows
much faster than n^(3/2).

Development of Dynamic Programming Algorithm


1. Characterize the structure of an optimal solution.
2. Define the value of an optimal solution recursively.
3. Compute the value of an optimal solution in a bottom-up fashion.
4. Construct the optimal solution from the computed information.

Dynamic Programming Approach


Let Ai...j be the result of multiplying matrices i through j. It can be seen that
Ai...j is a p(i-1) x pj matrix.

Dynamic Programming solution involves breaking up the problems into subproblems


whose solution can be combined to solve the global problem.

At the greatest level of parenthesization, we multiply two matrices

A1...n = A1...k x Ak+1...n

Thus we are left with two questions:

o How to split the sequence of matrices?


o How to parenthesize the subsequences A1...k and Ak+1...n?

One possible answer to the first question, for finding the best value of 'k', is to check all
possible choices of 'k' and consider the best among them. But checking all possible
complete parenthesizations leads to an exponential number of possibilities. It can also be
noticed that there exist only O(n^2) different subproducts Ai...j, so by solving each of
them only once we avoid the exponential growth.

Step1: Structure of an optimal parenthesization: Our first step in the dynamic


paradigm is to find the optimal substructure and then use it to construct an optimal
solution to the problem from an optimal solution to subproblems.

Let Ai....j where i≤ j denotes the matrix that results from evaluating the product

Ai Ai+1....Aj.

If i < j, then any parenthesization of the product Ai Ai+1 ... Aj must split the product
between Ak and Ak+1 for some integer k in the range i ≤ k < j. That is for some value
of k, we first compute the matrices Ai.....k & Ak+1....j and then multiply them together to
produce the final product Ai....j. The cost of computing Ai....k plus the cost of computing
Ak+1....j plus the cost of multiplying them together is the cost of parenthesization.

Step 2: A Recursive Solution: Let m [i, j] be the minimum number of scalar


multiplication needed to compute the matrixAi....j.
If i=j the chain consist of just one matrix Ai....i=Ai so no scalar multiplication are
necessary to compute the product. Thus m [i, j] = 0 for i= 1, 2, 3....n.

If i < j, we assume that to optimally parenthesize the product we split it between Ak and
Ak+1, where i ≤ k < j. Then m[i, j] equals the minimum cost of computing the
subproducts Ai...k and Ak+1...j, plus the cost of multiplying them together. Since each
matrix Ai has dimension p(i-1) x pi, computing the product of Ai...k and Ak+1...j takes
p(i-1) pk pj scalar multiplications, and we obtain

m[i, j] = m[i, k] + m[k + 1, j] + p(i-1) pk pj

There are only (j - i) possible values for 'k', namely k = i, i+1, ..., j-1. Since the optimal
parenthesization must use one of these values for 'k', we need only check them all to
find the best.

So the minimum cost of parenthesizing the product Ai Ai+1 ... Aj becomes

m[i, j] = 0                                                      if i = j
m[i, j] = min over i ≤ k < j of { m[i, k] + m[k + 1, j] + p(i-1) pk pj }   if i < j

To construct an optimal solution, let us define s [i, j] to be the value of 'k' at which we
split the product Ai Ai+1 .....Aj to obtain an optimal parenthesization, i.e. s [i, j] = k
such that

m [i,j] = m [i, k] + m [k + 1, j] + pi-1 pk pj
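The recurrence above can be written directly as a short recursive function. This is only a sketch of the recurrence (it runs in exponential time without memoization); `p` is the dimension array, so that matrix Ai has dimensions p[i-1] x p[i]:

```python
def mcm_cost(p, i, j):
    """Minimum scalar multiplications to compute A_i ... A_j,
    where A_k has dimensions p[k-1] x p[k] (1-based chain indices)."""
    if i == j:                     # a single matrix needs no multiplication
        return 0
    # try every split point k in i..j-1 and keep the cheapest
    return min(mcm_cost(p, i, k) + mcm_cost(p, k + 1, j) + p[i - 1] * p[k] * p[j]
               for k in range(i, j))

# A_1 is 10x100, A_2 is 100x5, A_3 is 5x50:
print(mcm_cost([10, 100, 5, 50], 1, 3))   # → 7500, via (A1 A2) A3
```

Here the split at k = 2 would cost 10·100·50 + 100·5·50 = 75000, while k = 1 costs 10·100·5 + 10·5·50 = 7500, so the minimum correctly picks (A1 A2) A3.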

Example of Matrix Chain Multiplication


Example: We are given the sequence {4, 10, 3, 12, 20, 7}. The matrices have sizes 4
x 10, 10 x 3, 3 x 12, 12 x 20, 20 x 7. We need to compute M [i, j], 1 ≤ i ≤ j ≤ 5. We know M
[i, i] = 0 for all i.
Let us proceed with working away from the diagonal. We compute the optimal solution
for the product of 2 matrices.

Here p0 to p5 are the dimension entries, and M1 to M5 are matrices, where Mi has size pi-1 x pi.

On the basis of this sequence, we apply the formula m [i, j] = m [i, k] + m [k + 1, j] + pi-1 pk pj.


In dynamic programming, every table entry is initialized to '0', and the table is filled in
diagonal by diagonal.

We have to work out all the combinations, but only the minimum-cost combination is taken
into consideration.

Calculation of Product of 2 matrices:


1. m (1, 2) = M1 x M2
   = (4 x 10) x (10 x 3)
   = 4 x 10 x 3 = 120

2. m (2, 3) = M2 x M3
   = (10 x 3) x (3 x 12)
   = 10 x 3 x 12 = 360

3. m (3, 4) = M3 x M4
   = (3 x 12) x (12 x 20)
   = 3 x 12 x 20 = 720

4. m (4, 5) = M4 x M5
   = (12 x 20) x (20 x 7)
   = 12 x 20 x 7 = 1680

o We initialize each diagonal element with i = j to '0'.
o After that the second diagonal is worked out, and we get all the values corresponding
to it.

Now the third diagonal will be solved in the same way.

Now product of 3 matrices:

M [1, 3] = M1 M2 M3

1. There are two cases by which we can solve this multiplication: (M1 x M2) x M3 and
M1 x (M2 x M3)
2. After solving both cases we choose the case in which the minimum cost occurs.

M [1, 3] = 264

Comparing both outputs, 264 is the minimum, so we insert 264 in the table,
and the combination (M1 x M2) x M3 is chosen for the output.

M [2, 4] = M2 M3 M4
1. There are two cases by which we can solve this multiplication: (M2 x M3) x M4 and
M2 x (M3 x M4)
2. After solving both cases we choose the case in which the minimum cost occurs.

M [2, 4] = 1320

Comparing both outputs, 1320 is the minimum, so we insert 1320 in the table,
and the combination M2 x (M3 x M4) is chosen for the output.

M [3, 5] = M3 M4 M5

1. There are two cases by which we can solve this multiplication: (M3 x M4) x M5 and
M3 x (M4 x M5)
2. After solving both cases we choose the case in which the minimum cost occurs.

M [3, 5] = 1140

Comparing both outputs, 1140 is the minimum, so we insert 1140 in the table,
and the combination (M3 x M4) x M5 is chosen for the output.

Now Product of 4 matrices:

M [1, 4] = M1 M2 M 3 M4

There are three cases by which we can solve this multiplication, one for each split point k = 1, 2, 3:

1. M1 x (M2 x M3 x M4)
2. (M1 x M2) x (M3 x M4)
3. (M1 x M2 x M3) x M4

After solving these cases we choose the case in which the minimum cost occurs.

M [1, 4] = 1080

Comparing the outputs of the different cases, 1080 is the minimum, so we insert 1080
in the table, and the combination (M1 x M2) x (M3 x M4) is chosen for the output.

M [2, 5] = M2 M3 M4 M5

There are three cases by which we can solve this multiplication:

1. M2 x (M3 x M4 x M5)
2. (M2 x M3) x (M4 x M5)
3. (M2 x M3 x M4) x M5

After solving these cases we choose the case in which the minimum cost occurs.

M [2, 5] = 1350

Comparing the outputs of the different cases, 1350 is the minimum, so we insert 1350
in the table, and the combination M2 x (M3 x M4 x M5) is chosen for the output.

Now Product of 5 matrices:

M [1, 5] = M1 M2 M 3 M4 M5
There are four cases by which we can solve this multiplication, one for each split point k = 1, 2, 3, 4:

1. M1 x (M2 x M3 x M4 x M5)
2. (M1 x M2) x (M3 x M4 x M5)
3. (M1 x M2 x M3) x (M4 x M5)
4. (M1 x M2 x M3 x M4) x M5

After solving these cases we choose the case in which the minimum cost occurs.

M [1, 5] = 1344

Comparing the outputs of the different cases, 1344 is the minimum, so we insert 1344
in the table, and the combination (M1 x M2) x (M3 x M4 x M5) is chosen for the output.

Final Output is:
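The table values computed above can be cross-checked with a short bottom-up sketch. The helper name `matrix_chain_table` is ours; it fills the m table for the dimension sequence p = [4, 10, 3, 12, 20, 7] from this example:

```python
def matrix_chain_table(p):
    """Bottom-up m table; m[i][j] = min scalar multiplications for A_i..A_j,
    where A_k has dimensions p[k-1] x p[k] (1-based chain indices)."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = min(m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return m

m = matrix_chain_table([4, 10, 3, 12, 20, 7])
print(m[1][3], m[2][4], m[3][5], m[1][4], m[2][5], m[1][5])
# → 264 1320 1140 1080 1350 1344
```

These match the hand-computed diagonals: 264, 1320, 1140 for chains of three matrices, 1080 and 1350 for chains of four, and 1344 for the full product.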

Step 3: Computing Optimal Costs: Let us assume that matrix Ai has dimension pi-1 x
pi for i = 1, 2, 3....n. The input is a sequence (p0, p1,......pn), where length [p] = n + 1. The
procedure uses an auxiliary table m [1....n, 1.....n] for storing the m [i, j] costs and an auxiliary
table s [1.....n, 1.....n] that records which index k achieved the optimal cost in
computing m [i, j].

The algorithm first computes m [i, i] ← 0 for i = 1, 2, 3.....n, the minimum costs for
chains of length 1.
Algorithm of Matrix Chain Multiplication
MATRIX-CHAIN-ORDER (p)

1. n ← length[p]-1
2. for i ← 1 to n
3. do m [i, i] ← 0
4. for l ← 2 to n // l is the chain length
5. do for i ← 1 to n-l + 1
6. do j ← i+ l -1
7. m[i,j] ← ∞
8. for k ← i to j-1
9. do q ← m [i, k] + m [k + 1, j] + pi-1 pk pj
10. If q < m [i,j]
11. then m [i,j] ← q
12. s [i,j] ← k
13. return m and s.

We will use table s to construct an optimal solution.

Step 4: Constructing an Optimal Solution:

PRINT-OPTIMAL-PARENS (s, i, j)
1. if i=j
2. then print "A"
3. else print "("
4. PRINT-OPTIMAL-PARENS (s, i, s [i, j])
5. PRINT-OPTIMAL-PARENS (s, s [i, j] + 1, j)
6. print ")"
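The two procedures above translate almost line for line into Python. As a hedged sketch (our own function names; the parenthesization is returned as a string rather than printed), using the second worked example p = {7, 1, 5, 4, 2} from later in this section:

```python
def matrix_chain_order(p):
    """MATRIX-CHAIN-ORDER: returns (m, s) with m[i][j] = minimal cost
    and s[i][j] = optimal split point, 1-based chain indices."""
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                      # l is the chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = float('inf')
            for k in range(i, j):                  # k = i .. j-1
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j] = q
                    s[i][j] = k
    return m, s

def optimal_parens(s, i, j):
    """PRINT-OPTIMAL-PARENS, returning the string instead of printing."""
    if i == j:
        return "A%d" % i
    return "(%s%s)" % (optimal_parens(s, i, s[i][j]),
                       optimal_parens(s, s[i][j] + 1, j))

m, s = matrix_chain_order([7, 1, 5, 4, 2])
print(m[1][4], optimal_parens(s, 1, 4))   # → 42 (A1((A2A3)A4))
```

The s table drives the reconstruction: s[1][4] = 1 splits off A1, and s[2][4] = 3 groups (A2 A3) before multiplying by A4.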

Analysis: There are three nested loops. Each loop executes at most n times.

1. l, length, O (n) iterations.


2. i, start, O (n) iterations.
3. k, split point, O (n) iterations

The body of the inner loop has constant complexity.

Total complexity: O(n3)

Algorithm with Explained Example


Question: P = {7, 1, 5, 4, 2}

Solution: Here, P is the array of dimensions of the matrices.


So here we will have 4 matrices:

A1: 7 x 1, A2: 1 x 5, A3: 5 x 4, A4: 4 x 2


i.e.
First matrix A1 has dimension 7 x 1
Second matrix A2 has dimension 1 x 5
Third matrix A3 has dimension 5 x 4
Fourth matrix A4 has dimension 4 x 2

Let us say,
from P = {7, 1, 5, 4, 2} (given),
the dimension entries are
p0 = 7, p1 = 1, p2 = 5, p3 = 4, p4 = 2.
Length of array P = number of elements in P
∴ length (p) = 5
From step 3
Follow the steps in Algorithm in Sequence
According to Step 1 of Algorithm Matrix-Chain-Order

Step 1:

n ← length [p]-1
Where n is the total number of elements
And length [p] = 5
∴ n = 5 - 1 = 4
n=4
Now we construct two tables m and s.
Table m has dimension [1.....n, 1.......n]
Table s has dimension [1.....n-1, 2.......n]
Now, according to step 2 of Algorithm

1. for i ← 1 to n
2. this means: for i ← 1 to 4 (because n =4)
3. for i=1
4. m [i, i]=0
5. m [1, 1]=0
6. Similarly for i = 2, 3, 4
7. m [2, 2] = m [3,3] = m [4,4] = 0
8. i.e. fill all the diagonal entries "0" in the table m
9. Now,
10. l ← 2 to n
11. l ← 2 to 4 (because n =4 )

Case 1:

1. When l = 2
for (i ← 1 to n - l + 1)
i ← 1 to 4 - 2 + 1
i ← 1 to 3

When i = 1
do j ← i + l - 1
j ← 1 + 2 - 1
j ← 2
i.e. j = 2
Now, m [i, j] ← ∞
i.e. m [1,2] ← ∞
Put ∞ in m [1, 2] table
for k ← i to j-1
k ← 1 to 2 - 1
k ← 1 to 1
k=1
Now q ← m [i, k] + m [k + 1, j] + pi-1 pk pj
for l = 2
i = 1
j =2
k = 1
q ← m [1,1] + m [2,2] + p0 x p1 x p2
and m [1,1] = 0, m [2,2] = 0
∴ q ← 0 + 0 + 7 x 1 x 5
q ← 35
We have m [i, j] = m [1, 2] = ∞
Comparing q with m [1, 2]
q < m [i, j]
i.e. 35 < m [1, 2]
35 < ∞
True
then, m [1, 2 ] ← 35 (∴ m [i,j] ← q)
s [1, 2] ← k
and the value of k = 1
s [1,2 ] ← 1
Insert "1" at dimension s [1, 2] in table s. And 35 at m [1, 2]

2. l remains 2

L = 2
i ← 1 to n - l + 1
i ← 1 to 4 - 2 + 1
i ← 1 to 3
for i = 1 done before
Now value of i becomes 2
i=2
j ← i + l - 1
j ← 2 + 2 - 1
j ← 3
j = 3
m [i , j] ← ∞
i.e. m [2,3] ← ∞
Initially insert ∞ at m [2, 3]
Now, for k ← i to j - 1
k ← 2 to 3 - 1
k ← 2 to 2
i.e. k =2
Now, q ← m [i, k] + m [k + 1, j] + p i-1 pk pj
For l =2
i = 2
j = 3
k = 2
q ← m [2, 2] + m [3, 3] + p1x p2 x p3
q ← 0 + 0 + 1 x 5 x 4
q ← 20
Compare q with m [i ,j ]
If q < m [i,j]
i.e. 20 < m [2, 3]
20 < ∞
True
Then m [i,j ] ← q
m [2, 3 ] ← 20
and s [2, 3] ← k
and k = 2
s [2,3] ← 2

3. Now i become 3

i = 3
l = 2
j ← i + l - 1
j ← 3 + 2 - 1
j ← 4
j=4
Now, m [i, j ] ← ∞
m [3,4 ] ← ∞
Insert ∞ at m [3, 4]
for k ← i to j - 1
k ← 3 to 4 - 1
k ← 3 to 3
i.e. k = 3
Now, q ← m [i, k] + m [k + 1, j] + pi-1 pk pj
i = 3
l = 2
j = 4
k = 3
q ← m [3, 3] + m [4,4] + p2 x p3 x p4
q ← 0 + 0 + 5 x 4 x 2
q ← 40
Compare q with m [i, j]
If q < m [i, j]
40 < m [3, 4]
40 < ∞
True
Then, m [i,j] ← q
m [3,4] ← 40
and s [3,4] ← k
s [3,4] ← 3

Case 2: l becomes 3

L = 3
for i = 1 to n - l + 1
i = 1 to 4 - 3 + 1
i = 1 to 2
When i = 1
j ← i + l - 1
j ← 1 + 3 - 1
j ← 3
j=3
Now, m [i,j] ← ∞
m [1, 3] ← ∞
for k ← i to j - 1
k ← 1 to 3 - 1
k ← 1 to 2

Now we compute q for both k = 1 and k = 2. The minimum of the two will be placed
in m [i,j], and the corresponding k in s [i,j].

For k = 1:
q ← m [1,1] + m [2,3] + p0 x p1 x p3
q ← 0 + 20 + 7 x 1 x 4 = 48
For k = 2:
q ← m [1,2] + m [3,3] + p0 x p2 x p3
q ← 35 + 0 + 7 x 5 x 4 = 175

The value of q is minimum for k = 1,
and 48 < ∞
∴ m [i, j] ← q
m [1,3] ← 48
and s [i,j] ← k
s [1, 3] ← 1
Now i become 2
i = 2
l = 3
then j ← i + l -1
j ← 2 + 3 - 1
j ← 4
j=4
so m [i,j] ← ∞
m [2,4] ← ∞
Insert initially ∞ at m [2, 4]
for k ← i to j-1
k ← 2 to 4 - 1
k ← 2 to 3

Here also we find the minimum value of q for the two values k = 2 and k = 3:

For k = 2:
q ← m [2,2] + m [3,4] + p1 x p2 x p4
q ← 0 + 40 + 1 x 5 x 2 = 50
For k = 3:
q ← m [2,3] + m [4,4] + p1 x p3 x p4
q ← 20 + 0 + 1 x 4 x 2 = 28

Since 28 < 50 and 28 < ∞,
m [2, 4] ← 28
and s [2, 4] ← 3
i.e. in the s table at s [2,4] insert 3, and at m [2,4] insert 28.

Case 3: l becomes 4

L = 4
For i ← 1 to n-l + 1
i ← 1 to 4 - 4 + 1
i ← 1
i=1
do j ← i + l - 1
j ← 1 + 4 - 1
j ← 4
j=4
Now m [i,j] ← ∞
m [1,4] ← ∞
for k ← i to j -1
k ← 1 to 4 - 1
k ← 1 to 3
When k = 1
q ← m [i, k] + m [k + 1, j] + pi-1 pk pj
q ← m [1,1] + m [2,4] + p0 x p1 x p4
q ← 0 + 28 + 7 x 1 x 2
q ← 42
Compare q and m [i, j]
m [i,j] was ∞
i.e. m [1,4]
if q < m [1,4]
42< ∞
True
Then m [i,j] ← q
m [1,4] ← 42
and s [1,4] ← 1 (since k = 1)
When k = 2
L = 4, i=1, j = 4
q ← m [i, k] + m [k + 1, j] + pi-1 pk pj
q ← m [1, 2] + m [3,4] + p0 xp2 xp4
q ← 35 + 40 + 7 x 5 x 2
q ← 145
Compare q and m [i,j]
Now m [1,4] contains 42.
Is q < m [1, 4]?
But 145 is not less than 42,
so no change occurs.
When k = 3
l = 4
i = 1
j = 4
q ← m [i, k] + m [k + 1, j] + pi-1 pk pj
q ← m [1, 3] + m [4,4] + p0 x p3 x p4
q ← 48 + 0 + 7 x 4 x 2
q ← 104
Again, q is not less than m [i, j],
i.e. 104 is not less than m [1, 4]
104 is not less than 42

So no change occurs. So the value of m [1, 4] remains 42. And value of s [1, 4] = 1

Now we will make use of only s table to get an optimal solution.


Floyd-Warshall Algorithm
Let the vertices of G be V = {1, 2........n} and consider a subset {1, 2........k} of vertices for
some k. For any pair of vertices i, j ∈ V, consider all paths from i to j whose
intermediate vertices are all drawn from {1, 2.......k}, and let p be a minimum-weight
path among them. The Floyd-Warshall algorithm exploits a link between path
p and shortest paths from i to j with all intermediate vertices in the set {1, 2.......k-1}.
The link depends on whether or not k is an intermediate vertex of path p.

If k is not an intermediate vertex of path p, then all intermediate vertices of path p are
in the set {1, 2........k-1}. Thus, the shortest path from vertex i to vertex j with all
intermediate vertices in the set {1, 2.......k-1} is also the shortest path i to j with all
intermediate vertices in the set {1, 2.......k}.

If k is an intermediate vertex of path p, then we break p down into a subpath p1 from i to k and a subpath p2 from k to j, each of whose intermediate vertices lie in {1, 2.......k-1}.

Let dij(k) be the weight of the shortest path from vertex i to vertex j with all intermediate
vertices in the set {1, 2.......k}.


A recursive definition is given by

dij(k) = wij                                        if k = 0
dij(k) = min (dij(k-1), dik(k-1) + dkj(k-1))        if k ≥ 1

FLOYD-WARSHALL (W)

1. n ← rows [W]
2. D(0) ← W
3. for k ← 1 to n
4. do for i ← 1 to n
5. do for j ← 1 to n
6. do dij(k) ← min (dij(k-1), dik(k-1) + dkj(k-1))
7. return D(n)

The strategy adopted by the Floyd-Warshall algorithm is dynamic programming. The
running time of the Floyd-Warshall algorithm is determined by the triply nested for
loops of lines 3-6. Each execution of line 6 takes O(1) time, so the algorithm runs in
time θ(n3).
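The pseudocode translates almost directly into Python. Since the example graph of this section appears only as a figure, the weight matrix below is an assumed small 4-vertex digraph (∞ marks a missing edge); the point is the structure of the triple loop:

```python
INF = float('inf')

def floyd_warshall(W):
    """Returns the matrix of shortest-path weights.
    W[i][j] is the weight of edge (i, j), or INF if absent (0 on the diagonal)."""
    n = len(W)
    D = [row[:] for row in W]          # D(0) = W
    for k in range(n):                 # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                if D[i][k] + D[k][j] < D[i][j]:
                    D[i][j] = D[i][k] + D[k][j]
    return D

# assumed example graph, not the one in the figure
W = [[0,   3,   INF, 7],
     [8,   0,   2,   INF],
     [5,   INF, 0,   1],
     [2,   INF, INF, 0]]
for row in floyd_warshall(W):
    print(row)
```

Updating D in place is safe here because row k and column k do not change during iteration k (d(k)ik = d(k-1)ik and d(k)kj = d(k-1)kj).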

Example: Apply the Floyd-Warshall algorithm to construct the shortest paths. Show
the matrices D(k) and π(k) computed by the Floyd-Warshall algorithm for the graph.
Solution:

Step (i) When k = 0

Step (ii) When k =1


Step (iii) When k = 2
Step (iv) When k = 3
Step (v) When k = 4
Step (vi) When k = 5
TRANSITIVE-CLOSURE (G)
1. n ← |V[G]|
2. for i ← 1 to n
3. do for j ← 1 to n
4. do if i = j or (i, j) ∈ E [G]
5. then tij(0) ← 1
6. else tij(0) ← 0
7. for k ← 1 to n
8. do for i ← 1 to n
9. do for j ← 1 to n
10. do tij(k) ← tij(k-1) ∨ (tik(k-1) ∧ tkj(k-1))
11. return T(n)
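Replacing min/+ with OR/AND in the Floyd-Warshall triple loop gives the transitive closure. A sketch on an assumed 4-vertex digraph (the edge set below is chosen purely for illustration, with 0-based vertex labels):

```python
def transitive_closure(n, edges):
    """t[i][j] = 1 iff vertex j is reachable from vertex i.
    Vertices are 0..n-1; edges is a set of (i, j) pairs."""
    # t(0): 1 on the diagonal and wherever an edge exists
    t = [[1 if i == j or (i, j) in edges else 0 for j in range(n)]
         for i in range(n)]
    for k in range(n):                 # allow vertex k as an intermediate vertex
        for i in range(n):
            for j in range(n):
                t[i][j] = t[i][j] or (t[i][k] and t[k][j])
    return t

# assumed example: edges 1->2, 1->3, 2->1, 3->0, 3->2
edges = {(1, 2), (1, 3), (2, 1), (3, 0), (3, 2)}
for row in transitive_closure(4, edges):
    print(row)
```

Here vertices 1, 2, and 3 all lie on cycles that eventually reach vertex 0, so their rows are all 1s, while vertex 0 (with no outgoing edges) reaches only itself.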
