Lab Manual of
Analysis and Design of Algorithms Lab
(BCSL404)
Program Outcomes
Subject: Analysis and Design of Algorithms Lab (ADAL) Subject Code: BCSL404
Analysis and Design of Algorithms is a fundamental aspect of computer science
that involves creating efficient solutions to computational problems and evaluating
their performance. ADA focuses on designing algorithms that effectively address
specific challenges and analyzing their efficiency in terms of time and space
complexity. All the algorithms have to be implemented either writing C programs or
writing C++ programs.
Course Objectives
➢ To design and implement various algorithms in C/C++ programming using
suitable development tools to address different computational challenges.
➢ To apply diverse design strategies for effective problem-solving.
➢ To measure and compare the performance of different algorithms to determine
their efficiency and suitability for specific tasks.
Course Outcomes
COs Description
C241.1 Analyze and apply various algorithm design strategies to solve computational problems.
C241.2 Develop a C/C++ program to implement various algorithms to solve problems using modern tools and document the same with appropriate oral justification.
C241.3 Demonstrate and evaluate the runtime performance of different algorithm design approaches for the given data.
Maharaja Institute of Technology Mysore
Belawadi, Srirangapatna Tq, Mandya-571477
Department of Computer Science and Engineering
DO’S
➢ Sign the log book when you enter/leave the laboratory.
➢ Read the handout/procedure before starting the experiment. If you do not
understand the procedure, clarify with the concerned staff.
➢ Report any problem in system (if any) to the person in-charge.
➢ After the lab session, shut down the computers.
➢ All students in the laboratory should follow the directions given by staff/lab
technical staff.
DON’TS
➢ Do not insert metal objects such as pins, needles, or clips into the computer
casing. They may cause a short circuit or fire.
➢ Do not open any irrelevant websites in labs.
➢ Do not use flash drive on laboratory computers without the consent of lab
instructor.
➢ Do not upload, delete or alter any software/ system files on laboratory
computers.
➢ Students are not allowed to work in laboratory alone or without presence of
the teaching staff/ instructor.
➢ Do not change the system settings and keyboard keys.
➢ Do not damage any hardware.
MAHARAJA INSTITUTE OF TECHNOLOGY MYSORE
BELAWADI, SRIRANGAPATNA Tq, MANDYA-571477
DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING
2023-24 (Even) – 4th Semester
Description:
A spanning tree of a connected graph is its connected acyclic subgraph (i.e., a
tree) that contains all the vertices of the graph.
A minimum spanning tree of a weighted connected graph is its spanning tree
of the smallest weight, where the weight of a tree is defined as the sum of the
weights on all its edges.
The minimum spanning tree problem is the problem of finding a minimum
spanning tree for a given weighted connected graph.
Example: Figure below shows the complete graph on four nodes together with
three of its spanning trees
Algorithm:
Steps
1. Sort all the edges in non-decreasing order of their weight.
2. Pick the smallest edge. Check if it forms a cycle with the spanning tree
formed so far. If the cycle is not formed, include this edge. Else, discard it.
3. Repeat step#2 until there are (V-1) edges in the spanning tree.
Program:
/* Program to implement Kruskal’s Algorithm */
#include<stdio.h>
#include<stdlib.h>
int i,j,k,a,b,u,v,n,ne=1;
int min,mincost=0,cost[9][9],parent[9];
int find(int);
int uni(int,int);
void main()
{
printf("\n\t Implementation of Kruskal's algorithm\n");
printf("\nEnter the no. of vertices:");
scanf("%d",&n);
Input Graph:
Output 1:
Minimum cost = 99
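The Kruskal listing above breaks off after reading the number of vertices. The following is only a completion sketch, not the original listing: it keeps the same global names (cost, parent, find, uni, mincost), assumes the graph is connected, and assumes 999 is entered for a missing edge.

/* Completion sketch of Kruskal's algorithm (999 = no edge, connected graph assumed) */
#include <stdio.h>

int cost[9][9], parent[9];
int n, mincost = 0;

int find(int i)                       /* follow parent links up to the set root */
{
    while (parent[i])
        i = parent[i];
    return i;
}

int uni(int i, int j)                 /* merge two different sets; reject if already same set */
{
    if (i != j) {
        parent[j] = i;
        return 1;
    }
    return 0;
}

int main(void)
{
    int i, j, a, b, u, v, min, ne = 1;

    printf("Enter the no. of vertices: ");
    scanf("%d", &n);
    printf("Enter the cost adjacency matrix (999 for no edge):\n");
    for (i = 1; i <= n; i++)
        for (j = 1; j <= n; j++) {
            scanf("%d", &cost[i][j]);
            if (cost[i][j] == 0)
                cost[i][j] = 999;
        }

    while (ne < n) {                  /* an MST of n vertices has n-1 edges */
        for (i = 1, min = 999; i <= n; i++)       /* pick the cheapest remaining edge */
            for (j = 1; j <= n; j++)
                if (cost[i][j] < min) {
                    min = cost[i][j];
                    a = u = i;
                    b = v = j;
                }
        u = find(u);
        v = find(v);
        if (uni(u, v)) {              /* accept the edge only if it joins two components */
            printf("%d edge (%d,%d) = %d\n", ne++, a, b, min);
            mincost += min;
        }
        cost[a][b] = cost[b][a] = 999;   /* remove the edge from further consideration */
    }
    printf("Minimum cost = %d\n", mincost);
    return 0;
}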
Performance Analysis
Kruskal's algorithm has a time complexity of O(E log E), equivalently O(E log V), where E is
the number of edges and V is the number of vertices; the running time is dominated by
sorting the edges.
Prim's algorithm starts with an empty spanning tree. The idea is to maintain
two sets of vertices: the first set contains the vertices already included in the
MST, and the other set contains the vertices not yet included. At every step, it
considers all the edges that connect the two sets and picks the minimum-weight
edge among them. After picking the edge, it moves the other endpoint of the edge
into the MST set.
Algorithm:
Step 1: Determine an arbitrary vertex as the starting vertex of the Minimum
Cost Spanning Tree.
Step 2: Follow steps 3 to 5 till there are vertices that are not included in the
Minimum Cost Spanning Tree (known as fringe vertex).
Step 3: Find edges connecting any tree vertex with the fringe vertices.
Step 4: Find the minimum among these edges.
Step 5: Add the chosen edge to the Minimum Cost Spanning Tree if it does not
form any cycle.
Step 6: Return the Minimum Cost Spanning Tree and exit.
Example:
Input Graph:
Program:
#include<stdio.h>
int n,cost[10][10],temp,nears[10];
void readv();
void primsalg();
void readv()
{
int i,j;
printf("\n Enter the No of nodes or vertices:");
scanf("%d",&n);
printf("\n Enter the Cost Adjacency matrix of the given graph: \n");
for(i=1;i<=n;i++)
for(j=1;j<=n;j++)
scanf("%d",&cost[i][j]);
}
int findnextindex(int cost[10][10],int nears[10]);
void primsalg()
{
int i,j,k,t[10][3],mincost=0;
for(i=1;i<=n;i++)
nears[i]=1; /* initially vertex 1 is the nearest tree vertex for every vertex */
nears[1]=0; /* vertex 1 itself is already in the tree */
for(i=1;i<n;i++)
{
j = findnextindex(cost,nears);
t[i][1]=j;
t[i][2]=nears[j];
printf("\n(%d,%d)-->%d",t[i][1],t[i][2],cost[j][nears[j]]);
mincost=mincost+cost[j][nears[j]];
nears[j]=0;
for(k=1;k<=n;k++)
{
if(nears[k]!=0 && cost[k][nears[k]]>cost[k][j])
{
nears[k]=j;
}
}
}
printf("\n The Required Mincost of the Spanning Tree is:%d",mincost);
}
int findnextindex(int cost[10][10],int nears[10])
{
int min=999,a,k,p;
for(a=1;a<=n;a++)
{
p=nears[a];
if(p!=0)
{
if(cost[a][p]<min)
{
min=cost[a][p];
k=a;
}
}
}
return k;
}
void main()
{
readv();
primsalg();
}
Input Graph:
Output:
Enter the No of nodes or vertices:4
Algorithm:
ALGORITHM Floyd(W[1..n, 1..n])
//Implements Floyd's algorithm for the all-pairs shortest-paths problem
//Input: The weight matrix W of a graph with no negative-length cycle.
//Output: The distance matrix of the shortest paths' lengths.
D←W // is not necessary if W can be overwritten
for k ← 1 to n do
for i ← 1 to n do
for j ← 1 to n do
D[i, j] ← min{ D[i, j], D[i, k] + D[k, j]}
return D
Example:
Input Graph
Program:
/* Program to find all pair shortest path. */
#include<stdio.h>
void readf();
void amin();
int cost[20][20],a[20][20];
int i,j,k,n;
void readf()
{
printf("\n Enter the number of vertices :");
scanf("%d",&n);
printf("\n Enter the weighted matrix - 999 for infinity:");
for(i=0;i<n;i++)
{
for(j=0;j<n;j++)
{
scanf("%d",&cost[i][j]);
if(cost[i][j]==0 && (i!=j))
cost[i][j]=999;
a[i][j]=cost[i][j];
}
}
}
void amin()
{
for(k=0;k<n;k++)
{
for(i=0;i<n;i++)
{
for(j=0;j<n;j++)
{
if(a[i][j]>a[i][k]+a[k][j])
{
a[i][j]=a[i][k]+a[k][j];
}
}
}
}
printf("\n The All pair shortest path is:");
for(i=0;i<n;i++)
{
printf("\n");
for(j=0;j<n;j++)
{
printf("%d\t",a[i][j]);
}
}
}
void main()
{
readf();
amin();
}
INPUT GRAPH:
OUTPUT:
Enter the number of vertices:
4
Enter the weighted matrix - 999 for infinity :
0 999 3 999
2 0 999 999
999 7 0 1
6 999 999 0
Time Complexity: O(V³), where V is the number of vertices in the graph and
we run three nested loops each of size V
Program 3.b: Design and implement C/C++ Program to find the transitive
closure using Warshal's algorithm.
Description:
Given a directed graph, determine if a vertex j is reachable from another
vertex i for all vertex pairs (i, j) in the given graph. Here reachable means that there
is a path from vertex i to j. The reach-ability matrix is called the transitive closure
of a graph.
The graph is given in the form of adjacency matrix say ‘graph[V][V]’ where
graph[i][j] is 1 if there is an edge from vertex i to vertex j or i is equal to j,
otherwise graph[i][j] is 0.
The Floyd-Warshall algorithm can be used: compute the distance matrix
dist[V][V] with Floyd-Warshall; if dist[i][j] is infinite, then j is not reachable from i.
Otherwise, j is reachable and the value of dist[i][j] will be less than V.
Example:
Program:
/* Program to find the transitive closure using Warshal's algorithm.*/
#include<stdio.h>
#include<math.h>
void warshal(int p[10][10], int n)
{
int i, j, k;
for (k = 1; k <= n; k++)
for (i = 1; i <= n; i++)
for (j = 1; j <= n; j++)
p[i][j] = p[i][j] || (p[i][k] && p[k][j]);
}
void main()
{
int p[10][10] = { 0 }, n, e, u, v, i, j;
printf("\n Enter the number of vertices:");
scanf("%d", &n);
printf("\n Enter the number of edges:");
scanf("%d", &e);
printf("Enter the edges: (u,v)\n");
for (i = 1; i <= e; i++)
{
scanf("%d%d", &u,&v);
p[u][v] = 1;
}
printf("\n Matrix of input data: \n");
for (i = 1; i <= n; i++)
{
for (j = 1; j <= n; j++)
printf("%d\t", p[i][j]);
printf("\n");
}
warshal(p, n);
printf("\n Transitive closure: \n");
for (i = 1; i <= n; i++)
{
for (j = 1; j <= n; j++)
printf("%d\t", p[i][j]);
printf("\n");
}
}
Output:
Transitive closure:
1 1 1 1
1 1 1 1
0 0 0 0
1 1 1 1
Input Graph:
Performance:
Time Complexity: O(V³), where V is the number of vertices in the graph and
we run three nested loops each of size V.
Program 4: Design and implement C/C++ Program to find shortest paths from
a given vertex in a weighted connected graph to other vertices using
Dijkstra's algorithm.
Description:
Dijkstra's algorithm is often considered to be the most straightforward
algorithm for solving the shortest path problem.
Dijkstra's algorithm is used for solving single-source shortest path
problems for directed or undirected paths. Single-source means that one vertex
is chosen to be the start, and the algorithm will find the shortest path from
that vertex to all other vertices.
Dijkstra's algorithm does not work for graphs with negative edges. For
graphs with negative edges, the Bellman-Ford algorithm can be used instead.
To find the shortest path, Dijkstra's algorithm needs to know which vertex is
the source, and it needs a way to mark vertices as visited.
Output Graph:
Program:
/* Implementation of Dijkstra's Algorithm in C */
#include <stdio.h>
#define INF 9999
#define MAX 10
distance[start] = 0;
visited_nodes[start] = 1;
counter = 1;
visited_nodes[next_node] = 1;
for (i = 0; i < size; i++)
if (!visited_nodes[i])
if (minimum_distance + cost[next_node][i] < distance[i])
{
distance[i] = minimum_distance + cost[next_node][i];
previous[i] = next_node;
}
counter++;
}
for (i = 0; i < size; i++)
if (i != start)
{
printf("\nDistance from the Source Node to %d: %d", i, distance[i]);
void main()
{
int Graph[MAX][MAX], i, j, n, source;
printf("Enter the number of nodes:\n");
scanf("%d",&n);
printf("Enter the cost adjacency Matrix:\n");
for(i=0;i<n;i++)
{
for(j=0;j<n;j++)
{
scanf("%d",&Graph[i][j]);
}
}
source = 0;
DijkstraAlgorithm(Graph, n, source);
}
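The DijkstraAlgorithm listing above is incomplete: its header, the cost-matrix preparation, and the selection of the closest unvisited vertex are missing from this copy. The function below is only a sketch of those missing parts, reusing the same names (INF, MAX, distance, visited_nodes, previous, next_node) and assuming that 0 in the input matrix means no edge.

/* Sketch of the missing DijkstraAlgorithm() body (assumes 0 in Graph[][] means no edge) */
void DijkstraAlgorithm(int Graph[MAX][MAX], int size, int start)
{
    int cost[MAX][MAX], distance[MAX], previous[MAX];
    int visited_nodes[MAX], counter, minimum_distance, next_node, i, j;

    /* replace "no edge" entries by INF so they never win a comparison */
    for (i = 0; i < size; i++)
        for (j = 0; j < size; j++)
            cost[i][j] = (Graph[i][j] == 0) ? INF : Graph[i][j];

    for (i = 0; i < size; i++) {      /* initial distances are the direct edges from start */
        distance[i] = cost[start][i];
        previous[i] = start;
        visited_nodes[i] = 0;
    }
    distance[start] = 0;
    visited_nodes[start] = 1;
    counter = 1;

    while (counter < size - 1) {
        minimum_distance = INF;
        next_node = start;
        for (i = 0; i < size; i++)    /* pick the closest vertex not yet visited */
            if (!visited_nodes[i] && distance[i] < minimum_distance) {
                minimum_distance = distance[i];
                next_node = i;
            }
        visited_nodes[next_node] = 1;
        for (i = 0; i < size; i++)    /* relax edges leaving next_node */
            if (!visited_nodes[i] &&
                minimum_distance + cost[next_node][i] < distance[i]) {
                distance[i] = minimum_distance + cost[next_node][i];
                previous[i] = next_node;
            }
        counter++;
    }

    for (i = 0; i < size; i++)
        if (i != start)
            printf("\nDistance from the Source Node to %d: %d", i, distance[i]);
}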
Output:
Enter the number of nodes:
5
Enter the cost adjacency Matrix:
0 3 0 7 0
3 0 4 2 0
0 4 0 5 6
7 2 5 0 4
0 0 6 4 0
Performance Analysis:
The time complexity of Dijkstra's algorithm is typically O(V²) when using a simple
array implementation, or O((V + E) log V) with a priority queue, where V represents the
number of vertices and E represents the number of edges in the graph.
if (top == -1)
break;
out[k] = stack[top--];
for (i=0;i<n;i++)
{
if (a[out[k]][i] == 1)
in[i]--;
}
k++;
}
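The fragment above (pop a vertex from the stack, then decrement the in-degrees of its successors) appears to be the tail of a source-removal topological-ordering program whose beginning is missing from this copy. The listing below is only a self-contained sketch of such a program; the names a, in, out, stack and k mirror the fragment, and everything else is assumed.

/* Sketch: topological ordering of a DAG by repeatedly removing in-degree-0 vertices */
#include <stdio.h>

int main(void)
{
    int a[10][10], in[10], out[10], stack[10];
    int n, i, j, k = 0, top = -1;

    printf("Enter the number of vertices: ");
    scanf("%d", &n);
    printf("Enter the adjacency matrix:\n");
    for (i = 0; i < n; i++) {
        in[i] = 0;
        for (j = 0; j < n; j++)
            scanf("%d", &a[i][j]);
    }

    for (i = 0; i < n; i++)           /* in-degree of i = number of 1s in column i */
        for (j = 0; j < n; j++)
            if (a[j][i] == 1)
                in[i]++;

    while (k < n) {
        for (i = 0; i < n; i++)       /* push every vertex whose in-degree has become 0 */
            if (in[i] == 0) {
                stack[++top] = i;
                in[i] = -1;           /* mark as pushed so it is not pushed twice */
            }
        if (top == -1)                /* no source vertex left: the graph has a cycle */
            break;
        out[k] = stack[top--];
        for (i = 0; i < n; i++)       /* remove the vertex: lower its successors' in-degrees */
            if (a[out[k]][i] == 1)
                in[i]--;
        k++;
    }

    if (k < n)
        printf("The graph contains a cycle; no topological order exists.\n");
    else {
        printf("Topological order: ");
        for (i = 0; i < n; i++)
            printf("%d ", out[i]);
        printf("\n");
    }
    return 0;
}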
Program:
/* Program to solve 0/1 Knapsack problem using Dynamic Programming method.
*/
#include<stdio.h>
int max(int a, int b)
{
if(a>b)
return a;
else
return b;
}
int knapsack(int w[], int p[], int n, int M)
{
if(M==0)
return 0;
if(n==0)
return 0;
if(w[n-1]>M)
return knapsack(w,p,n-1,M);
return max(knapsack(w,p,n-1,M),p[n-1]+knapsack(w,p,n-1,M-w[n-1]));
}
void main()
{
int i,n;
int M; //capacity of knapsack
int w[10]; //weight of items
int p[10]; //value of items
printf("Enter the no. of items:\n");
scanf("%d",&n);
printf("Enter the weight and price of all items:\n");
for(i=0;i<n;i++)
{
scanf("%d%d",&w[i],&p[i]);
}
printf("Enter the capacity of knapsack:\n");
scanf("%d",&M);
printf("Maximum profit = %d\n",knapsack(w,p,n,M));
}
Performance Analysis:
Time Complexity: O(2^n), where n is the number of items, since the recursion tries both including and excluding every item.
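The listing above is the plain recursive solution, which is why its running time is exponential. Since the section heading mentions dynamic programming, a tabulated variant is sketched below for illustration; it reuses the same w[], p[], n and M, while the table K and the bounds n <= 10 and M <= 100 are assumptions of this sketch. Its running time is O(n*M).

/* Sketch: bottom-up 0/1 knapsack, assuming the same w[], p[], n and capacity M (n <= 10, M <= 100) */
int knapsack_dp(int w[], int p[], int n, int M)
{
    int K[11][101];                   /* K[i][c] = best value using the first i items with capacity c */
    int i, c;

    for (i = 0; i <= n; i++)
        for (c = 0; c <= M; c++) {
            if (i == 0 || c == 0)
                K[i][c] = 0;          /* no items or no capacity -> value 0 */
            else if (w[i-1] > c)
                K[i][c] = K[i-1][c];  /* item i does not fit */
            else {
                int with = p[i-1] + K[i-1][c - w[i-1]];
                int without = K[i-1][c];
                K[i][c] = (with > without) ? with : without;
            }
        }
    return K[n][M];
}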
Example:
Input: arr[] = {{60, 10}, {100, 20}, {120, 30}}, W = 50
Output: 240
Explanation: Take the items of weight 10 kg and 20 kg completely and a 2/3 fraction of the 30 kg item.
Hence the total value will be 60 + 100 + (2/3)(120) = 240.
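The greedy program that produced the output below is not reproduced in this copy. The listing that follows is only a sketch of the usual greedy method (sort the items by value/weight ratio, take whole items while they fit, then a fraction of the next one); the names item, cmp_ratio and fractional_knapsack are assumptions of this sketch, and its messages do not match the recorded output word for word.

/* Sketch: greedy fractional knapsack (sort by value/weight ratio, then take greedily) */
#include <stdio.h>
#include <stdlib.h>

struct item { int value, weight; };

static int cmp_ratio(const void *a, const void *b)
{
    const struct item *x = a, *y = b;
    /* descending order by value/weight, compared with cross products to avoid floating point */
    return y->value * x->weight - x->value * y->weight;
}

double fractional_knapsack(struct item it[], int n, int capacity)
{
    double total = 0.0;
    int i;

    qsort(it, n, sizeof it[0], cmp_ratio);
    for (i = 0; i < n && capacity > 0; i++) {
        if (it[i].weight <= capacity) {          /* the whole item fits */
            total += it[i].value;
            capacity -= it[i].weight;
        } else {                                 /* take only the fraction that fits */
            total += it[i].value * ((double)capacity / it[i].weight);
            capacity = 0;
        }
    }
    return total;
}

int main(void)
{
    struct item it[] = { {60, 10}, {100, 20}, {120, 30} };   /* example from the text */
    printf("Maximum value = %.2f\n", fractional_knapsack(it, 3, 50));   /* prints 240.00 */
    return 0;
}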
Output:
Enter the no. of items:
5
Enter the weight and price of all items:
10 3
15 3
10 2
12 5
8 1
Enter the capacity of knapsack:
10
Added object 5 (8, 1) completely in the bag. Space left: 9.
Added object 2 (15, 3) completely in the bag. Space left: 6.
Added object 3 (10, 2) completely in the bag. Space left: 4.
Added object 1 (10, 3) completely in the bag. Space left: 1.
Added 19% (12, 5) of object 4 in the bag.
Filled the bag with objects worth 45.40.
Performance Analysis:
Time Complexity: the greedy method above runs in O(N log N), dominated by sorting the items by value/weight ratio; the dynamic-programming formulation of the 0/1 knapsack instead takes O(N * W), where N is the number of items and W is the knapsack capacity.
Program:
/* Program to find a subset of a given set S = {s1, s2, ..., sn} of n positive integers whose sum is equal to a given positive integer d */
#include<stdio.h>
int s[10],set[10],n,d,count=0,flag=0;
int subset(int sum,int i);
void display(int count);
void main()
{
int i;
printf("Enter the number of elements in set\n");
scanf("%d",&n);
printf("Enter the set values\n");
for(i=0;i<n;++i)
scanf("%d",&s[i]);
printf("Enter the sum\n");
scanf("%d",&d);
printf("The program output is\n");
subset(0,0);
if(flag==0)
printf("There is no solution");
}
int subset(int sum,int i)
{
if(sum==d)
{
flag=1;
display(count);
return (0);
}
if(sum>d||i>=n)
return 0;
else
{
set[count]=s[i];
count++;
subset(sum+s[i],i+1);
count--;
subset(sum,i+1);
}
return 0;
}
void display(int count)
{
int i;
printf("{");
for(i=0;i<count;i++)
printf("%d ",set[i]);
printf("}");
}
Output 1:
Enter the number of elements in set
5
Enter the set values
1 2 5 6 8
Enter the sum
9
The program output is
{1 2 6}{1 8}
Output 2:
Enter the number of elements in set
5
Enter the set values
1 2 5 6 8
Enter the sum
4
The program output is
There is no solution
Output 3:
Enter the number of elements in set
5
Enter the set values
1 2 5 6 8
Enter the sum
7
The program output is
{1 6}{2 5}
Performance Analysis:
Time Complexity: O(2^n). The above solution may try all subsets of the given set in the
worst case; therefore, its time complexity is exponential. The problem is in fact
NP-complete (there is no known polynomial-time solution for this problem).
Description:
Selection sort is a simple comparison-based sorting algorithm that works by
repeatedly selecting the smallest (or largest) element from the unsorted portion of
the list and moving it to the sorted portion of the list.
Program:
#include <stdio.h>
#include <stdlib.h>
#include<time.h>
void swap(long int*a,long int*b)
{
long int tmp=*a;
*a=*b;
*b=tmp;
}
void selectionsort (long int arr[],long int n)
{
long int i,j,midx;
for(i=0;i<n-1;i++)
{
midx=i;
for(j=i+1;j<n;j++)
if(arr[j]<arr[midx])
midx=j;
swap(&arr[midx],&arr[i]);
}
}
void main()
{
long int n=1000;
int it=0;
double tim1[10];
printf("Input Size, Selection Sorting time \n");
while(it++<5)
{
long int a[n];
for(int i=0;i<n;i++)
{
long int no=rand()%n+1;
a[i]=no;
}
//using clock t to store time
clock_t start,end;
start=clock();
selectionsort(a,n);
end=clock();
tim1[it]=(double)(end-start)/1000;
printf(" %ld = %ld ms\n",n,(long int)tim1[it]);
}
}
Output:
Performance Analysis:
Best-case: O(n²). Even when the array is already sorted, selection sort still performs the
same n(n-1)/2 comparisons (where n is the number of integers in the array).
Average-case: O(n²), when the elements of the array are in a disordered or random
order, without a clear ascending or descending pattern.
Worst-case: O(n²), for example when the array must be sorted in ascending order but
is initially in descending order.
Program 10: Design and implement C/C++ Program to sort a given set of n
integer elements using Quick Sort method and compute its time complexity.
Run the program for varied values of n > 5000 and record the time taken to
sort. Plot a graph of the time taken versus n. The elements can be read from
a file or can be generated using the random number generator.
Description:
QuickSort is a sorting algorithm based on the Divide and Conquer algorithm that
picks an element as a pivot and partitions the given array around the picked pivot
by placing the pivot in its correct position in the sorted array.
Partition is done recursively on each side of the pivot after the pivot is placed
in its correct position and this finally sorts the array.
Program:
/* Program to arrange the elements in increasing order */
#include <stdio.h>
#include <stdlib.h>
#include<time.h>
for(i=0;i<n;i++)
{
no=rand()%n+1;
a[i]=no;
}
start=clock();
qs(a,0,n-1);
end=clock();
tm = (end - start);
printf(" %d = %lf\n Nano Seconds",n,tm);
}
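Only the timing part of main() survives in the listing above; the qs() routine and the surrounding declarations are missing from this copy. The listing below is a sketch of those missing pieces using a Lomuto-style partition and the same call qs(a, 0, n-1); it is an illustration, not the original program.

/* Sketch of the missing pieces: Lomuto partition + recursive qs(), plus a main() skeleton */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

void qs(int a[], int low, int high)
{
    if (low < high) {
        int pivot = a[high];             /* last element taken as the pivot */
        int i = low - 1, j, tmp;
        for (j = low; j < high; j++)
            if (a[j] <= pivot) {         /* move smaller elements to the left side */
                tmp = a[++i]; a[i] = a[j]; a[j] = tmp;
            }
        tmp = a[i + 1]; a[i + 1] = a[high]; a[high] = tmp;   /* place the pivot */
        qs(a, low, i);                   /* sort both sides around the pivot */
        qs(a, i + 2, high);
    }
}

int main(void)
{
    int n, i, no;
    clock_t start, end;
    double tm;

    printf("Enter the number of elements:\n");
    scanf("%d", &n);
    int a[n];
    for (i = 0; i < n; i++) {            /* fill the array with random values */
        no = rand() % n + 1;
        a[i] = no;
    }
    start = clock();
    qs(a, 0, n - 1);
    end = clock();
    tm = (double)(end - start);
    printf(" %d = %lf clock ticks\n", n, tm);
    return 0;
}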
Output:
Enter the number of elements:
1000
1000 = 128.000000
Nano Seconds
Time Complexity: O(n log n) in the best and average cases; O(n²) in the worst case, which occurs when the chosen pivots split the array very unevenly (for example, an already sorted array with the last element as pivot).
Program 11: Design and implement C/C++ Program to sort a given set of n
integer elements using Merge Sort method and compute its time complexity.
Run the program for varied values of n > 5000, and record the time taken to
sort. Plot a graph of the time taken versus n. The elements can be read from
a file or can be generated using the random number generator.
Description:
Like QuickSort, Merge Sort is a Divide and Conquer algorithm. It divides the
input array into two halves, calls itself for the two halves, and then it merges the
two sorted halves. The merge() function is used for merging two halves. The
merge(arr, l, m, r) is a key process that assumes that arr[l..m] and arr[m+1..r] are
sorted and merges the two sorted sub-arrays into one.
Algorithm:
Step 1: Start
Step 2: Declare an array and left, right, mid variable
Step 3: Perform merge function.
mergesort(array, left, right)
if left >= right
return
mid = (left + right) / 2
mergesort(array, left, mid)
mergesort(array, mid+1, right)
merge(array, left, mid, right)
Step 4: Stop
Program:
/* Program to implement Merge Sort */
#include<stdio.h>
#include<time.h>
#include <stdlib.h>
#define max 5000
int array[max];
void merge(int low, int mid, int high)
{
int temp[max];
int i = low;
int j = mid +1;
int k = low ;
while((i <= mid) && (j <=high))
{
if(array[i] <= array[j])
temp[k++] = array[i++] ;
else
temp[k++] = array[j++] ;
}
while( i <= mid )
temp[k++]=array[i++];
while( j <= high )
temp[k++]=array[j++];
for(i= low; i <= high ; i++)
array[i]=temp[i];
}
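Only merge() survives in this copy of the listing; the recursive driver and main() are missing. The sketch below fills that gap under the assumption that the driver is the usual divide-and-conquer mergesort(low, high) operating on the global array[] declared above; the timing code mirrors the style of the earlier programs.

/* Sketch of the missing recursive driver and main(), reusing the global array[] */
void mergesort(int low, int high)
{
    if (low < high) {
        int mid = (low + high) / 2;      /* split, sort both halves, then merge them */
        mergesort(low, mid);
        mergesort(mid + 1, high);
        merge(low, mid, high);
    }
}

int main(void)
{
    int i, n;
    clock_t start, end;

    printf("Enter the number of elements : ");
    scanf("%d", &n);
    for (i = 0; i < n; i++)              /* random input, as in the other programs */
        array[i] = rand() % n + 1;

    printf("Unsorted list is :\n");
    for (i = 0; i < n; i++)
        printf("%d ", array[i]);

    start = clock();
    mergesort(0, n - 1);
    end = clock();

    printf("\nSorted list is :\n");
    for (i = 0; i < n; i++)
        printf("%d ", array[i]);
    printf("\n %d = %lf clock ticks\n", n, (double)(end - start));
    return 0;
}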
Output:
Enter the number of elements : 20
Unsorted list is :
4 7 18 16 14 16 7 13 10 2 3 8 11 20 4 7 1 7 13 17
Sorted list is :
1 2 3 4 4 7 7 7 7 8 10 11 13 13 14 16 16 17 18 20
20 = 26.000000 Nano Seconds
Performance Analysis:
Time Complexity: O(n log n)
Merge Sort is a recursive algorithm and its time complexity can be expressed by the
following recurrence relation.
T(n) = 2T(n/2) + θ(n)
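Unrolling the recurrence makes this bound concrete: each level of recursion does a total of c*n work to merge, and halving the subproblem size gives log2 n levels, so
T(n) = 2T(n/2) + cn = 4T(n/4) + 2cn = 8T(n/8) + 3cn = ... = 2^k T(n/2^k) + k*cn.
After k = log2 n levels the subproblems reach size 1, so T(n) = n*T(1) + cn*log2 n, which is O(n log n).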
Program 12: Design and implement C/C++ Program for N Queen's problem
using Backtracking.
Description:
The N Queen is the problem of placing N chess queens on an N×N chessboard so
that no two queens attack each other.
For example, the following is a solution for the 4 Queen problem.
Program:
/* Program for N Queen's problem using Backtracking */
#include<stdio.h>
#include<math.h>
#include<stdlib.h> /* for abs() */
int board[20],count;
void main()
{
int n,i,j;
void queen(int row,int n);
printf(" - N Queens Problem Using Backtracking -");
printf("\n Enter number of Queens:");
scanf("%d",&n);
queen(1,n);
}
void print(int n)
{
int i,j;
printf("\n\nSolution %d:\n\n",++count);
for(i=1;i<=n;++i)
printf("\t%d",i);
for(i=1;i<=n;++i)
{
printf("\n\n%d",i);
for(j=1;j<=n;++j)
{
if(board[i]==j)
printf("\tQ");
else
printf("\t*");
}
}
}
int place(int row,int column)
{
int i;
for(i=1;i<=row-1;++i)
{
if(board[i]==column)
return 0;
else
if(abs(board[i]-column)==abs(i-row))
return 0;
}
return 1;
}
void queen(int row,int n)
{
int column;
for(column=1;column<=n;++column)
{
if(place(row,column))
{
board[row]=column;
if(row==n)
print(n);
else
queen(row+1,n);
}
}
}
Output:
Enter number of Queens: 5
Solution 1:
1 2 3 4 5
1 Q * * * *
2 * * Q * *
3 * * * * Q
4 * Q * * *
5 * * * Q *
Solution 2:
1 2 3 4 5
1 Q * * * *
2 * * * Q *
3 * Q * * *
4 * * * * Q
5 * * Q * *
Solution 3:
1 2 3 4 5
1 * Q * * *
2 * * * Q *
3 Q * * * *
4 * * Q * *
5 * * * * Q
Solution 4:
1 2 3 4 5
1 * Q * * *
2 * * * * Q
3 * * Q * *
4 Q * * * *
5 * * * Q *
Performance Analysis:
Time Complexity: O(N!)
Viva Questions
1. What is an algorithm?
Answer: An algorithm is a step-by-step procedure for solving a problem or accomplishing a
task.
10. What is the difference between the best-case and worst-case time complexity of an
algorithm?
Answer: The best-case time complexity is the minimum amount of time an algorithm can take
to run, while the worst-case time complexity is the maximum amount of time an algorithm can
take to run.
11. What is the difference between the average-case and worst-case time complexity of
an algorithm?
Answer: The worst-case time complexity is the maximum amount of time an algorithm can
take to run, while the average-case time complexity is the expected amount of time an
algorithm will take to run.
12. What is the difference between time complexity and space complexity?
Answer: Time complexity measures the amount of time an algorithm takes to run, while space
complexity measures the amount of memory an algorithm requires.
17. What is the difference between stable and unstable sorting algorithms?
Answer: Stable sorting algorithms preserve the relative order of equal elements in the input,
while unstable sorting algorithms may not.
19. What is the difference between in-place and out-of-place sorting algorithms?
Answer: In-place sorting algorithms sort the input array in place without using additional
memory, while out-of-place sorting algorithms require additional memory to store the sorted
output.
21. What is the difference between breadth-first search and depth-first search
algorithms?
Answer: Breadth-first search explores the nodes in the graph in a breadth-first order, while
depth-first search explores the nodes in a depth-first order.
22. What is the difference between a graph and a tree data structure?
Answer: A tree is a special case of a graph, where there are no cycles, and every pair of nodes
is connected by a unique path.
24. What is the difference between a directed graph and an undirected graph?
Answer: A directed graph has directed edges, where each edge points from one vertex to
another, while an undirected graph has undirected edges, where each edge connects two
vertices without any direction.
25. What is the difference between a complete graph and a sparse graph?
Answer: A complete graph has all possible edges between every pair of vertices, while a sparse
graph has relatively fewer edges.
26. What is the difference between a greedy algorithm and a dynamic programming
algorithm?
Answer: A greedy algorithm makes locally optimal choices at each step, while a dynamic
programming algorithm solves subproblems and reuses their solutions to solve the main
problem.
33. What is the difference between top-down and bottom-up dynamic programming?
Answer: Top-down dynamic programming involves solving a problem by breaking it down into
subproblems, while bottom-up dynamic programming involves solving the subproblems first
and combining them to solve the original problem.
36. What is the complexity of the brute-force solution to the traveling salesman
problem?
Answer: The brute-force solution to the traveling salesman problem has a time complexity of
O(n!), where n is the number of cities.
47. What is the time complexity of Kruskal’s algorithm and Prim’s algorithm?
Answer: Kruskal's algorithm runs in O(E log E) time (dominated by sorting the edges), which is the
same as O(E log V). Prim's algorithm runs in O(V²) with an adjacency matrix, or O(E log V) with a
binary-heap priority queue, where V is the number of vertices and E is the number of edges.
50. What is the difference between breadth-first search and topological sort?
Answer: Breadth-first search is a graph traversal algorithm that visits all the vertices in the
graph at a given distance (level) from a starting vertex before moving on to vertices at a greater
distance. Topological sort, on the other hand, is a way of ordering the vertices of a directed
acyclic graph such that for every directed edge u->v, u comes before v in the ordering.
vertex with the shortest path and updating the shortest paths to its neighbors.
55. What is the difference between dynamic programming and greedy algorithms?
Answer: Both dynamic programming and greedy algorithms are techniques for solving
optimization problems, but they differ in their approach. Dynamic programming involves
breaking a problem down into smaller subproblems and solving each subproblem only once,
storing the solution in a table to avoid re-computation. Greedy algorithms, on the other hand,
make the locally optimal choice at each step, without considering the global optimal solution.
Dynamic programming is generally more computationally expensive than greedy algorithms,
but can handle more complex optimization problems.
sorted. Bubble sort works by repeatedly swapping adjacent elements that are out of order, so it
needs to make O(n^2) comparisons and swaps in the worst case.
65. What is the difference between the Dynamic programming and Greedy method?
Answer:
Approach: Dynamic Programming is bottom-up (start from subproblems and build up to solve the
main problem); the Greedy Method is top-down (start from the main problem and make locally
optimal choices).
Subproblem reuse: In Dynamic Programming, subproblems are solved only once and their solutions
are stored in a table; in the Greedy Method, subproblems are not revisited and the locally optimal
choice is made at each step.
Time complexity: Dynamic Programming can have a higher time complexity than the Greedy Method;
the Greedy Method can have a lower time complexity than Dynamic Programming.
Suitable problems: Dynamic Programming suits problems that exhibit optimal substructure and
overlapping subproblems; the Greedy Method suits problems where making locally optimal choices
leads to a global optimal solution.
70. What is the difference between Time Efficiency and Space Efficiency?
Answer: Time Efficiency refers to the measure of the number of times the critical algorithm
functions are executed, while Space Efficiency calculates the number of additional memory
units utilized by the algorithm.
71. Can you provide an overview of how Merge sort works, and can you give an example
of its implementation?
Answer: Merge sort is a sorting algorithm that involves dividing the original list into two
smaller sub-lists until only one item is left in each sub-list. These sub-lists are then sorted,
and the sorted sub-lists are merged to form a sorted parent list. This process is repeated
recursively until the original list is completely sorted.
For example, suppose we have an unsorted list of numbers: [5, 2, 8, 4, 7, 1, 3, 6]. The Merge
sort algorithm will first divide the list into two sub-lists: [5, 2, 8, 4] and [7, 1, 3, 6]. Each sub-
list will then be recursively divided until only one item is left in each sub-list: [5], [2], [8], [4],
[7], [1], [3], [6]. These single-item sub-lists are then sorted and merged pairwise to form new
sub-lists: [2, 5], [4, 8], [1, 7], [3, 6]. The process continues recursively until the final sorted list
is obtained: [1, 2, 3, 4, 5, 6, 7, 8].