Ques 1.
Determine ways to find the complexity of an algorithm? What is the relation between the
time and space complexities of an algorithm? Justify your answer with an example.
Answer:
Theta Notation (Θ-notation) - a tight bound, commonly used for the average case
Omega Notation (Ω-notation) - a lower bound, commonly used for the best case
Big-O Notation (O-notation) - an upper bound, commonly used for the worst case
Common growth rates, from fastest to slowest:
Constant O(1)
Logarithmic O(log n)
Linear O(n)
Linearithmic O(n log n)
Quadratic O(n^2)
Time and space complexity are measured independently: each describes how much time or space the algorithm uses as a function of the input size. They are not entirely unrelated, though: an algorithm cannot use more memory than it has time to write, so the space complexity never exceeds the time complexity.
For example, when the algorithm has a space complexity of:
O(1) - constant - the algorithm uses a fixed (small) amount of space that does not depend
on the input. For every input size the algorithm takes the same (constant) amount
of space; a function that only prints a message is one example.
O(n), O(n^2), O(log n)... - these indicate that the algorithm creates additional objects based on the
length of its input. For example, creating a copy of each element of a list v, storing the copies in an array,
and printing them afterwards takes O(n) space, because n additional objects are created.
In contrast, the time complexity describes how much time the algorithm consumes based
on the length of the input. Again:
O(1) - no matter how big the input is, it always takes constant time - for example, a single
instruction:
function(list l) {
print("i got a list");
}
O(n), O(n^2), O(log n) - again, the running time grows with the length of the input. For example:
function(list l) {
for (node in l) {
print(node);
}
}
Ques 2.
Write a C/C++ program for linked list implementation of the list.
Answer:
#include <iostream>
using namespace std;
class Node
{
public:
int data;
Node *next;
// Default constructor
Node()
{
data = 0;
next = NULL;
}
// Parameterised Constructor
Node(int data)
{
this->data = data;
this->next = NULL;
}
};
// Linked list class to implement a linked list.
class Linkedlist
{
Node *head;
public:
// Default constructor
Linkedlist() { head = NULL; }
// Function to insert a node at the end of the linked list.
void insertNode(int);
// Function to print the linked list.
void printList();
// Function to delete the node at given position
void deleteNode(int);
};
// Function to delete the node at given position
void Linkedlist::deleteNode(int nodeOffset)
{
Node *temp1 = head, *temp2 = NULL;
int ListLen = 0;
if (head == NULL)
{
cout << "List empty." << endl;
return;
}
// Find length of the linked-list.
while (temp1 != NULL)
{
temp1 = temp1->next;
ListLen++;
}
// Check that the position to be deleted does not exceed the length of the linked list.
if (ListLen < nodeOffset)
{
cout << "Index out of range" << endl;
return;
}
// Declare temp1
temp1 = head;
// Deleting the head.
if (nodeOffset == 1)
{
// Update head
head = head->next;
delete temp1;
return;
}
// Traverse the list to find the node to be deleted.
while (nodeOffset-- > 1)
{
temp2 = temp1; // Update temp2
temp1 = temp1->next; // Update temp1
}
// Change the next pointer of the previous node.
temp2->next = temp1->next;
delete temp1; // Delete the node
}
// Function to insert a new node.
void Linkedlist::insertNode(int data)
{
// Create the new Node.
Node *newNode = new Node(data);
// Assign to head
if (head == NULL)
{
head = newNode;
return;
}
// Traverse till end of list
Node *temp = head;
while (temp->next != NULL)
{
temp = temp->next; // Update temp
}
temp->next = newNode; // Insert at the last.
}
// Function to print the nodes of the linked list.
void Linkedlist::printList()
{
Node *temp = head;
if (head == NULL)
{
// Check for empty list.
cout << "List empty" << endl;
return;
}
// Traverse the list.
while (temp != NULL)
{
cout << temp->data << " ";
temp = temp->next;
}
}
// Driver Code
int main()
{
Linkedlist list;
// Inserting nodes
list.insertNode(1);
list.insertNode(2);
list.insertNode(3);
list.insertNode(4);
cout << "Elements of the list are: ";
// Print the list
list.printList();
cout << endl;
// Delete node at position 2.
list.deleteNode(2);
cout << "Elements of the list are: ";
list.printList();
cout << endl;
return 0;
}
Ques 3.
List the area of applications of Data Structure.
Answer:
• A data structure is a group of data elements collected together under a single
name and defined by a specific storage and organisation scheme inside
computer memory.
• It can also be described as a mathematical model for organising data in computer memory,
together with the methods to process it.
• Data structures are mainly of two types:
• Linear data structure: the data elements are organised in a sequence.
• Non-linear data structure: the data elements are ordered in an arbitrary manner, not in
a sequence.
• Some common data structures are
• Linked list
• Arrays
• Stacks
• Queues
• Binary trees
• Hash tables
Areas of Application
• Data structures are used in any program or software.
• They are used in the areas of
• Compiler Design
• Operating System
• DBMS
• Graphics
• Simulation
• Numerical Analysis
• Artificial Intelligence
Ques 4.
Discuss the best case, worst case, the average case, and amortized time complexity of an
algorithm
Answer:
• Worst case Running Time: The behaviour of the algorithm with respect to the worst
possible case of the input instance. The worst-case running time of an algorithm is an
upper bound on the running time for any input. Knowing it gives us a guarantee that
the algorithm will never take any longer. There is no need to make an educated
guess about the running time.
• Average case Running Time: The expected behaviour when the input is randomly
drawn from a given distribution. The average-case running time of an algorithm is an
estimate of the running time for an "average" input. Computation of average-case
running time entails knowing all possible input sequences, the probability
distribution of occurrence of these sequences, and the running times for the
individual sequences. Often it is assumed that all inputs of a given size are equally
likely.
• Amortized Running Time: The time required to perform a sequence of (related)
operations is averaged over all the operations performed. Amortized analysis can be
used to show that the average cost of an operation is small, if one averages over a
sequence of operations, even though a simple operation might be expensive.
Amortized analysis guarantees the average performance of each operation in the
worst case.
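A classic illustration of amortized analysis (a sketch, not taken from the examples below) is a dynamic array that doubles its capacity when full: a single push may cost O(n) to copy the elements, but n pushes cost O(n) in total, so the amortized cost per push is O(1).

```cpp
#include <cassert>
#include <cstring>

// Minimal dynamic array: push_back is O(1) amortized because the
// occasional O(n) reallocation is paid for by the many cheap pushes
// that happen between reallocations.
class DynArray
{
    int *data;
    int size, capacity;
public:
    DynArray() : data(new int[1]), size(0), capacity(1) {}
    ~DynArray() { delete[] data; }
    void push_back(int x)
    {
        if (size == capacity)      // expensive step, happens rarely
        {
            capacity *= 2;
            int *bigger = new int[capacity];
            memcpy(bigger, data, size * sizeof(int));
            delete[] data;
            data = bigger;
        }
        data[size++] = x;          // cheap step, happens on every call
    }
    int at(int i) const { return data[i]; }
    int length() const { return size; }
};
```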
1.
For example, consider the problem of finding the minimum element in a list of elements.
Worst case = O(n)
Average case = O(n)
2.
Quick sort
Worst case = O(n^2)
Average case = O(n log n)
3.
Merge Sort, Heap Sort
Worst case = O(n log n)
Average case = O(n log n)
4.
Bubble sort
Worst case = O(n^2)
Average case = O(n 2)
5.
Binary Search Tree: search for an element
Worst case = O(n)
Average case = O(log n)
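As a sketch of the first example above: finding the minimum must inspect every element, so the worst case and the average case are both O(n).

```cpp
#include <cassert>
#include <vector>

// Every element is examined exactly once, so the running time is O(n)
// no matter where the minimum happens to be in the list.
int findMin(const std::vector<int> &v)
{
    int min = v[0];
    for (int x : v)
        if (x < min)
            min = x;
    return min;
}
```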
Ques 5.
Calculate the address of a random element present in a 2D array, given the base address as
BA.
Answer:
Row-Major Order: If the array is declared as a[m][n], where m is the number of rows and n is
the number of columns, then the address of an element a[i][j] stored in row-major
order is calculated as
Address(a[i][j]) = BA + (i * n + j) * size
Column-Major Order: If the array is declared as a[m][n], where m is the number of rows and n
is the number of columns, then the address of an element a[i][j] stored in column-major
order is calculated as
Address(a[i][j]) = BA + (j * m + i) * size
Ques 6.