Data Structures Using C by Vishal Kushwaha
1. Sorting Algorithms-Non-Recursive.
As we know, the merge sort algorithm is an efficient sorting algorithm that enables
us to sort an array in O(n log n) time complexity, where n is the
number of values.
Usually, we find the recursive approach more widespread. Thus, let's quickly
remember the steps of the recursive algorithm so that it's easier for us to
understand the iterative one later. The recursive version is based on the divide and
conquer strategy:
Divide: In this step, we divide the input into two halves, the pivot being
the midpoint of the array. This step is carried out recursively for all the half
arrays until there are no more halves to divide.
Conquer: In this step, we sort and merge the divided parts from bottom
to top and get the complete sorted result.
As we showed in the recursive version, we divided the input into two halves. This
process continued until we got each element of the array alone. Then, we merged
the sorted parts from bottom to top until we got the complete result containing all
the values sorted.
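The recursive version described above can be sketched in C. This is a minimal illustrative sketch; the function names (merge_sort_rec, merge_halves) and the use of a temporary buffer are our own choices, not from the text:

```c
#include <stdlib.h>
#include <string.h>

/* Merge two sorted halves a[lo..mid-1] and a[mid..hi-1] into a[lo..hi-1]. */
static void merge_halves(int a[], int lo, int mid, int hi) {
    int n = hi - lo;
    int *tmp = malloc(n * sizeof(int));
    int i = lo, j = mid, k = 0;
    while (i < mid && j < hi)
        tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    while (i < mid) tmp[k++] = a[i++];
    while (j < hi)  tmp[k++] = a[j++];
    memcpy(a + lo, tmp, n * sizeof(int));
    free(tmp);
}

/* Divide: split [lo, hi) at the midpoint; Conquer: merge the sorted halves. */
void merge_sort_rec(int a[], int lo, int hi) {
    if (hi - lo <= 1)   /* 0 or 1 elements: nothing left to divide */
        return;
    int mid = lo + (hi - lo) / 2;
    merge_sort_rec(a, lo, mid);
    merge_sort_rec(a, mid, hi);
    merge_halves(a, lo, mid, hi);
}
```

The divide step is the two recursive calls; the conquer step is the merge performed on the way back up.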
As usual, when we’re trying to think about moving from a recursive version to an
iterative one, we have to think in the opposite way of the recursion. Let’s list a few
thoughts that will help us implement the iterative approach:
1. Consider each element of the array as a sorted part. As a start, this part
contains a single value.
2. In the second step, merge every two adjacent elements, such that we get
sorted parts. In the beginning, each part has two values. However, the last
part may contain less than two values if the number of parts was odd.
3. Keep performing steps 1 and 2 until the size of the part reaches the entire
array. By then, we can say that the result is sorted.
Now, we can jump into the implementation. However, to simplify the algorithm,
we’ll first implement a function responsible for merging two adjacent parts. After
that, we’ll see how to implement the complete algorithm based on the merge
function.
Let’s implement a simple function that merges two sorted parts and returns the
merged sorted array, which contains all elements in the first and second parts. Take
a look at the implementation:
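A merge of two adjacent sorted parts might look like this in C (a hedged sketch; the name merge, the half-open index convention, and the caller-supplied result buffer are our own):

```c
/* Merge the two sorted ranges a[lo..mid-1] and a[mid..hi-1]
   into the caller-supplied buffer result[0..hi-lo-1]. */
void merge(const int a[], int lo, int mid, int hi, int result[]) {
    int i = lo, j = mid, k = 0;
    /* First loop: iterate over the two parts together,
       taking the smaller front value each time. */
    while (i < mid && j < hi)
        result[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
    /* Second loop: remaining elements of the first part. */
    while (i < mid)
        result[k++] = a[i++];
    /* Third loop: remaining elements of the second part. */
    while (j < hi)
        result[k++] = a[j++];
}
```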
In the merging function, we use three loops. The first one is to iterate over
the two parts together. In each step, we take the smaller front value from both parts
and store it inside the array that will hold the final answer.
Once we add a value to the result, we move one step forward: an index
variable points to the position that should hold the next value added
to the result.
In the second loop, we iterate over the remaining elements from the first
part and store each value inside the result. In the third loop, we perform a similar
operation to the second loop. However, here we iterate over the remaining
elements from the second part.
The second and third loops are needed because, after the first loop ends, we
might have remaining elements in one of the parts. Since all of these values are
larger than the ones already added, we should append them to the resulting answer.
Now let's use the merge function to implement the iterative merge sort approach.
Take a look at the implementation:
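A bottom-up sketch of the algorithm in C, built on the merging idea just described (the names merge_sort_iter, width, and the inlined merge are our own, not from the text):

```c
#include <stdlib.h>
#include <string.h>

/* Bottom-up merge sort: parts of width 1, 2, 4, ... are merged pairwise. */
void merge_sort_iter(int a[], int n) {
    int *tmp = malloc(n * sizeof(int));
    for (int width = 1; width < n; width *= 2) {
        for (int lo = 0; lo < n; lo += 2 * width) {
            /* Clamp mid and hi so the last part never exceeds n. */
            int mid = lo + width < n ? lo + width : n;
            int hi  = lo + 2 * width < n ? lo + 2 * width : n;
            int i = lo, j = mid, k = 0;
            while (i < mid && j < hi)
                tmp[k++] = (a[i] <= a[j]) ? a[i++] : a[j++];
            while (i < mid) tmp[k++] = a[i++];
            while (j < hi)  tmp[k++] = a[j++];
            /* Copy the merged part back into its place in a. */
            memcpy(a + lo, tmp, (hi - lo) * sizeof(int));
        }
    }
    free(tmp);
}
```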
Firstly, we start with a part size of one, which indicates the size of each part the
algorithm handles at this step.
In each step, we iterate over all parts of the current size and calculate the beginning
and end of each pair of adjacent parts. Once we have determined both parts, we merge
them using the function defined in algorithm 1.
Note that we handle two special cases. The first one is when the midpoint falls outside
the array, while the second one is when the end index does. The reason for
these cases is that the last part may contain fewer elements than the current part
size. Therefore, we adjust its boundaries so that they don't exceed the length of the
array.
After the merging ends, we copy the elements from the temporary array back into
their respective places in the original array.
Note that in each step, we double the length of a single part. The reason is
that we merged two parts of that length. So, for the next step, we know that all parts
of the doubled size are now sorted.
Finally, we return the sorted array.
4. Example
Let’s take an example to understand the iterative version more clearly. Suppose we
have an array as follows:
In the first step, we'll be merging the first element with the second one. As a result, we'll
get a new sorted part which contains the first two values. Since the second value is smaller
than the first, we'll swap both of them.
Similarly, we perform this operation for the third and fourth elements. However, in this
case, they're already sorted, and we keep their order. We continue this operation
for the fifth and sixth values as well.
After these steps, the array will have its parts of size 2 sorted as follows:
In the second step, the part size becomes 2. Hence, we have three parts. Each of these
parts contains two elements. Similarly to the previous steps, we have to merge the
first and the second parts. Thus, we get a new part that contains the first four
values. For the third part, we just skip it because it was sorted in the previous step,
and we don't have a fourth part to merge it with.
Take a look at the result after these steps:
In the last step, the part size becomes 4, and we have two parts. The first one contains
four elements, whereas the second one contains two. We call the merging function
for these parts.
After the merging ends, we get the final sorted answer:
5. Comparison
As usual, recursion reduces the size of the code, and it's easier to think about and
implement. On the other hand, it takes more memory because it uses
the call stack, which also adds execution overhead.
For these reasons, we generally prefer the iterative algorithm for its speed and memory
savings. However, if we don't care much about execution time and memory, as when we
have a small array, for example, we can use the recursive version.
6. Conclusion
In this tutorial, we explained how to implement the merge sort algorithm using an
iterative approach. Firstly, we discussed the merge sort algorithm using the
recursive version of the algorithm.
Then, we discussed the iterative version of this algorithm. Also, we explained the
reason behind its complexity.
After that, we provided a simple example to clarify the idea well.
2. Sorting Algorithms-Recursive.
I’m going to present pretty much all of the sorting algorithms recursively, so we
should probably talk about recursion. Recursion is a really mind-expanding
technique, once you get the hang of it. It’s also the foundation for what could be
called the “mathematical interpretation” of computer programming, so if you’re a
CSci major, you’ll have to get comfortable with it sooner or later. So let’s look at
some simple algorithms, both iteratively (using loops) and recursively.

int factorial(int n) {
    int r = 1; // Factorial of 0 is 1
    for(int i = 1; i <= n; i++)
        r *= i;
    return r;
}
To write this, or any other algorithm, recursively, we have to ask two questions:
What is the smallest case, the case where I can give the answer right away? This
is called the “base case”. (Sometimes there might be more than one smallest
case, and that’s OK.)
For anything that is not the smallest case, how do I break it down to make it
smaller? This is called the recursive case.
For the factorial, the base case is what happens when n = 0: the loop doesn't run
at all, and 1 is returned. So we can start our recursive version with
int factorial_rec(int n) {
if(n == 0)
return 1;
else
...
To construct the recursive case, we need to look at what happens when n > 0. In
particular, how can we break n! down into some n′!, with n′ < n? The most
common case is n′ = n − 1.
One way to look at this is to assume that we already have the value of (n−1)!,
and we want to get n! from it. That is, assume that factorial_rec(n - 1) will work
and give us the right answer; we just need to construct the factorial of n from it.
How can we do this? n! = n⋅(n−1)!. So we write our recursive case like this:
int fact(int n) {
    if(n == 0)
        return 1;
    else
        return n * fact(n - 1);
}

To see how this works, let's trace fact(3):

fact(3) = 3 * fact(2)
fact(2) = 2 * fact(1)
fact(1) = 1 * fact(0)
fact(0) = 1

and at this point we can work our way back up, giving the result 3 * 2 * 1 * 1 = 6.
Inductive proof
How do we show that a function does what it is supposed to do? We could test it,
running it thousands or millions of times and verifying that its output is what we
expect, but this requires us to come up with an independent way to define what the
function does (e.g., a different way of computing the factorial), which might itself be
incorrect, and furthermore, repeated testing can only ever give us
a statistical confidence that our algorithm is correct. If we want to be sure, then we
need a logical, or mathematical proof that it is correct. For recursive functions, this
often takes the form of proof by induction. An inductive proof is kind of the
mathematical equivalent to a recursive function. Like a recursive function it has
base case(s) (one base case, in fact, for every base case in the function), and the
base cases are usually easy. It also has inductive case(s) (one for each recursive case
in the function), which are somewhat more tricky, but allow us to do something like
recursion.
Consider the example above. We want to prove that fact(n) = n!, where n! is defined
by n! = n(n−1)(n−2)…(2)(1) and 0! = 1.
Proof by induction on n (whatever variable we do the recursion on, we say we are
doing "proof by induction" on that variable): in the recursive case, fact(n) returns
n⋅fact(n−1), so we need

n⋅fact(n−1) = n! = n(n−1)(n−2)…(2)(1)

that is,

fact(n−1) = (n−1)(n−2)…(2)(1) = (n−1)!
In other words, we have reduced the problem of proving that fact(n) = n! to the
problem of proving that fact(n-1) = (n−1)!. That doesn't seem useful, but as
in a recursive function, where we can call the function itself with a smaller
argument, in an inductive proof we can reuse the proof itself as an assumption,
for a smaller n. We call this assumption the inductive hypothesis and it looks
like this:

Assume fact(n−1) = (n−1)!

which is exactly what we needed above! Substituting this in, the equation above
reduces to

(n−1)! = (n−1)!
Like recursion, the heart of an inductive proof is the act of applying the proof itself
as an assumption about "smaller" values (n′ < n). Technically, there are two kinds
of inductive proofs:
"Natural number induction" only lets us make the assumption
about n′ = n − 1. That is, we can only make the assumption about an "input"
that is one smaller than the original.
"Strong induction" lets us use any n′ < n. You can use strong induction
anywhere where you can use natural number induction, but it isn't always
required.
The integer exponent calculation
float powi(float b, int n) {
    if(n == 0)
        return 1;
    else if(n % 2 == 0) { // Even: b^n = (b^(n/2))^2
        float fp = powi(b, n / 2);
        return fp * fp;
    }
    else // Odd: b^n = b * b^(n-1)
        return b * powi(b, n - 1);
}
This has the same complexity as the loop-based version, and is arguably simpler. To
prove it correct by induction on n, we have one inductive case for each recursive case:
o Case 1, n is even. Then replacing the call to powi by its return value, we have to
show

powi(b, n/2)^2 = b^n

Since n is even, b^n = (b^(n/2))^2, so this becomes

powi(b, n/2)^2 = (b^(n/2))^2
powi(b, n/2) = b^(n/2)

which is exactly the inductive hypothesis, giving

b^(n/2) = b^(n/2)

o Case 2, n is odd. Here we have to show b⋅powi(b, n−1) = b⋅b^(n−1), which again
follows directly from the inductive hypothesis.
Mutual recursion
bool is_even(int n) {
    if(n == 0)
        return true;
    else if(n == 1)
        return false;
    else
        return is_odd(n - 1);
}

bool is_odd(int n) {
    if(n == 0)
        return false;
    else if(n == 1)
        return true;
    else
        return is_even(n - 1);
}
If we trace out the processing of determining is_even(4) , we'll see that it bounces
back and forth between is_even and is_odd .
Binary search
There are two base cases: when we find the item, or when the search space is
reduced to 0 (indicating that the item is not found).
The recursive case compares the value of the target to the value at the current
midpoint, and then reduces the size of the search space (by recursively
searching either the left or right sides).
template<typename T>
int binary_search(const std::vector<T>& data, const T& target,
                  int low, int high) {
    if(low > high)
        return -1; // Search space reduced to 0: not found
    int mid = low + (high - low) / 2;
    if(data.at(mid) == target)
        return mid;
    else if(data.at(mid) < target)
        return binary_search(data, target, mid + 1, high);
    else
        return binary_search(data, target, low, mid - 1);
}
Other examples: Counting the number of copies in a vector. For any vector-style
recursion, we need to keep track of our “starting place” within the vector. This is
because we can’t make the vector itself smaller, so we have to put a marker into it
showing where we are starting. We can do this in two ways, with an int
start parameter, or by using iterators.
template<typename T>
int count_copies(const std::vector<T>& data, const T& target, int start) {
    if(start == data.size())
        return 0;
    else
        return (data.at(start) == target ? 1 : 0)
               + count_copies(data, target, start + 1);
}
With iterators:

template<typename Iterator, typename T>
int count_copies(Iterator start, Iterator finish, const T& target) {
    if(start == finish)
        return 0;
    else
        return (*start == target ? 1 : 0)
               + count_copies(std::next(start), finish, target);
}
Sorting algorithms
There may be duplicates in the original; if so, there should be an equal number
of duplicates in the output.
There are some terms associated with sorting that it’s important to be aware of:
Stability – when the input sequence has element which compare as equal but
which are distinct (e.g., employees with identical names but otherwise different
people) the question arises as to whether, in the output sequence, they occur in
the same order as in the original. E.g., if employee “John Smith” #123 came
before “John Smith” #234 in the original sequence, then we say that a sort
is stable if #123 is before #234 in the result.
Adaptability – some input sequences are already partially (or completely) sorted;
an adaptable sorting algorithm will run faster (in big-O terms) on partially sorted
inputs. The optimal runtime for a completely sorted input is O(n), the time
that it takes to verify that the input is already sorted.
In-Place – an in-place sorting algorithm is one that needs no extra space (i.e., its
space complexity is O(1)) to sort. Some algorithms cannot be used in place,
and have to construct a separate output sequence of the same size as the
input, to write the results into.
Online – some datasets are too large to fit into memory at all. An offline sorting
algorithm is one that requires all its data to be accessible (in memory) before it
can begin. Online sorting algorithms can process their input piece by piece, so
they can sort data which is partially in memory, partially on disk
or other "slow" storage, without affecting their time complexity.
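The O(n) verification pass mentioned under adaptability can be sketched in C (is_sorted is a hypothetical helper name, not from the text):

```c
#include <stdbool.h>

/* Returns true if a[0..n-1] is in non-decreasing order: one O(n) pass. */
bool is_sorted(const int a[], int n) {
    for (int i = 1; i < n; i++)
        if (a[i - 1] > a[i])
            return false;
    return true;
}
```

An adaptable algorithm can effectively run this check first and stop early when the input is already in order.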
Selection sort
To selection sort a list of items, we first find the smallest item in the entire list,
and put it at the beginning.
Then we find the smallest item in everything after the first item, and put it
second.
Effectively, selection sort splits the list into the sorted region at the beginning, and
the unsorted region at the end. The sorted region grows, while the unsorted region
shrinks.
Selection sort is not stable.
template<typename Iterator>
void selection_sort(Iterator first, Iterator last) {
    for(Iterator it = first; it != last; ++it) {
        // Find smallest element in [it, last)
        Iterator smallest = it;
        for(Iterator jt = it; jt != last; ++jt)
            if(*jt < *smallest)
                smallest = jt;
        swap(*it, *smallest);
    }
}
Let’s trace through this on a small example to get a feel for how it works.
template<typename Iterator>
...
}
The base case is when last == first + 1 . That means there’s only 1 element, and
a 1-element list is always sorted.
The recursive case is when last - first > 1 . In that case, we recursively break it
down by
template<typename Iterator>
void selection_sort(Iterator first, Iterator last) {
    if(last - first == 1)
        return; // One element: already sorted
    else {
        // Find minimum
        Iterator smallest = first;
        for(Iterator it = first; it != last; ++it)
            if(*it < *smallest)
                smallest = it;
        swap(*smallest, *first);
        // Recursively sort everything after the minimum
        selection_sort(first + 1, last);
    }
}
Let’s draw the recursion tree for this. We won’t trace through the loop, we’ll just
assume (for now) that it works correctly.
DIAGRAM
3. Searching Algorithm
Binary Search to find the element “23” in a given list of numbers
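The search in the caption can be sketched in C as an iterative binary search; the sorted array used in the test is our own illustration, not the figure's data:

```c
/* Binary search over a sorted array; returns the index of target, or -1. */
int binary_search(const int a[], int n, int target) {
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (a[mid] == target)
            return mid;
        else if (a[mid] < target)
            low = mid + 1;   /* target can only be in the right half */
        else
            high = mid - 1;  /* target can only be in the left half */
    }
    return -1; /* search space reduced to 0: not found */
}
```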
Stack is a linear data structure that follows a particular order in which the operations
are performed. The order may be LIFO(Last In First Out) or FILO(First In Last Out).
Mainly the following basic operations are performed on the stack:
Push: Adds an item to the stack. If the stack is full, then it is said to be an
Overflow condition.
Pop: Removes an item from the stack. The items are popped in the reverse
order in which they are pushed. If the stack is empty, then it is said to be an
Underflow condition.
Peek or Top: Returns the top element of the stack.
isEmpty: Returns true if the stack is empty, else false.
Applications of stack:
Balancing of symbols
Infix to Postfix /Prefix conversion
Redo-undo features at many places like editors, Photoshop.
Forward and backward feature in web browsers
Used in many algorithms like Tower of Hanoi, tree traversals, stock span
problem, histogram problem.
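The first application listed, balancing of symbols, can be sketched in C with a simple character stack (a minimal sketch; the fixed 256-slot stack and the name is_balanced are our own assumptions):

```c
#include <stdbool.h>

/* Returns true if every ( [ { in s is matched and properly nested. */
bool is_balanced(const char *s) {
    char stack[256];            /* assumption: nesting depth <= 256 */
    int top = -1;
    for (; *s; s++) {
        if (*s == '(' || *s == '[' || *s == '{') {
            if (top == 255) return false;  /* overflow: treat as unbalanced */
            stack[++top] = *s;             /* push the opener */
        } else if (*s == ')' || *s == ']' || *s == '}') {
            if (top < 0) return false;     /* underflow: no opener to match */
            char open = stack[top--];      /* pop the most recent opener */
            if ((*s == ')' && open != '(') ||
                (*s == ']' && open != '[') ||
                (*s == '}' && open != '{'))
                return false;
        }
    }
    return top == -1; /* balanced only if every opener was closed */
}
```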
Backtracking is one of the algorithm designing techniques. Some examples of
backtracking are the Knight's Tour problem, the N-Queens problem, finding your way
through a maze, and games like chess or checkers. In all these problems, we go
down one path; if that path turns out not to work, we come back to the previous state
and try another path. To get back from the current state, we need to store
the previous state, and for that purpose we need a stack.
In Graph Algorithms like Topological Sorting and Strongly Connected
Components
In memory management: any modern computer uses a stack to manage its
running functions (the call stack). Each program that is running in a
computer system has its own memory allocations.
String reversal is also another application of stack. Here, each character gets
inserted into the stack one by one. So the first character of the string is at
the bottom of the stack and the last character of the string is on the top of the
stack. After performing the pop operations on the stack, we get the string in
reverse order.
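The reversal just described can be sketched in C: push every character onto a stack, then pop them all back (the fixed stack size and the name reverse_string are our own assumptions):

```c
#include <string.h>

/* Reverse s in place via a character stack: the last character pushed
   is the first one popped. Assumes strings shorter than 256 characters. */
void reverse_string(char *s) {
    size_t n = strlen(s);
    char stack[256];
    int top = -1;
    for (size_t i = 0; i < n && top < 255; i++)
        stack[++top] = s[i];       /* push each character */
    for (size_t i = 0; i < n; i++)
        s[i] = stack[top--];       /* pop: comes back in reverse order */
}
```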
Implementation:
There are two ways to implement a stack:
Using array
Using linked list
/* C++ program to demonstrate basic stack operations */
#include <bits/stdc++.h>
using namespace std;

#define MAX 1000

class Stack {
    int top;
    int a[MAX]; // Maximum size of Stack

public:
    Stack() { top = -1; }
    bool push(int x);
    int pop();
    int peek();
    bool isEmpty();
};
bool Stack::push(int x)
{
    if (top >= (MAX - 1)) {
        cout << "Stack Overflow";
        return false;
    }
    else {
        a[++top] = x;
        return true;
    }
}
int Stack::pop()
{
    if (top < 0) {
        cout << "Stack Underflow";
        return 0;
    }
    else {
        int x = a[top--];
        return x;
    }
}
int Stack::peek()
{
    if (top < 0) {
        cout << "Stack is Empty";
        return 0;
    }
    else {
        int x = a[top];
        return x;
    }
}
bool Stack::isEmpty()
{
    return (top < 0);
}
int main()
{
    Stack s;
    s.push(10);
    s.push(20);
    s.push(30);
    while(!s.isEmpty())
    {
        cout << s.peek() << " ";
        s.pop();
    }
    return 0;
}
1. Enqueue: Add an element to the end of the queue, i.e. at arr[rear], if the queue is
not full.
2. Dequeue: Remove the front element. All remaining elements are shifted to the
left by one position in order for the dequeue operation to delete the second
element from the left on another dequeue operation.
3. Front: Get the front element from the queue i.e. arr[front] if queue is not empty.
4. Display: Print all elements of the queue. If the queue is non-empty, traverse and
print all the elements from index front to rear.
// C++ program to implement a queue using an array
#include <bits/stdc++.h>
using namespace std;

struct Queue {
    int front, rear, capacity;
    int* queue;

    Queue(int c)
    {
        front = rear = 0;
        capacity = c;
        queue = new int[c];
    }

    // function to insert an element
    // at the rear of the queue
    void queueEnqueue(int data)
    {
        // check if the queue is full
        if (capacity == rear) {
            printf("\nQueue is full\n");
            return;
        }
        // insert the element at the rear
        else {
            queue[rear] = data;
            rear++;
        }
        return;
    }

    // function to delete an element
    // from the front of the queue
    void queueDequeue()
    {
        // if queue is empty
        if (front == rear) {
            printf("\nQueue is empty\n");
            return;
        }
        // shift all remaining elements
        // one position to the left
        else {
            for (int i = 0; i < rear - 1; i++)
                queue[i] = queue[i + 1];
            // decrement rear
            rear--;
        }
        return;
    }

    // print queue elements
    void queueDisplay()
    {
        int i;
        if (front == rear) {
            printf("\nQueue is Empty\n");
            return;
        }
        // traverse front to rear and print elements
        for (i = front; i < rear; i++) {
            printf(" %d <-- ", queue[i]);
        }
        return;
    }

    // print front of queue
    void queueFront()
    {
        if (front == rear) {
            printf("\nQueue is Empty\n");
            return;
        }
        printf("\nFront Element is: %d", queue[front]);
        return;
    }
};

// Driver code
int main(void)
{
    // create a queue of capacity 4
    Queue q(4);

    // print queue elements
    q.queueDisplay();

    // inserting elements in the queue
    q.queueEnqueue(20);
    q.queueEnqueue(30);
    q.queueEnqueue(40);
    q.queueEnqueue(50);

    // print queue elements
    q.queueDisplay();

    // insert element in the queue
    q.queueEnqueue(60);

    // print queue elements
    q.queueDisplay();

    q.queueDequeue();
    q.queueDequeue();

    // print queue elements
    q.queueDisplay();

    // print front of the queue
    q.queueFront();

    return 0;
}
A circular queue is the extended version of a regular queue, where the last
element is connected to the first element, thus forming a circle-like structure.
The circular queue solves the major limitation of the normal queue. In a normal
queue, after a bit of insertion and deletion, there will be non-usable empty space.
Limitation of the regular Queue
Here, indexes 0 and 1 can only be used after resetting the queue (deletion of all
elements). This reduces the actual size of the queue.
Here, the circular increment is performed by modulo division with the queue size.
That is, after each increment the index wraps around:

rear = (rear + 1) % size
The circular queue works as follows:
1. Enqueue Operation
2. Dequeue Operation
However, the check for a full queue has a new additional case:
Case 1: FRONT == 0 && REAR == SIZE - 1
Case 2: FRONT == REAR + 1
The second case happens when REAR starts from 0 due to circular increment; when
its value is just 1 less than FRONT, the queue is full.
Circular Queue Implementation in Python
The most common queue implementation is using arrays, but it can also be
implemented using linked lists.
Python

class MyCircularQueue():

    def __init__(self, k):
        self.k = k
        self.queue = [None] * k
        self.head = self.tail = -1

    # Insert an element into the circular queue
    def enqueue(self, data):
        if ((self.tail + 1) % self.k == self.head):
            print("The circular queue is full\n")
        elif (self.head == -1):
            self.head = self.tail = 0
            self.queue[self.tail] = data
        else:
            self.tail = (self.tail + 1) % self.k
            self.queue[self.tail] = data

    # Delete an element from the circular queue
    def dequeue(self):
        if (self.head == -1):
            print("The circular queue is empty\n")
        elif (self.head == self.tail):
            temp = self.queue[self.head]
            self.head = self.tail = -1
            return temp
        else:
            temp = self.queue[self.head]
            self.head = (self.head + 1) % self.k
            return temp

    def printCQueue(self):
        if (self.head == -1):
            print("No element in the circular queue")
        elif (self.tail >= self.head):
            for i in range(self.head, self.tail + 1):
                print(self.queue[i], end=" ")
            print()
        else:
            for i in range(self.head, self.k):
                print(self.queue[i], end=" ")
            for i in range(0, self.tail + 1):
                print(self.queue[i], end=" ")
            print()


# Driver code
obj = MyCircularQueue(5)
obj.enqueue(1)
obj.enqueue(2)
obj.enqueue(3)
obj.printCQueue()

obj.dequeue()
print("After removing an element from the queue")
obj.printCQueue()
7. Implementation of Stack using Linked List.
The major problem with the stack implemented using an array is, it works only for a
fixed number of data values. That means the amount of data must be specified at
the beginning of the implementation itself. Stack implemented using an array is not
suitable, when we don't know the size of data which we are going to use. A stack data
structure can be implemented by using a linked list data structure. The stack
implemented using linked list can work for an unlimited number of values. That
means, stack implemented using linked list works for the variable size of data. So,
there is no need to fix the size at the beginning of the implementation. The Stack
implemented using linked list can organize as many data values as we want.
In a linked list implementation of a stack, every new element is inserted as the 'top'
element. That means every newly inserted element is pointed to by 'top'. Whenever we
want to remove an element from the stack, simply remove the node pointed to
by 'top', moving 'top' to its previous node in the list. The next field of the first
element must always be NULL.
Example
In the above example, the last inserted node is 99 and the first inserted node is 25.
To implement a stack using a linked list, we need to set the following things before
implementing actual operations.
Step 1 - Include all the header files which are used in the program, and
declare all the user-defined functions.
Step 2 - Define a 'Node' structure with two members data and next.
Step 3 - Define a Node pointer 'top' and set it to NULL.
We can use the following steps to insert a new node into the stack...
Step 1 - Create a newNode with the given value.
Step 2 - Check whether the stack is Empty (top == NULL).
Step 3 - If it is Empty, set newNode → next = NULL.
Step 4 - If it is Not Empty, set newNode → next = top.
Step 5 - Finally, set top = newNode.
We can use the following steps to delete a node from the stack...
Step 1 - Check whether the stack is Empty (top == NULL).
Step 2 - If it is Empty, display "Stack is Empty!!!" and terminate the function.
Step 3 - If it is Not Empty, then define a Node pointer 'temp' and set it to
'top'.
Step 4 - Then set 'top = top → next'.
Step 5 - Finally, delete 'temp' (free(temp)).
We can use the following steps to display the elements (nodes) of a stack...
Step 1 - Check whether the stack is Empty (top == NULL).
Step 2 - If it is Empty, display "Stack is Empty!!!" and terminate the
function.
Step 3 - If it is Not Empty, then define a Node pointer 'temp' and initialize
it with top.
Step 4 - Display 'temp → data --->' and move it to the next node. Repeat the
same until temp reaches the first node in the stack (while temp →
next != NULL).
Step 5 - Finally, display 'temp → data ---> NULL'.
#include<stdio.h>
#include<stdlib.h>
#include<conio.h>
struct Node
{
int data;
struct Node *next;
}*top = NULL;
void push(int);
void pop();
void display();
void main()
{
int choice, value;
clrscr();
printf("\n:: Stack using Linked List ::\n");
while(1){
printf("\n****** MENU ******\n");
printf("1. Push\n2. Pop\n3. Display\n4. Exit\n");
printf("Enter your choice: ");
scanf("%d",&choice);
switch(choice){
case 1: printf("Enter the value to be insert: ");
scanf("%d", &value);
push(value);
break;
case 2: pop(); break;
case 3: display(); break;
case 4: exit(0);
default: printf("\nWrong selection!!! Please try again!!!\n");
}
}
}
void push(int value)
{
struct Node *newNode;
newNode = (struct Node*)malloc(sizeof(struct Node));
newNode->data = value;
if(top == NULL)
newNode->next = NULL;
else
newNode->next = top;
top = newNode;
printf("\nInsertion is Success!!!\n");
}
void pop()
{
if(top == NULL)
printf("\nStack is Empty!!!\n");
else{
struct Node *temp = top;
printf("\nDeleted element: %d", temp->data);
top = temp->next;
free(temp);
}
}
void display()
{
if(top == NULL)
printf("\nStack is Empty!!!\n");
else{
struct Node *temp = top;
while(temp->next != NULL){
printf("%d--->",temp->data);
temp = temp -> next;
}
printf("%d--->NULL",temp->data);
}
}
In a Queue data structure, we maintain two pointers, front and rear.
The front points to the first item of the queue and rear points to the last item.
enQueue() This operation adds a new node after rear and moves rear to the next
node.
deQueue() This operation removes the front node and moves front to the next
node.
#include <bits/stdc++.h>
using namespace std;

struct QNode {
    int data;
    QNode* next;
    QNode(int d)
    {
        data = d;
        next = NULL;
    }
};
struct Queue {
    QNode *front, *rear;
    Queue()
    {
        front = rear = NULL;
    }

    void enQueue(int x)
    {
        // Create a new LL node
        QNode* temp = new QNode(x);

        // If queue is empty, then
        // the new node is both front and rear
        if (rear == NULL) {
            front = rear = temp;
            return;
        }

        // Add the new node at
        // the end of queue and change rear
        rear->next = temp;
        rear = temp;
    }

    // Function to remove
    // a key from the given queue
    void deQueue()
    {
        // If queue is empty, return
        if (front == NULL)
            return;

        // Store previous front and
        // move front one node ahead
        QNode* temp = front;
        front = front->next;

        // If front becomes NULL, then
        // change rear also to NULL
        if (front == NULL)
            rear = NULL;

        delete (temp);
    }
};
// Driver Program
int main()
{
    Queue q;
    q.enQueue(10);
    q.enQueue(20);
    q.deQueue();
    q.deQueue();
    q.enQueue(30);
    q.enQueue(40);
    q.enQueue(50);
    q.deQueue();
    cout << "Queue Front : " << (q.front)->data << endl;
    cout << "Queue Rear : " << (q.rear)->data << endl;
    return 0;
}
Output:
Queue Front : 40
Queue Rear : 50
Prerequisite – Queues
Circular Queue is a linear data structure in which the operations are performed
based on FIFO (First In First Out) principle and the last position is connected back to
the first position to make a circle. It is also called ‘Ring Buffer’.
In a normal queue, we can insert elements until the queue becomes full. But once
the queue becomes full, we cannot insert the next element even if there is space in
front of the queue.
// C or C++ program for insertion and
// deletion in a Circular Queue
#include<bits/stdc++.h>
using namespace std;

class Queue
{
    // Initialize front and rear
    int rear, front;

    // Circular Queue
    int size;
    int *arr;

public:
    Queue(int s)
    {
        front = rear = -1;
        size = s;
        arr = new int[s];
    }

    void enQueue(int value);
    int deQueue();
    void displayQueue();
};
// Function to insert an element
// in the Circular Queue
void Queue::enQueue(int value)
{
    if ((front == 0 && rear == size-1) ||
            (rear == (front-1)%(size-1)))
    {
        printf("\nQueue is Full");
        return;
    }
    else if (front == -1) // Insert the first element
    {
        front = rear = 0;
        arr[rear] = value;
    }
    else if (rear == size-1 && front != 0) // Wrap rear around
    {
        rear = 0;
        arr[rear] = value;
    }
    else // Normal case: advance rear
    {
        rear++;
        arr[rear] = value;
    }
}
// Function to delete an element
// from the Circular Queue
int Queue::deQueue()
{
    if (front == -1)
    {
        printf("\nQueue is Empty");
        return INT_MIN;
    }

    int data = arr[front];
    arr[front] = -1;
    if (front == rear) // Queue becomes empty
    {
        front = -1;
        rear = -1;
    }
    else if (front == size-1) // Wrap front around
        front = 0;
    else
        front++;

    return data;
}
// Function displaying the elements
// of the Circular Queue
void Queue::displayQueue()
{
    if (front == -1)
    {
        printf("\nQueue is Empty");
        return;
    }
    printf("\nElements in Circular Queue are: ");
    if (rear >= front)
    {
        for (int i = front; i <= rear; i++)
            printf("%d ", arr[i]);
    }
    else // Queue wraps around the end of the array
    {
        for (int i = front; i < size; i++)
            printf("%d ", arr[i]);
        for (int i = 0; i <= rear; i++)
            printf("%d ", arr[i]);
    }
}
/* Driver of the program */
int main()
{
    Queue q(5);

    // Inserting elements in Circular Queue
    q.enQueue(14);
    q.enQueue(22);
    q.enQueue(13);
    q.enQueue(-6);

    // Display elements present in Circular Queue
    q.displayQueue();

    // Deleting elements from Circular Queue
    printf("\nDeleted value = %d", q.deQueue());
    printf("\nDeleted value = %d", q.deQueue());

    q.displayQueue();

    q.enQueue(9);
    q.enQueue(20);
    q.enQueue(5);

    q.displayQueue();

    q.enQueue(20);
    return 0;
}
10. Implementation of Tree Structures, Binary Tree, Tree Traversal, Binary
Search Tree, Insertion and Deletion
We have read about the linear data structures like the array, linked list, stack and
queue, in which all the elements are arranged in a sequential manner. Different data
structures are used for different kinds of data.
A tree is also one of the data structures that represent hierarchical data. Suppose we
want to show the employees and their positions in the hierarchical form then it can
be represented as shown below:
The above tree shows the organization hierarchy of some company. In the above
structure, John is the CEO of the company, and John has two direct reports named
Steve and Rohan. Steve has three direct reports named Lee, Bob and
Ella, where Steve is a manager. Bob has two direct reports
named Sal and Emma. Emma has two direct reports named Tom and Raj. Tom has
one direct report named Bill. This particular logical structure is known as a Tree. Its
structure is similar to the real tree, so it is named a Tree. In this structure, the root is
at the top, and its branches are moving in a downward direction. Therefore, we can
say that the Tree data structure is an efficient way of storing the data in a hierarchical
way.
o A tree data structure is defined as a collection of objects or entities known as
nodes that are linked together to represent or simulate hierarchy.
o A tree data structure is a non-linear data structure because it does not store
in a sequential manner. It is a hierarchical structure as elements in a Tree are
arranged in multiple levels.
o In the Tree data structure, the topmost node is known as a root node. Each
node contains some data, and data can be of any type. In the above tree
structure, the node contains the name of the employee, so the type of data
would be a string.
o Each node contains some data and the link or reference of other nodes that
can be called children.
In the above structure, each node is labeled with some number. Each arrow shown
in the above figure is known as a link between the two nodes.
o Root: The root node is the topmost node in the tree hierarchy. In other words,
the root node is the one that doesn't have any parent. In the above structure,
node numbered 1 is the root node of the tree. If a node is directly linked to
some other node, it would be called a parent-child relationship.
o Child node: If the node is a descendant of any node, then the node is known
as a child node.
o Parent: If the node contains any sub-node, then that node is said to be the
parent of that sub-node.
o Sibling: The nodes that have the same parent are known as siblings.
o Leaf Node:- The node of the tree, which doesn't have any child node, is called
a leaf node. A leaf node is the bottom-most node of the tree. There can be any
number of leaf nodes present in a general tree. Leaf nodes can also be called
external nodes.
o Internal nodes: A node that has at least one child node is known as an internal
node.
o Ancestor node:- An ancestor of a node is any predecessor node on a path
from the root to that node. The root node doesn't have any ancestors. In the
tree shown in the above image, nodes 1, 2, and 5 are the ancestors of node
10.
o Descendant: The immediate successor of the given node is known as a
descendant of a node. In the above figure, 10 is the descendant of node 5.
o Number of edges: If there are n nodes, then there will be n-1 edges. Each
arrow in the structure represents a link or path. Each node, except the root
node, has exactly one incoming link known as an edge: the link of its
parent-child relationship.
o Depth of node x: The depth of node x can be defined as the length of the path
from the root to the node x. One edge contributes one-unit length in the path.
So, the depth of node x can also be defined as the number of edges between
the root node and the node x. The root node has 0 depth.
o Height of node x: The height of node x can be defined as the longest path
from the node x to the leaf node.
Based on the properties of the Tree data structure, trees are classified into various
categories.
Implementation of Tree
The tree data structure can be created by creating the nodes dynamically with the
help of the pointers. The tree in the memory can be represented as shown below:
The above figure shows the representation of the tree data structure in the memory.
In the above structure, the node contains three fields. The second field stores the
data; the first field stores the address of the left child, and the third field stores the
address of the right child.
struct node
{
    int data;
    struct node *left;
    struct node *right;
};
The above structure can only be defined for binary trees, because a binary tree can
have at most two children while generic trees can have more than two. The structure
of the node for generic trees would therefore be different from that of the binary
tree.
Applications of trees
The following are the applications of trees:
o Storing naturally hierarchical data: Trees are used to store the data in the
hierarchical structure. For example, the file system. The file system stored on
the disc drive, the file and folder are in the form of the naturally hierarchical
data and stored in the form of trees.
o Organize data: It is used to organize data for efficient insertion, deletion and
searching. For example, a binary tree has a logN time for searching an
element.
o Trie: It is a special kind of tree that is used to store the dictionary. It is a fast
and efficient way for dynamic spell checking.
o Heap: It is also a tree data structure implemented using arrays. It is used to
implement priority queues.
o B-Tree and B+Tree: B-Tree and B+Tree are the tree data structures used to
implement indexing in databases.
o Routing table: The tree data structure is also used to store the data in routing
tables in the routers.
Types of Tree data structure
o General tree: The general tree is one of the types of tree data structure. In a general tree, a node can have anywhere from 0 up to n child nodes; there is no restriction imposed on the degree of a node (the number of children a node can have). The topmost node in a general tree is known as the root node, and the trees rooted at the children of a node are known as its subtrees.
There can be n number of subtrees in a general tree. In a general tree, the subtrees are unordered, as the nodes of the subtrees cannot be ordered.
Every non-empty tree has downward edges, and the nodes these edges lead to are known as child nodes. The root node is labeled with level 0. Nodes that have the same parent are known as siblings.
o Binary tree: The name "binary" itself suggests two; in a binary tree, each node can have at most two child nodes. Here, "at most" means a node may have 0, 1 or 2 child nodes.
A node can be created with the help of a user-defined data type known as struct, as
shown below:
struct node
{
    int data;
    struct node *left;
    struct node *right;
};

The above is the node structure with three fields: the data field, a left pointer of node type, and a right pointer of node type.
Unlike linear data structures (arrays, linked lists, queues, stacks, etc.), which have only one logical way to traverse them, trees can be traversed in different ways. The following are the generally used ways of traversing trees.
Depth First Traversals (for the example tree with root 1, whose children are 2 and 3, and where 4 and 5 are the children of 2):
(a) Inorder (Left, Root, Right): 4 2 5 1 3
(b) Preorder (Root, Left, Right): 1 2 4 5 3
(c) Postorder (Left, Right, Root): 4 5 2 3 1
Breadth-First or Level Order Traversal: 1 2 3 4 5
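The level-order traversal listed above can be sketched in C with a simple array-based FIFO queue. This is an illustrative version (the names `levelOrder` and `QSIZE` and the output array are assumptions of this sketch, and the fixed-size queue assumes a small tree):

```c
#include <stdlib.h>

struct node {
    int data;
    struct node *left;
    struct node *right;
};

#define QSIZE 100

/* Breadth-first (level order) traversal: visits nodes level by level,
   left to right. Fills 'out' with the visit order, returns node count. */
int levelOrder(struct node *root, int out[]) {
    struct node *queue[QSIZE];
    int front = 0, rear = 0, k = 0;
    if (root == NULL)
        return 0;
    queue[rear++] = root;                 /* enqueue the root */
    while (front < rear) {
        struct node *n = queue[front++];  /* dequeue the next node */
        out[k++] = n->data;               /* visit it */
        if (n->left)  queue[rear++] = n->left;   /* enqueue its children */
        if (n->right) queue[rear++] = n->right;
    }
    return k;
}
```

For the example tree (root 1, children 2 and 3, with 4 and 5 under 2) this yields the sequence 1 2 3 4 5 shown above.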
Algorithm (Postorder):
1. Traverse the left subtree.
2. Traverse the right subtree.
3. Visit the root.
Uses of Postorder
Postorder traversal is used to delete a tree, since a node can only be freed after both of its subtrees have been processed. Postorder traversal is also useful for getting the postfix expression of an expression tree; see http://en.wikipedia.org/wiki/Reverse_Polish_notation for the usage of postfix expressions.
Example: The postorder traversal for the example tree is 4 5 2 3 1.
#include <iostream>
using namespace std;

/* A binary tree node */
struct Node {
    int data;
    Node *left, *right;
};

/* Helper that allocates a new node with the given data and NULL children */
Node* newNode(int data)
{
    Node* temp = new Node();
    temp->data = data;
    temp->left = temp->right = NULL;
    return temp;
}

/* Print the nodes of a binary tree in postorder */
void printPostorder(Node* node)
{
    if (node == NULL)
        return;
    // first recur on left subtree
    printPostorder(node->left);
    // then recur on right subtree
    printPostorder(node->right);
    // now deal with the node itself
    cout << node->data << " ";
}

/* Print the nodes of a binary tree in inorder */
void printInorder(Node* node)
{
    if (node == NULL)
        return;
    /* first recur on left child */
    printInorder(node->left);
    /* then print the data of node */
    cout << node->data << " ";
    /* now recur on right child */
    printInorder(node->right);
}

/* Print the nodes of a binary tree in preorder */
void printPreorder(Node* node)
{
    if (node == NULL)
        return;
    /* first print the data of node */
    cout << node->data << " ";
    /* then recur on left subtree */
    printPreorder(node->left);
    /* now recur on right subtree */
    printPreorder(node->right);
}

int main()
{
    Node* root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);

    cout << "Preorder traversal of binary tree is \n";
    printPreorder(root);
    cout << "\nInorder traversal of binary tree is \n";
    printInorder(root);
    cout << "\nPostorder traversal of binary tree is \n";
    printPostorder(root);
    return 0;
}
Insertion:
When no position is specified, it is best to insert the element at the end so that no existing elements need to be shifted; this is when we achieve the best runtime, O(1).
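This behaviour is easy to see with a plain C array: an append at index n shifts nothing, while an insertion at an earlier position must shift every later element one step to the right. The helper below is a hypothetical sketch (the name `insert_at` is not from the book):

```c
/* Insert 'value' at index 'pos' in arr[0..*n-1], shifting later elements
   right. When pos == *n (an append), the loop body never runs: O(1).
   The caller must ensure the array has room for one more element. */
void insert_at(int arr[], int *n, int pos, int value) {
    for (int i = *n; i > pos; i--)
        arr[i] = arr[i - 1];   /* shift right; skipped entirely for an append */
    arr[pos] = value;
    (*n)++;
}
```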
11. Graph Implementation, BFS, DFS, Minimum cost spanning tree, shortest path
algorithm.
A Graph is a non-linear data structure consisting of nodes and edges. The nodes are sometimes also
referred to as vertices and the edges are lines or arcs that connect any two nodes in the graph. More
formally a Graph can be defined as,
A Graph consists of a finite set of vertices(or nodes) and set of Edges which connect a pair of
nodes.
In the above Graph, the set of vertices V = {0,1,2,3,4} and the set of edges E = {01, 12, 23, 34, 04,
14, 13}.
Graphs are used to solve many real-life problems and to represent networks, such as the paths in a city, a telephone network or a circuit network. Graphs are also used in social networks like LinkedIn and Facebook. For example, in Facebook each person is represented by a vertex (or node); each node is a structure and contains information like person id, name, gender, locale etc.
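As a minimal sketch (the helper names here are hypothetical), the graph above, with V = {0,1,2,3,4} and E = {01, 12, 23, 34, 04, 14, 13}, can be represented in C as an undirected adjacency matrix:

```c
#define N 5   /* number of vertices */

/* adj[i][j] == 1 when an undirected edge connects vertices i and j. */
void add_edge(int adj[N][N], int u, int v) {
    adj[u][v] = 1;
    adj[v][u] = 1;   /* undirected: store the edge in both directions */
}

/* Degree of a vertex: the number of edges incident on it. */
int degree(int adj[N][N], int u) {
    int d = 0;
    for (int v = 0; v < N; v++)
        d += adj[u][v];
    return d;
}
```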
In this code, we are using an adjacency list to represent our graph. The adjacency list works well with a Breadth-First Search implementation in Java, since once a node is dequeued from the head (or front) of the queue, we only have to walk through the list of nodes attached to it. In this example, the graph that we are using to demonstrate the code is given as follows -
import java.io.*;
import java.util.*;

public class BFSTraversal
{
    private int vertex;                 /* total number of vertices in the graph */
    private LinkedList<Integer> adj[];  /* adjacency list */
    private Queue<Integer> que;         /* maintaining a queue */

    BFSTraversal(int v)
    {
        vertex = v;
        adj = new LinkedList[vertex];
        for (int i = 0; i < v; i++)
        {
            adj[i] = new LinkedList<>();
        }
        que = new LinkedList<Integer>();
    }

    void insertEdge(int v, int w)
    {
        adj[v].add(w); /* adding a directed edge to the adjacency list */
    }

    void BFS(int n)
    {
        boolean nodes[] = new boolean[vertex]; /* visited flags for the vertices */
        int a = 0;
        nodes[n] = true;
        que.add(n); /* the starting node is added to the queue */
        while (que.size() != 0)
        {
            n = que.poll(); /* remove the element at the front of the queue */
            System.out.print(n + " "); /* print it */
            /* iterate through the linked list and push all unvisited neighbours into the queue */
            for (int i = 0; i < adj[n].size(); i++)
            {
                a = adj[n].get(i);
                if (!nodes[a]) /* only insert nodes into the queue if they have not been explored already */
                {
                    nodes[a] = true;
                    que.add(a);
                }
            }
        }
    }

    public static void main(String args[])
    {
        BFSTraversal graph = new BFSTraversal(10);
        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);
        graph.insertEdge(6, 5);
        graph.insertEdge(7, 5);
        graph.insertEdge(7, 8);
        System.out.println("Breadth First Traversal for the graph is:");
        graph.BFS(2);
    }
}
Output
Breadth First Traversal for the graph is:
2 4 7 5 8
Conclusion
In this article, we have discussed the breadth-first search technique along with its example, complexity, and implementation in the Java programming language. We have also seen real-life applications of BFS that show why it is an important graph algorithm.
The data structure used in DFS is the stack. The process is similar to the BFS algorithm. In DFS, the edges that lead to an unvisited node are called discovery edges, while the edges that lead to an already visited node are called back edges.
Algorithm
o Step 1: SET STATUS = 1 (ready state) for each node in G
o Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
o Step 3: Repeat Steps 4 and 5 until STACK is empty
o Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
o Step 5: Push onto the stack all the neighbours of N that are in the ready state (whose STATUS = 1) and set their STATUS = 2 (waiting state)
[END OF LOOP]
o Step 6: EXIT
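The steps above can be sketched in C over an adjacency matrix with an explicit stack. This is an illustrative version (the names `dfs`, `MAXV` and the `order` array are assumptions of this sketch); the STATUS codes follow the algorithm: 1 = ready, 2 = waiting, 3 = processed.

```c
#define MAXV 10

/* Iterative DFS on an adjacency-matrix graph of 'n' vertices.
   Records the visit order in 'order' and returns the visit count.
   STATUS values: 1 = ready, 2 = waiting (on stack), 3 = processed. */
int dfs(int n, int adj[MAXV][MAXV], int start, int order[]) {
    int status[MAXV], stack[MAXV], top = -1, k = 0;
    for (int i = 0; i < n; i++)          /* Step 1: all nodes ready */
        status[i] = 1;
    stack[++top] = start;                /* Step 2: push the start node */
    status[start] = 2;
    while (top >= 0) {                   /* Step 3: until the stack is empty */
        int u = stack[top--];            /* Step 4: pop and process */
        order[k++] = u;
        status[u] = 3;
        for (int v = 0; v < n; v++)      /* Step 5: push ready neighbours */
            if (adj[u][v] && status[v] == 1) {
                stack[++top] = v;
                status[v] = 2;
            }
    }
    return k;
}
```

Because a node is marked waiting as soon as it is pushed, no node enters the stack twice.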
Example :
Consider the graph G along with its adjacency list, given in the figure below. Calculate the
order to print all the nodes of the graph starting from node H, by using depth first search
(DFS) algorithm.
Solution :
Push H onto the stack
1. STACK : H
Pop the top element of the stack, i.e. H, print it and push all the neighbours of H that are in the ready state onto the stack.
1. Print H
2. STACK : A
Pop the top element of the stack i.e. A, print it and push all the neighbours of A onto the
stack that are in ready state.
1. Print A
2. Stack : B, D
Pop the top element of the stack i.e. D, print it and push all the neighbours of D onto the
stack that are in ready state.
1. Print D
2. Stack : B, F
Pop the top element of the stack i.e. F, print it and push all the neighbours of F onto the
stack that are in ready state.
1. Print F
2. Stack : B
Pop the top of the stack i.e. B and push all the neighbours
1. Print B
2. Stack : C
Pop the top of the stack i.e. C and push all the neighbours.
1. Print C
2. Stack : E, G
Pop the top of the stack i.e. G and push all its neighbours.
1. Print G
2. Stack : E
Pop the top of the stack i.e. E and push all its neighbours.
1. Print E
2. Stack :
Hence, the stack now becomes empty and all the nodes of the graph have been traversed.
1. H → A → D → F → B → C → G → E
A minimum spanning tree is the spanning tree whose sum of edge weights is as small as possible. Let's understand the minimum spanning tree with the help of an example.
The sum of the edges of the above graph is 16. Now, some of the possible spanning trees
created from the above graph are -
So, the minimum spanning tree that is selected from the above spanning trees for the
given weighted graph is -
Algorithms for the minimum spanning tree
The following algorithms are used to find the minimum spanning tree of a weighted graph -
o Prim's Algorithm
o Kruskal's Algorithm
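As a sketch of the first of these two, Prim's algorithm can be implemented over an adjacency matrix in O(V^2) time. This is an illustrative version, not the book's listing (the names `prim_mst` and `PV` are assumptions); it grows the tree from vertex 0, always adding the cheapest edge that crosses from the tree to the rest of the graph:

```c
#include <limits.h>

#define PV 5   /* number of vertices */

/* Prim's algorithm: returns the total weight of a minimum spanning tree
   of a connected, undirected graph given as an adjacency matrix
   (an entry of 0 means "no edge"). */
int prim_mst(int graph[PV][PV]) {
    int key[PV];     /* cheapest known edge connecting vertex to the tree */
    int in_mst[PV];  /* 1 if the vertex is already in the tree */
    int total = 0;
    for (int i = 0; i < PV; i++) {
        key[i] = INT_MAX;
        in_mst[i] = 0;
    }
    key[0] = 0;                        /* start growing from vertex 0 */
    for (int count = 0; count < PV; count++) {
        int u = -1;
        for (int v = 0; v < PV; v++)   /* cheapest vertex not yet in the tree */
            if (!in_mst[v] && (u == -1 || key[v] < key[u]))
                u = v;
        in_mst[u] = 1;
        total += key[u];
        for (int v = 0; v < PV; v++)   /* relax edges leaving the new vertex */
            if (graph[u][v] && !in_mst[v] && graph[u][v] < key[v])
                key[v] = graph[u][v];
    }
    return total;
}
```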
Given a graph and a source vertex in the graph, find the shortest paths from the source to
all vertices in the given graph.
Dijkstra’s algorithm is very similar to Prim’s algorithm for the minimum spanning tree. Like Prim’s MST, we generate an SPT (shortest-path tree) with a given source as the root. We maintain two sets: one contains the vertices already included in the shortest-path tree, and the other contains the vertices not yet included. At every step of the algorithm, we find a vertex that is in the not-yet-included set and has the minimum distance from the source.
Below are the detailed steps used in Dijkstra’s algorithm to find the shortest path from a
single source vertex to all other vertices in the given graph.
Algorithm
1) Create a set sptSet (shortest path tree set) that keeps track of vertices included in the
shortest-path tree, i.e., whose minimum distance from the source is calculated and
finalized. Initially, this set is empty.
2) Assign a distance value to all vertices in the input graph. Initialize all distance values
as INFINITE. Assign distance value as 0 for the source vertex so that it is picked first.
3) While sptSet doesn’t include all vertices:
a) Pick a vertex u which is not in sptSet and has a minimum distance value.
b) Include u in sptSet.
c) Update the distance values of all adjacent vertices of u. To update the distance values, iterate through all adjacent vertices: for every adjacent vertex v, if the sum of the distance value of u (from the source) and the weight of edge u-v is less than the distance value of v, then update the distance value of v.
The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF, INF,
INF, INF, INF, INF} where INF indicates infinite. Now pick the vertex with a minimum
distance value. The vertex 0 is picked, include it in sptSet. So sptSet becomes {0}. After
including 0 to sptSet, update distance values of its adjacent vertices. Adjacent vertices of
0 are 1 and 7. The distance values of 1 and 7 are updated as 4 and 8. The following
subgraph shows vertices and their distance values, only the vertices with finite distance
values are shown. The vertices included in SPT are shown in green colour.
Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). The vertex 1 is picked and added to sptSet. So sptSet now becomes {0, 1}.
Update the distance values of adjacent vertices of 1. The distance value of vertex 2
becomes 12.
Pick the vertex with minimum distance value and not already included in SPT (not in
sptSET). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}. Update the distance values
of adjacent vertices of 7. The distance value of vertex 6 and 8 becomes finite (15 and 9
respectively).
Pick the vertex with minimum distance value and not already included in SPT (not in sptSET). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}. Update the distance values of the adjacent vertices of 6. The distance values of vertices 5 and 8 are updated.
We repeat the above steps until sptSet includes all vertices of the given graph. Finally, we
get the following Shortest Path Tree (SPT).
We use a boolean array sptSet[] to represent the set of vertices included in SPT. If a value
sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array dist[] is used to
store the shortest distance values of all vertices.
#include <iostream>
#include <limits.h>
using namespace std;

// Number of vertices in the graph
#define V 9

// A utility function to find the vertex with minimum distance value, from
// the set of vertices not yet included in the shortest-path tree
int minDistance(int dist[], bool sptSet[])
{
    int min = INT_MAX, min_index = 0;
    for (int v = 0; v < V; v++)
        if (!sptSet[v] && dist[v] <= min)
            min = dist[v], min_index = v;
    return min_index;
}

// A utility function to print the constructed distance array
void printSolution(int dist[])
{
    cout << "Vertex Distance from Source\n";
    for (int i = 0; i < V; i++)
        cout << i << " \t " << dist[i] << "\n";
}

// Dijkstra's single-source shortest path algorithm for a graph
// represented using an adjacency matrix
void dijkstra(int graph[V][V], int src)
{
    int dist[V];    // The output array. dist[i] will hold the shortest
                    // distance from src to i
    bool sptSet[V]; // true if vertex i is included in the shortest-
                    // path tree or shortest distance from src to i is finalized
    for (int i = 0; i < V; i++)
        dist[i] = INT_MAX, sptSet[i] = false;
    dist[src] = 0;  // distance of the source from itself is always 0
    for (int count = 0; count < V - 1; count++) {
        // Pick the minimum distance vertex from the set of vertices not
        // yet processed
        int u = minDistance(dist, sptSet);
        sptSet[u] = true;
        // Relax the edges leaving u
        for (int v = 0; v < V; v++)
            if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }
    printSolution(dist);
}

int main()
{
    int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
                        { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
                        { 0, 8, 0, 7, 0, 4, 0, 0, 2 },
                        { 0, 0, 7, 0, 9, 14, 0, 0, 0 },
                        { 0, 0, 0, 9, 0, 10, 0, 0, 0 },
                        { 0, 0, 4, 14, 10, 0, 2, 0, 0 },
                        { 0, 0, 0, 0, 0, 2, 0, 1, 6 },
                        { 8, 11, 0, 0, 0, 0, 1, 0, 7 },
                        { 0, 0, 2, 0, 0, 0, 6, 7, 0 } };
    dijkstra(graph, 0);
    return 0;
}
Output:
Vertex   Distance from Source
0        0
1        4
2        12
3        19
4        21
5        11
6        9
7        8
8        14