Data Structures Using C by Vishal Kushwaha

Name - Vishal Kushwaha

Subject - Data structures using c


Student ID – SIETM/0007793
Institute - Sunrise Institute of Engineering
Technology & Management Unnao,
Uttar Pradesh 209801

1. Sorting Algorithms-Non-Recursive.

As we know, the merge sort algorithm is an efficient sorting algorithm that enables
us to sort an array within O(n log n) time complexity, where n is the number of
values.
Usually, we find the recursive approach more widespread. Thus, let’s quickly
recall the steps of the recursive algorithm so that it’s easier for us to
understand the iterative one later. The recursive version is based on the divide-and-conquer strategy:

 Divide: In this step, we divide the input into two halves around the midpoint
of the array. This step is carried out recursively for all the half
arrays until there are no more halves to divide.
 Conquer: In this step, we sort and merge the divided parts from bottom
to top to get the complete sorted result.

3. The Iterative Approach

3.1. General Idea

As we showed in the recursive version, we divide the input into two halves. This
process continues until we reach each single element of the array. Then, we merge
the sorted parts from bottom to top until we get the complete result containing all
the values sorted.
As usual, when we’re trying to move from a recursive version to an iterative one,
we have to think in the opposite direction of the recursion. Let’s list a few
thoughts that will help us implement the iterative approach:

1. Consider each element of the array as a sorted part. As a start, this part
contains a single value.
2. In the second step, merge every two adjacent parts, so that we get sorted
parts of double the size. In the beginning, each part has two values. However, the last
part may contain fewer than two values if the number of parts is odd.
3. Keep performing steps 1 and 2 until the size of a part reaches the entire
array. By then, we can say that the result is sorted.

Now, we can jump into the implementation. However, to simplify the algorithm,
we’ll first implement a function responsible for merging two adjacent parts. After

that, we’ll see how to implement the complete algorithm based on the merge
function.

3.2. Merge Function

Let’s implement a simple function that merges two sorted parts and returns the
merged sorted array, which contains all elements in the first and second parts. Take
a look at the implementation:
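Since the referenced implementation is not reproduced here, a minimal sketch of such a merge function might look like this (the array and index names are illustrative, not the original’s):

```cpp
#include <vector>
using namespace std;

// Merge two already-sorted parts into one sorted array.
vector<int> merge_parts(const vector<int>& first, const vector<int>& second) {
    vector<int> answer;
    size_t i = 0, j = 0;
    // First loop: repeatedly take the smaller front value from either part.
    while (i < first.size() && j < second.size()) {
        if (first[i] <= second[j])
            answer.push_back(first[i++]);
        else
            answer.push_back(second[j++]);
    }
    // Second loop: copy any elements remaining in the first part.
    while (i < first.size())
        answer.push_back(first[i++]);
    // Third loop: copy any elements remaining in the second part.
    while (j < second.size())
        answer.push_back(second[j++]);
    return answer;
}
```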

In the merging function, we use three loops. The first one iterates over
the two parts together. In each step, we take the smaller of the two front values and
store it inside the array that will hold the final answer.
Once we add a value to the result, we move one step forward: an index
variable points to the position that should hold the next value to be added.
In the second loop, we iterate over the remaining elements from the first
part and store each value in the result. In the third loop, we perform a similar
operation to the second loop; however, here we iterate over the remaining
elements from the second part.
The second and third loops are needed because, after the first loop ends, we
might have remaining elements in one of the parts. Since all of these values are
larger than the ones already added, we should append them to the resulting answer.

The complexity of the merge function is O(n + m), where n is the length of the
first part and m is the length of the second one.
Note that the complexity of this function is linear in terms of the length of the
passed parts. However, it’s not linear compared to the full array, because we
might call the function to handle only a small part of it.

3.3. Merge Sort

Now let’s use the merge function to implement the merge sort iterative approach.
Take a look at the implementation:
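Since the referenced implementation is not reproduced here, a bottom-up (iterative) merge sort can be sketched as follows; the names are illustrative, and the adjacent-range merge is done inline through a temporary buffer:

```cpp
#include <vector>
#include <algorithm>
using namespace std;

// Bottom-up merge sort: part sizes double each pass (1, 2, 4, ...).
void merge_sort_iterative(vector<int>& arr) {
    int n = arr.size();
    vector<int> temp(n);
    for (int len = 1; len < n; len *= 2) {           // current part size
        for (int left = 0; left < n; left += 2 * len) {
            int mid = min(left + len, n);            // end of first part (clamped)
            int right = min(left + 2 * len, n);      // end of second part (clamped)
            // merge arr[left..mid) and arr[mid..right) into temp
            int i = left, j = mid, k = left;
            while (i < mid && j < right)
                temp[k++] = (arr[i] <= arr[j]) ? arr[i++] : arr[j++];
            while (i < mid) temp[k++] = arr[i++];
            while (j < right) temp[k++] = arr[j++];
            // copy the merged range back into the original array
            copy(temp.begin() + left, temp.begin() + right, arr.begin() + left);
        }
    }
}
```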

Firstly, we start with a part size of 1; this indicates the size of each part the
algorithm handles at the current step.
In each step, we iterate over all parts of the current size and calculate the beginning and
end of each pair of adjacent parts. Once we have determined both parts, we merge them
using the merge function defined earlier.
Note that we handle two special cases. The first one is when the midpoint of a pair
falls outside the array, while the second one is when the end does. The reason for
these cases is that the last part may contain fewer elements than the current part
size. Therefore, we adjust its size so that it doesn’t exceed the end of the array.
After the merging ends, we copy the elements from the temporary array into their
respective places in the original array.
Note that in each step, we double the length of a single part. The reason is
that we merged two parts of that length, so for the next step, we know that all parts
of twice the size are now sorted.
Finally, we return the sorted array.

The complexity of the iterative approach is O(n log n), where n is the
length of the array. The reason is that, in the outer loop, we double the part size
in each step, so the number of steps is O(log n). Also, in each step, we iterate over each
element inside the array and call the merge function for the complete array in
total. Thus, each step costs O(n).

4. Example

Let’s take an example to understand the iterative version more clearly. Suppose we
have an array of six values as follows:

Let’s apply the iterative algorithm to it.

In the first step, we merge the first element with the second one. As a result, we
get a new sorted part which contains the first two values: since the second value is
smaller than the first, we swap them.
Similarly, we perform this operation for the third and fourth elements. However, in this
case, they’re already sorted, and we keep their order. We continue this operation
for the fifth and sixth values as well.
After these steps, the array will have its parts of size 2 sorted as follows:

In the second step, the part size becomes 2. Hence, we have three parts, each
containing two elements. Similarly to the previous steps, we have to merge the
first and the second parts. Thus, we get a new part that contains the first four
values. For the third part, we just skip it because it was sorted in the previous step,
and we don’t have a fourth part to merge it with.
Take a look at the result after these steps:

In the last step, the part size becomes 4, and we have two parts. The first one contains
four elements, whereas the second one contains two. We call the merging function
for these parts.
After the merging ends, we get the final sorted answer:

5. Comparison

As usual, recursion reduces the size of the code, and it’s easier to think about and
implement. On the other hand, it takes more memory because it uses
the call stack, which is slower in execution.
For that reason, we prefer the iterative algorithm because of its speed and memory
savings. However, if we don’t care about execution time and memory, as when we have a
small array, for example, we can use the recursive version.

6. Conclusion

In this tutorial, we explained how to implement the merge sort algorithm using an
iterative approach. Firstly, we recalled the recursive version of the algorithm.
Then, we discussed the iterative version of this algorithm. Also, we explained the
reasoning behind its complexity.
After that, we provided a simple example to clarify the idea.

2. Sorting Algorithms-Recursive.

I’m going to present pretty much all of the sorting algorithms recursively, so we
should probably talk about recursion. Recursion is a really mind-expanding
technique, once you get the hang of it. It’s also the foundation for what could be
called the “mathematical interpretation” of computer programming, so if you’re a
CSci major, you’ll have to get comfortable with it sooner or later. So let’s look at
some simple algorithms, both iteratively (using loops) and recursively.

Finding the factorial

The factorial of n is defined as the product n(n−1)(n−2)…(2)(1), i.e.,
the product of all integers up to and including n. It’s easy to write as a loop:
int factorial_iter(int n) {
    int r = 1; // Factorial of 0 is 1
    for(int i = 1; i <= n; ++i)
        r *= i;
    return r;
}
To write this, or any other algorithm, recursively, we have to ask two questions:

 What is the smallest case, the case where I can give the answer right away? This
is called the “base case”. (Sometimes there might be more than one smallest
case, and that’s OK.)

 For anything that is not the smallest case, how do I break it down to make it
smaller? This is called the recursive case.

For the factorial, the base case is what happens when n = 0: the loop doesn’t run
at all, and 1 is returned. So we can start our recursive version with
int factorial_rec(int n) {
    if(n == 0)
        return 1;
    else
        ...
}

To construct the recursive case, we need to look at what happens when n > 0. In
particular, how can we break n! down into some n′!, with n′ < n? The most
common case is n′ = n − 1.
One way to look at this is to assume that we already have the value of (n−1)!,
and we want to get n! from it. That is, assume that factorial_rec(n - 1) will work
and give us the right answer; we just need to construct the factorial of n from it.
How can we do this? n! = n·(n−1)!. So we write our recursive case like this:
int fact(int n) {
    if(n == 0)
        return 1;
    else
        return n * fact(n - 1);
}

Let’s take a minute to walk through the process of computing fact(3) :

 fact(3) = 3 * fact(2)

 fact(2) = 2 * fact(1)

 fact(1) = 1 * fact(0)

 fact(0) = 1 and at this point we can work our way back up, giving the result 3 *
2 * 1 * 1 = 6.

Inductive proof

How do we show that a function does what it is supposed to do? We could test it,
running it thousands or millions of times and verifying that its output is what we
expect, but this requires us to come up with an independent way to define what the
function does (e.g., a different way of computing the factorial), which might itself be
incorrect, and furthermore, repeated testing can only ever give us
a statistical confidence that our algorithm is correct. If we want to be sure, then we
need a logical, or mathematical proof that it is correct. For recursive functions, this
often takes the form of proof by induction. An inductive proof is kind of the
mathematical equivalent to a recursive function. Like a recursive function it has
base case(s) (one base case, in fact, for every base case in the function), and the
base cases are usually easy. It also has inductive case(s) (one for each recursive case
in the function), which are somewhat more tricky, but allow us to do something like
recursion.

Consider the example above. We want to prove that fact(n) = n!, where n! is
defined by n! = n(n−1)(n−2)…(2)(1) and 0! = 1.
Proof by induction on n (whatever variable we do the recursion on, we say we are
doing “proof by induction” on that variable):

 Base case, n = 0. Then fact(0) = 1 = 0! and we are done.


 Inductive case: In the inductive case, we are trying to prove that fact(n) = n!
for some n > 0. We can break down the left and right sides and see that we
actually have

n·fact(n−1) = n! = n(n−1)(n−2)…(2)(1)

Dividing through by n we get

fact(n−1) = (n−1)(n−2)…(2)(1) = (n−1)!

In other words, we have reduced the problem of proving that fact(n) = n! to the
problem of proving that fact(n−1) = (n−1)!. That doesn’t seem useful, but as

in a recursive function, where we can call the function itself with a smaller
argument, in an inductive proof we can reuse the proof itself as an assumption,
for a smaller n. We call this assumption the inductive hypothesis and it looks
like this:

Assume fact(n′) = n′! for all n′ < n

If we let n′ = n − 1 then we have

Assume fact(n−1) = (n−1)!

which is exactly what we needed above! Substituting this in, we get

(n−1)! = (n−1)!

and we are done.

Like recursion, the heart of an inductive proof is the act of applying the proof itself
as an assumption about “smaller” values (n′ < n). Technically, there are two kinds
of inductive proofs:
 “Natural number induction” only lets us make the assumption
about n′ = n − 1. That is, we can only make the assumption about an “input”
that is one smaller than the original.
 “Strong induction” lets us use any n′ < n. You can use strong induction
anywhere you can use natural number induction, but it isn’t always
required.
The integer exponent calculation

Remember when we worked out the runtime complexity of our
“optimized” O(log n) function for finding b^n? We can write a recursive
version of that as well. Once again, we have to ask:
 What is the base case? In this case, it’s when n = 0. In that case, b^0 = 1, no
matter what b is (there is some debate about 0^0).
 What is the recursive case? How do we break down b^n into b^n′? Here,
we’re going to take our cue from our earlier implementation:

b^n = (b^(n/2))^2 if n is even

b^n = b·b^(n−1) if n is odd

This gives us the following definition:

float powi(float b, int n) {
    if(n == 0)
        return 1;
    else if(n % 2 == 0) { // Even
        float fp = powi(b, n / 2);
        return fp * fp;
    }
    else // Odd
        return b * powi(b, n - 1);
}
This has the same complexity as the loop-based version, and is arguably simpler.

In this case, if we want to prove that powi(b, n) = b^n, we’ll need strong
induction, because one of the recursive cases shrinks the input by something other
than just 1.
Proof that powi(b, n) = b^n by strong induction on n:
 Base case: n = 0.
Then by looking at the program we get powi(b, 0) = 1 = b^0 and we
are done.
 Inductive case: n > 0, prove powi(b, n) = b^n. Here there are
actually two inductive cases, one each for the two recursive cases in the
function. Our inductive hypothesis (assumption) is

powi(b, n′) = b^n′ for all n′ < n

o Case 1, n is even. Then replacing the call to powi by its return value we
have

powi(b, n/2)^2 = b^n

powi(b, n/2)^2 = (b^(n/2))^2

Taking the square root of both sides:

powi(b, n/2) = b^(n/2)

at which point we can apply the IH, with n′ = n/2, giving

b^(n/2) = b^(n/2)

o Case 2, n is odd. Then expanding powi we get

b·powi(b, n−1) = b·b^(n−1)

powi(b, n−1) = b^(n−1) (dividing by b)

b^(n−1) = b^(n−1) by IH (n′ = n−1)

And the proof is complete.

Mutual recursion

Mutual recursion is when we define several recursive functions in terms of each
other. For example, consider the following definition of even and odd:

 A natural number n is even iff n−1 is odd.
 A natural number n is odd iff n−1 is even.
 1 is odd, 0 is even. (These are our base cases.)

We can then define two functions (predicates) that recursively refer to each other:

bool is_even(int n) {
    if(n == 0)
        return true;
    else if(n == 1)
        return false;
    else
        return is_odd(n - 1);
}

bool is_odd(int n) {
    if(n == 0)
        return false;
    else if(n == 1)
        return true;
    else
        return is_even(n - 1);
}

If we trace out the process of determining is_even(4) , we’ll see that it bounces
back and forth between is_even and is_odd .

Binary search

We did a binary search iteratively, but we can do it recursively as well:

 There are two base cases: when we find the item, or when the search space is
reduced to 0 (indicating that the item is not found).

 The recursive case compares the value of the target to the value at the current
midpoint, and then reduces the size of the search space (by recursively
searching either the left or right sides).

This looks like

template<typename T>
int binary_search(const vector<T>& data, const T& target,
                  int low, int high) {
    if(low > high)
        return -1; // search space is empty: not found

    int mid = low + (high - low) / 2; // Why did I do this? (avoids overflow in (low + high) / 2)

    if(data.at(mid) == target)
        return mid;
    else if(data.at(mid) < target) // Search right
        return binary_search(data, target, mid + 1, high);
    else // Search left
        return binary_search(data, target, low, mid - 1);
}
Other examples: counting the number of copies of a value in a vector. For any vector-style
recursion, we need to keep track of our “starting place” within the vector. This is
because we can’t make the vector itself smaller, so we have to put a marker into it
showing where we are starting. We can do this in two ways: with an int
start parameter, or by using iterators.

template<typename T>
int count(const vector<T>& data, int start, T target) {
    if(start == (int)data.size())
        return 0;
    else
        return (data.at(start) == target) +
               count(data, start + 1, target);
}

With iterators:

template<typename T, typename It>
int count(It start, It finish, T target) {
    if(start == finish)
        return 0;
    else
        return (*start == target) +
               count(start + 1, finish, target);
}
Iterators are kind of like pointers.

Sorting algorithms

A sorting algorithm is a function that takes a sequence of items and somehow
constructs a permutation of them, such that they are ordered in some fashion.
Usually, we want things to be ordered according to the normal comparison
operators, so that if a < b then a comes before b in the final permutation. Still,
there are a few things we have to make sure we get right:

 Obviously we can’t lose any elements through the process.

 There may be duplicates in the original; if so, there should be an equal number
of duplicates in the output.

 For convenience, we allow sorting an empty sequence (which, when sorted,
results in yet another empty sequence).

There are some terms associated with sorting that it’s important to be aware of:

 Stability – when the input sequence has elements which compare as equal but
which are distinct (e.g., employees with identical names but otherwise different
people), the question arises as to whether, in the output sequence, they occur in
the same order as in the original. E.g., if employee “John Smith” #123 came
before “John Smith” #234 in the original sequence, then we say that a sort
is stable if #123 is before #234 in the result.

Stability is important when sorting a sequence on multiple criteria. E.g., if we
sort based on first name, then based on last name, an unstable sort won’t give
us the result we want: the first-name entries will be all mixed up.

 Adaptability – some input sequences are already partially (or completely) sorted;
an adaptable sorting algorithm will run faster (in big-O terms) on partially sorted
inputs. The optimal runtime for a completely sorted input is O(n), the time
that it takes to verify that the input is already sorted.
 In-Place – an in-place sorting algorithm is one that needs no extra space (i.e., its
space complexity is O(1)) to sort. Some algorithms cannot be run in place,
and have to construct a separate output sequence of the same size as the
input, to write the results into.
 Online – some datasets are too large to fit into memory at all. An offline sorting
algorithm is one that requires all its data to be accessible (in memory). Online
sorting algorithms can sort data which is partially in memory, partially on disk
or other “slow” storage, without affecting their time complexity.
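The O(n) “already sorted” check mentioned under adaptability can be sketched as follows (the function name is illustrative, not from the original):

```cpp
#include <vector>
using namespace std;

// O(n) check used by adaptive sorts: verify the input is already ordered.
bool is_sorted_seq(const vector<int>& data) {
    for (size_t i = 1; i < data.size(); ++i)
        if (data[i] < data[i - 1])  // an inversion means it is not sorted
            return false;
    return true;
}
```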

Selection sort

We already looked at selection sorting, so let’s look at it again:

 To selection sort a list of items, we first find the smallest item in the entire list,
and put it at the beginning.

 Then we find the smallest item in everything after the first item, and put it
second.

 Continue until there’s nothing left unsorted.

Effectively, selection sort splits the list into the sorted region at the beginning, and
the unsorted region at the end. The sorted region grows, while the unsorted region
shrinks.

Selection sort is not stable.

Iteratively, this looks like this:

template<typename T>
void selection_sort(vector<T>& data) {
    for(auto it = begin(data); it != end(data)-1; ++it) {
        // Find smallest
        auto smallest = it;
        for(auto jt = it+1; jt != end(data); ++jt)
            if(*jt < *smallest)
                smallest = jt;

        // Swap it into place
        swap(*it, *smallest);
    }
}
Let’s trace through this on a small example to get a feel for how it works.

How can we implement this recursively? Instead of passing around the
actual vector , we’re just going to pass around iterators to
the beginning and end of the vector. Why we do this will become obvious shortly:

template<typename Iterator>
void selection_sort(Iterator first, Iterator last) {
    ...
}

Let’s analyze the recursion:

 The base case is when last == first + 1 . That means there’s only 1 element, and
a 1-element list is always sorted.

 The recursive case is when last - first > 1 . In that case, we recursively break it
down by

o Finding the minimum of the region, and placing it at the beginning.


o Recursively selection-sorting first+1 through last (because the element
at first is now in the correct place).

template<typename Iterator>
void selection_sort(Iterator first, Iterator last) {
    if(last - first == 1)
        return; // 1 element, nothing to sort
    else {
        // Find minimum
        Iterator smallest = first;
        for(Iterator it = first; it != last; ++it)
            if(*it < *smallest)
                smallest = it;

        // Swap into place
        swap(*smallest, *first);

        // Recursively sort the remainder
        selection_sort(first + 1, last);
    }
}
Let’s draw the recursion tree for this. We won’t trace through the loop; we’ll just
assume (for now) that it works correctly.

DIAGRAM

Sorting algorithm analysis

Besides analyzing the general best/worst-case big-O runtime of a sorting algorithm,


it’s common to also analyze two other runtime features:

 The number of comparisons between elements. This only counts comparisons


between the objects being sorted, not the comparison of (e.g.) a loop counter
variable.

 The number of swaps between elements.

Analyzing the number of comparisons and swaps is useful because these


operations lie at the heart of any sorting algorithm: we cannot know whether
elements are out of order until we compare them (and, if the elements are
complex, comparing them may take a non-trivial amount of time), and we cannot
put them in the right order without moving them around – i.e., swapping them.

3. Searching Algorithm

Searching algorithms are designed to check for an element or retrieve an element
from any data structure where it is stored. Based on the type of search operation,
these algorithms are generally classified into two categories:
1. Sequential Search: In this, the list or array is traversed sequentially and every
element is checked. For example: Linear Search.
2. Interval Search: These algorithms are specifically designed for searching in
sorted data structures. These types of searching algorithms are much more
efficient than Linear Search, as they repeatedly target the center of the search
structure and divide the search space in half. For example: Binary Search.
Linear Search to find the element “20” in a given list of numbers
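Since the original illustration is not reproduced here, a minimal linear search can be sketched as follows (the names are illustrative):

```cpp
#include <vector>
using namespace std;

// Scan each element in turn; return the index of target, or -1 if absent.
int linear_search(const vector<int>& data, int target) {
    for (int i = 0; i < (int)data.size(); ++i)
        if (data[i] == target)  // check every element sequentially
            return i;
    return -1;
}
```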

Binary Search to find the element “23” in a given list of numbers

4. Implementation of Stack using Array.

Stack is a linear data structure that follows a particular order in which the operations
are performed. The order may be LIFO (Last In First Out) or FILO (First In Last Out).
Mainly, the following basic operations are performed on a stack:
 Push: Adds an item to the stack. If the stack is full, then it is said to be an
Overflow condition.
 Pop: Removes an item from the stack. The items are popped in the reverse of the
order in which they are pushed. If the stack is empty, then it is said to be an
Underflow condition.
 Peek or Top: Returns the top element of the stack.
 isEmpty: Returns true if the stack is empty, else false.

How to understand a stack practically?

There are many real-life examples of a stack. Consider the simple example of plates
stacked over one another in a canteen. The plate which is at the top is the first one
to be removed, i.e. the plate which has been placed at the bottommost position
remains in the stack for the longest period of time. So, it can be seen to follow
the LIFO/FILO order.
Time Complexities of operations on stack:
push(), pop(), isEmpty() and peek() all take O(1) time. We do not run any loop in any
of these operations.

Applications of stack:
 Balancing of symbols
 Infix to Postfix /Prefix conversion
 Redo-undo features at many places like editors, photoshop.
 Forward and backward feature in web browsers
 Used in many algorithms like Tower of Hanoi, tree traversals, stock span
problem, histogram problem.

 Backtracking is one of the algorithm design techniques. Some examples of
backtracking are the Knight’s Tour problem, the N-Queens problem, finding your way
through a maze, and games like chess or checkers. In all these problems, we explore
one path; if that path turns out not to work, we come back to the previous state and
go down another path. To get back from the current state, we need to store
the previous state, and for that purpose we need a stack.
 In graph algorithms like Topological Sorting and finding Strongly Connected
Components.
 In memory management: modern computers use a stack to manage memory for
running programs. Each program that is running in a computer system has its
own memory allocations.
 String reversal is another application of stack. Here, each
character gets inserted into the stack one by one. So the first character of the string is
on the bottom of the stack and the last character of the string is on the top of the
stack. After performing pop operations on the stack, we get the string in
reverse order.
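The string-reversal application above can be sketched with the standard stack container (the function name is illustrative):

```cpp
#include <stack>
#include <string>
using namespace std;

// Reverse a string by pushing each character, then popping them back.
string reverse_with_stack(const string& s) {
    stack<char> st;
    for (char c : s)
        st.push(c);          // the first character ends up at the bottom
    string result;
    while (!st.empty()) {
        result += st.top();  // pop order is last-in, first-out
        st.pop();
    }
    return result;
}
```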
Implementation:
There are two ways to implement a stack:
 Using array
 Using linked list

Implementing Stack using Arrays

/* C++ program to implement basic stack operations */

#include <bits/stdc++.h>
using namespace std;

#define MAX 1000

class Stack {
    int top;

public:
    int a[MAX]; // Maximum size of Stack

    Stack() { top = -1; }
    bool push(int x);
    int pop();
    int peek();
    bool isEmpty();
};

bool Stack::push(int x)
{
    if (top >= (MAX - 1)) {
        cout << "Stack Overflow";
        return false;
    }
    else {
        a[++top] = x;
        cout << x << " pushed into stack\n";
        return true;
    }
}

int Stack::pop()
{
    if (top < 0) {
        cout << "Stack Underflow";
        return 0;
    }
    else {
        int x = a[top--];
        return x;
    }
}

int Stack::peek()
{
    if (top < 0) {
        cout << "Stack is Empty";
        return 0;
    }
    else {
        int x = a[top];
        return x;
    }
}

bool Stack::isEmpty()
{
    return (top < 0);
}

// Driver program to test above functions
int main()
{
    Stack s;
    s.push(10);
    s.push(20);
    s.push(30);
    cout << s.pop() << " Popped from stack\n";

    // print all elements in stack
    cout << "Elements present in stack : ";
    while (!s.isEmpty()) {
        // print top element in stack
        cout << s.peek() << " ";
        // remove top element in stack
        s.pop();
    }

    return 0;
}

5. Implementation of Queue using Array.

In a queue, insertion and deletion happen at opposite ends, so the implementation is
not as simple as for a stack.
To implement a queue using an array, create an array arr of size n and take two
variables front and rear, both of which will be initialized to 0, which means the queue
is currently empty. rear is the index up to which the elements are stored in
the array and front is the index of the first element of the array. Now, some of the
implementations of queue operations are as follows:
1. Enqueue: Addition of an element to the queue. Adding an element is
performed after checking whether the queue is full or not. If rear < n, which
indicates that the array is not full, then store the element at arr[rear] and
increment rear by 1; but if rear == n, then it is said to be an Overflow condition, as
the array is full.
2. Dequeue: Removal of an element from the queue. An element can only be
deleted when there is at least one element to delete, i.e. rear > 0. Now, the element
at arr[front] can be deleted, but all the remaining elements have to be shifted to the
left by one position, so that a later dequeue operation deletes the next
element from the left.
3. Front: Get the front element from the queue, i.e. arr[front], if the queue is not empty.
4. Display: Print all elements of the queue. If the queue is non-empty, traverse and
print all the elements from index front to rear.

Below is the implementation of a queue using an array:


// C++ program to implement a queue using an array

#include <bits/stdc++.h>
using namespace std;

struct Queue {
    int front, rear, capacity;
    int* queue;

    Queue(int c)
    {
        front = rear = 0;
        capacity = c;
        queue = new int[c];
    }

    ~Queue() { delete[] queue; }

    // function to insert an element
    // at the rear of the queue
    void queueEnqueue(int data)
    {
        // check queue is full or not
        if (capacity == rear) {
            printf("\nQueue is full\n");
            return;
        }
        // insert element at the rear
        else {
            queue[rear] = data;
            rear++;
        }
        return;
    }

    // function to delete an element
    // from the front of the queue
    void queueDequeue()
    {
        // if queue is empty
        if (front == rear) {
            printf("\nQueue is empty\n");
            return;
        }
        // shift all the elements after the front
        // to the left by one
        else {
            for (int i = 0; i < rear - 1; i++) {
                queue[i] = queue[i + 1];
            }
            // decrement rear
            rear--;
        }
        return;
    }

    // print queue elements
    void queueDisplay()
    {
        if (front == rear) {
            printf("\nQueue is Empty\n");
            return;
        }
        // traverse front to rear and print elements
        for (int i = front; i < rear; i++) {
            printf(" %d <-- ", queue[i]);
        }
        return;
    }

    // print front of queue
    void queueFront()
    {
        if (front == rear) {
            printf("\nQueue is Empty\n");
            return;
        }
        printf("\nFront Element is: %d", queue[front]);
        return;
    }
};

// Driver code
int main(void)
{
    // Create a queue of capacity 4
    Queue q(4);

    // print Queue elements
    q.queueDisplay();

    // inserting elements in the queue
    q.queueEnqueue(20);
    q.queueEnqueue(30);
    q.queueEnqueue(40);
    q.queueEnqueue(50);

    // print Queue elements
    q.queueDisplay();

    // insert element in the queue
    q.queueEnqueue(60);

    // print Queue elements
    q.queueDisplay();

    q.queueDequeue();
    q.queueDequeue();

    printf("\n\nafter two node deletion\n\n");

    // print Queue elements
    q.queueDisplay();

    // print front of the queue
    q.queueFront();

    return 0;
}

6. Implementation of Circular Queue using Array.


A circular queue is the extended version of a regular queue where the last
element is connected to the first element, thus forming a circle-like structure.

Circular queue representation

The circular queue solves the major limitation of the normal queue. In a normal
queue, after a bit of insertion and deletion, there will be non-usable empty space.

Limitation of the regular Queue

Here, indexes 0 and 1 can only be used after resetting the queue (deletion of all
elements). This reduces the actual size of the queue.

How Circular Queue Works


Circular Queue works by the process of circular increment i.e. when we try to
increment the pointer and we reach the end of the queue, we start from the
beginning of the queue.

Here, the circular increment is performed by modulo division with the queue size.
That is,

if REAR + 1 == 5 (overflow!), REAR = (REAR + 1)%5 = 0 (start of queue)
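As a tiny illustrative helper (not part of any particular implementation), the circular increment can be written as:

```cpp
// Circular increment: move an index forward, wrapping to 0 at the end
// of a fixed-size buffer via modulo division.
int circular_increment(int index, int size) {
    return (index + 1) % size;
}
```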

Circular Queue Operations

The circular queue works as follows:

 two pointers FRONT and REAR

 FRONT tracks the first element of the queue

 REAR tracks the last element of the queue
 initially, set the value of FRONT and REAR to -1
1. Enqueue Operation

 check if the queue is full
 for the first element, set the value of FRONT to 0
 circularly increase the REAR index by 1 (i.e. if the rear reaches the end, next it would be at the start of the queue)
 add the new element at the position pointed to by REAR

2. Dequeue Operation

 check if the queue is empty
 return the value pointed to by FRONT
 circularly increase the FRONT index by 1
 for the last element, reset the values of FRONT and REAR to -1


However, the check for a full queue has a new additional case:

 Case 1: FRONT == 0 && REAR == SIZE - 1
 Case 2: FRONT == REAR + 1

The second case happens when REAR starts from 0 due to circular increment; when
its value is just 1 less than FRONT, the queue is full.
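Putting the enqueue and dequeue steps above together, a minimal array-based circular queue can be sketched in C as follows (a sketch under the stated conventions; the names CQueue, cq_enqueue and cq_dequeue are our own, not from the text):

```c
#include <assert.h>

#define SIZE 5

typedef struct {
    int items[SIZE];
    int front, rear;   /* both start at -1 (empty queue) */
} CQueue;

/* returns 1 on success, 0 if the queue is full */
int cq_enqueue(CQueue *q, int value) {
    /* full: FRONT at 0 with REAR at SIZE-1, or REAR just behind FRONT */
    if ((q->front == 0 && q->rear == SIZE - 1) || (q->front == q->rear + 1))
        return 0;
    if (q->front == -1)             /* first element */
        q->front = 0;
    q->rear = (q->rear + 1) % SIZE; /* circular increment */
    q->items[q->rear] = value;
    return 1;
}

/* returns the removed value through *out; 0 if the queue is empty */
int cq_dequeue(CQueue *q, int *out) {
    if (q->front == -1)
        return 0;
    *out = q->items[q->front];
    if (q->front == q->rear) {      /* last element: reset to empty */
        q->front = q->rear = -1;
    } else {
        q->front = (q->front + 1) % SIZE; /* circular increment */
    }
    return 1;
}
```

The -1 sentinel in FRONT distinguishes an empty queue from a full one; an alternative design keeps an explicit element count instead.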

Circular Queue Implementations in Python, Java, C, and
C++
The most common queue implementation is using arrays, but it can also be
implemented using lists.

Python

# Circular Queue implementation in Python

class MyCircularQueue():

    def __init__(self, k):
        self.k = k
        self.queue = [None] * k
        self.head = self.tail = -1

    # Insert an element into the circular queue
    def enqueue(self, data):
        if ((self.tail + 1) % self.k == self.head):
            print("The circular queue is full\n")
        elif (self.head == -1):
            self.head = 0
            self.tail = 0
            self.queue[self.tail] = data
        else:
            self.tail = (self.tail + 1) % self.k
            self.queue[self.tail] = data

    # Delete an element from the circular queue
    def dequeue(self):
        if (self.head == -1):
            print("The circular queue is empty\n")
        elif (self.head == self.tail):
            temp = self.queue[self.head]
            self.head = -1
            self.tail = -1
            return temp
        else:
            temp = self.queue[self.head]
            self.head = (self.head + 1) % self.k
            return temp

    def printCQueue(self):
        if (self.head == -1):
            print("No element in the circular queue")
        elif (self.tail >= self.head):
            for i in range(self.head, self.tail + 1):
                print(self.queue[i], end=" ")
            print()
        else:
            for i in range(self.head, self.k):
                print(self.queue[i], end=" ")
            for i in range(0, self.tail + 1):
                print(self.queue[i], end=" ")
            print()

# Your MyCircularQueue object will be instantiated and called as such:
obj = MyCircularQueue(5)
obj.enqueue(1)
obj.enqueue(2)
obj.enqueue(3)
obj.enqueue(4)
obj.enqueue(5)
print("Initial queue")
obj.printCQueue()

obj.dequeue()
print("After removing an element from the queue")
obj.printCQueue()

7. Implementation of Stack using Linked List.

The major problem with a stack implemented using an array is that it works only for a
fixed number of data values: the amount of data must be specified at the beginning of
the implementation. An array-based stack is therefore not suitable when we don't know
the size of the data in advance. A stack data structure can instead be implemented
using a linked list. A stack implemented using a linked list works for an unlimited
number of values, i.e. for a variable amount of data, so there is no need to fix the
size at the beginning of the implementation, and the stack can organize as many data
values as we want.

In the linked list implementation of a stack, every new element is inserted as the
'top' element: every newly inserted element is pointed to by 'top'. Whenever we
want to remove an element from the stack, we simply remove the node pointed to
by 'top' and move 'top' to its previous node in the list. The next field of the
first inserted (bottom-most) element must always be NULL.

Example

In the above example, the last inserted node is 99 and the first inserted node is 25.
The order of elements inserted is 25, 32, 50 and 99.

Stack Operations using Linked List

To implement a stack using a linked list, we need to set the following things before

implementing actual operations.

 Step 1 - Include all the header files which are used in the program. And

declare all the user defined functions.

 Step 2 - Define a 'Node' structure with two members data and next.

 Step 3 - Define a Node pointer 'top' and set it to NULL.

 Step 4 - Implement the main method by displaying Menu with list of

operations and make suitable function calls in the main method.

push(value) - Inserting an element into the Stack

We can use the following steps to insert a new node into the stack...

 Step 1 - Create a newNode with given value.

 Step 2 - Check whether stack is Empty (top == NULL)

 Step 3 - If it is Empty, then set newNode → next = NULL.

 Step 4 - If it is Not Empty, then set newNode → next = top.

 Step 5 - Finally, set top = newNode.

pop() - Deleting an Element from a Stack

We can use the following steps to delete a node from the stack...

 Step 1 - Check whether stack is Empty (top == NULL).

 Step 2 - If it is Empty, then display "Stack is Empty!!! Deletion is not

possible!!!" and terminate the function

 Step 3 - If it is Not Empty, then define a Node pointer 'temp' and set it to

'top'.

 Step 4 - Then set 'top = top → next'.

 Step 5 - Finally, delete 'temp'. (free(temp)).

display() - Displaying stack of elements

We can use the following steps to display the elements (nodes) of a stack...

 Step 1 - Check whether stack is Empty (top == NULL).

 Step 2 - If it is Empty, then display 'Stack is Empty!!!' and terminate the

function.

 Step 3 - If it is Not Empty, then define a Node pointer 'temp' and initialize

with top.

 Step 4 - Display 'temp → data --->' and move it to the next node. Repeat the
same until temp reaches the first (bottom-most) node in the stack (temp →
next != NULL).

 Step 5 - Finally, display 'temp → data ---> NULL'.

Implementation of Stack using Linked List | C Programming

#include<stdio.h>
#include<stdlib.h>

struct Node
{
int data;
struct Node *next;
}*top = NULL;

void push(int);
void pop();
void display();

int main()
{
int choice, value;
printf("\n:: Stack using Linked List ::\n");
while(1){
printf("\n****** MENU ******\n");
printf("1. Push\n2. Pop\n3. Display\n4. Exit\n");
printf("Enter your choice: ");
scanf("%d",&choice);
switch(choice){

case 1: printf("Enter the value to be insert: ");
scanf("%d", &value);
push(value);
break;
case 2: pop(); break;
case 3: display(); break;
case 4: exit(0);
default: printf("\nWrong selection!!! Please try again!!!\n");
}
}
}
void push(int value)
{
struct Node *newNode;
newNode = (struct Node*)malloc(sizeof(struct Node));
newNode->data = value;
if(top == NULL)
newNode->next = NULL;
else
newNode->next = top;
top = newNode;
printf("\nInsertion is Success!!!\n");
}
void pop()
{
if(top == NULL)
printf("\nStack is Empty!!!\n");

else{
struct Node *temp = top;
printf("\nDeleted element: %d", temp->data);
top = temp->next;
free(temp);
}
}
void display()
{
if(top == NULL)
printf("\nStack is Empty!!!\n");
else{
struct Node *temp = top;
while(temp->next != NULL){
printf("%d--->",temp->data);
temp = temp -> next;
}
printf("%d--->NULL",temp->data);
}
}

8. Implementation of Queue using Linked List.

Earlier, we introduced the queue and discussed its array implementation. Here,
the linked list implementation is discussed. The following two main
operations must be implemented efficiently.

In a queue data structure, we maintain two pointers, front and rear.
The front points to the first item of the queue and rear points to the last item.
enQueue(): this operation adds a new node after rear and moves rear to the next
node.
deQueue(): this operation removes the front node and moves front to the next
node.

#include <bits/stdc++.h>
using namespace std;

struct QNode {
    int data;
    QNode* next;
    QNode(int d)
    {
        data = d;
        next = NULL;
    }
};

struct Queue {
    QNode *front, *rear;
    Queue()
    {
        front = rear = NULL;
    }

    void enQueue(int x)
    {
        // Create a new LL node
        QNode* temp = new QNode(x);

        // If queue is empty, then
        // new node is front and rear both
        if (rear == NULL) {
            front = rear = temp;
            return;
        }

        // Add the new node at
        // the end of queue and change rear
        rear->next = temp;
        rear = temp;
    }

    // Function to remove
    // a key from given queue q
    void deQueue()
    {
        // If queue is empty, return NULL.
        if (front == NULL)
            return;

        // Store previous front and
        // move front one node ahead
        QNode* temp = front;
        front = front->next;

        // If front becomes NULL, then
        // change rear also as NULL
        if (front == NULL)
            rear = NULL;

        delete (temp);
    }
};

// Driver Program
int main()
{
    Queue q;
    q.enQueue(10);
    q.enQueue(20);
    q.deQueue();
    q.deQueue();
    q.enQueue(30);
    q.enQueue(40);
    q.enQueue(50);
    q.deQueue();
    cout << "Queue Front : " << (q.front)->data << endl;
    cout << "Queue Rear : " << (q.rear)->data;
    return 0;
}
// This code is contributed by rathbhupendra

Output:
Queue Front : 40
Queue Rear : 50

9. Implementation of Circular Queue using Linked List.

Circular Queue is a linear data structure in which the operations are performed
based on FIFO (First In First Out) principle and the last position is connected back to
the first position to make a circle. It is also called ‘Ring Buffer’.

In a normal queue, we can insert elements until the queue becomes full. But once
the queue becomes full, we cannot insert the next element even if there is space
at the front of the queue.

Operations on Circular Queue:

 Front: Get the front item from queue.


 Rear: Get the last item from queue.
 enQueue(value): This function is used to insert an element into the circular
queue. In a circular queue, the new element is always inserted at the Rear position.
1. Check whether the queue is full, i.e. check ((rear == SIZE-1 && front == 0) || (rear == front-1)).
2. If it is full, then display "Queue is full". If the queue is not full, then check
if (rear == SIZE-1 && front != 0); if true, set rear = 0 and insert the element.
 deQueue(): This function is used to delete an element from the circular queue.
In a circular queue, the element is always deleted from the front position.
1. Check whether the queue is empty, i.e. check (front == -1).
2. If it is empty, then display "Queue is empty". If the queue is not empty, go to step 3.
3. Check if (front == rear); if true, set front = rear = -1. Else check if
(front == size-1); if true, set front = 0 and return the element.

// C or C++ program for insertion and
// deletion in Circular Queue
#include<bits/stdc++.h>
using namespace std;

class Queue
{
    // Initialize front and rear
    int rear, front;

    // Circular Queue
    int size;
    int *arr;

public:
    Queue(int s)
    {
        front = rear = -1;
        size = s;
        arr = new int[s];
    }

    void enQueue(int value);
    int deQueue();
    void displayQueue();
};

/* Function to insert into the Circular queue */
void Queue::enQueue(int value)
{
    if ((front == 0 && rear == size - 1) ||
        (rear == front - 1))
    {
        printf("\nQueue is Full");
        return;
    }
    else if (front == -1) /* Insert First Element */
    {
        front = rear = 0;
        arr[rear] = value;
    }
    else if (rear == size - 1 && front != 0)
    {
        rear = 0;
        arr[rear] = value;
    }
    else
    {
        rear++;
        arr[rear] = value;
    }
}

// Function to delete element from Circular Queue
int Queue::deQueue()
{
    if (front == -1)
    {
        printf("\nQueue is Empty");
        return INT_MIN;
    }

    int data = arr[front];
    arr[front] = -1;
    if (front == rear)
    {
        front = -1;
        rear = -1;
    }
    else if (front == size - 1)
        front = 0;
    else
        front++;

    return data;
}

// Function displaying the elements
// of Circular Queue
void Queue::displayQueue()
{
    if (front == -1)
    {
        printf("\nQueue is Empty");
        return;
    }
    printf("\nElements in Circular Queue are: ");
    if (rear >= front)
    {
        for (int i = front; i <= rear; i++)
            printf("%d ", arr[i]);
    }
    else
    {
        for (int i = front; i < size; i++)
            printf("%d ", arr[i]);
        for (int i = 0; i <= rear; i++)
            printf("%d ", arr[i]);
    }
}

/* Driver of the program */
int main()
{
    Queue q(5);

    // Inserting elements in Circular Queue
    q.enQueue(14);
    q.enQueue(22);
    q.enQueue(13);
    q.enQueue(-6);

    // Display elements present in Circular Queue
    q.displayQueue();

    // Deleting elements from Circular Queue
    printf("\nDeleted value = %d", q.deQueue());
    printf("\nDeleted value = %d", q.deQueue());
    q.displayQueue();

    q.enQueue(9);
    q.enQueue(20);
    q.enQueue(5);
    q.displayQueue();
    q.enQueue(20);
    return 0;
}

10. Implementation of Tree Structures, Binary Tree, Tree Traversal, Binary
Search Tree, Insertion and Deletion

We have seen linear data structures like the array, linked list, stack and queue, in
which all the elements are arranged in a sequential manner. Different data structures
are used for different kinds of data.

Some factors are considered for choosing the data structure:

o What type of data needs to be stored?: A certain data structure may be the
best fit for a particular kind of data.
o Cost of operations: We may want to minimize the cost of the most frequently
performed operations. For example, if we have a simple list on which we mostly
perform the search operation, we can store the elements in a sorted array and
use binary search, which works very fast because it divides the search space
in half at each step.
o Memory usage: Sometimes, we want a data structure that utilizes less
memory.

A tree is also one of the data structures that represent hierarchical data. Suppose we
want to show the employees and their positions in the hierarchical form then it can
be represented as shown below:

The above tree shows the organization hierarchy of some company. In the above
structure, John is the CEO of the company, and John has two direct reports named
Steve and Rohan. Steve, a manager, has three direct reports named Lee, Bob and
Ella. Bob has two direct reports named Sal and Emma. Emma has two direct reports
named Tom and Raj. Tom has one direct report named Bill. This particular logical
structure is known as a tree. Its structure is similar to a real tree, so it is
named a tree. In this structure, the root is at the top, and its branches grow in
a downward direction. Therefore, we can say that the tree data structure is an
efficient way of storing data in a hierarchical way.


Let's understand some key points of the Tree data structure.

o A tree data structure is defined as a collection of objects or entities known as
nodes that are linked together to represent or simulate hierarchy.
o A tree data structure is a non-linear data structure because it does not store
in a sequential manner. It is a hierarchical structure as elements in a Tree are
arranged in multiple levels.
o In the Tree data structure, the topmost node is known as a root node. Each
node contains some data, and data can be of any type. In the above tree
structure, the node contains the name of the employee, so the type of data
would be a string.
o Each node contains some data and the link or reference of other nodes that
can be called children.

Some basic terms used in Tree data structure.

Let's consider the tree structure, which is shown below:

In the above structure, each node is labeled with some number. Each arrow shown
in the above figure is known as a link between the two nodes.

o Root: The root node is the topmost node in the tree hierarchy. In other words,
the root node is the one that doesn't have any parent. In the above structure,
node numbered 1 is the root node of the tree. If a node is directly linked to
some other node, it would be called a parent-child relationship.
o Child node: If the node is a descendant of any node, then the node is known
as a child node.
o Parent: If the node contains any sub-node, then that node is said to be the
parent of that sub-node.
o Sibling: The nodes that have the same parent are known as siblings.
o Leaf Node:- The node of the tree, which doesn't have any child node, is called
a leaf node. A leaf node is the bottom-most node of the tree. There can be any
number of leaf nodes present in a general tree. Leaf nodes can also be called
external nodes.
o Internal nodes: A node that has at least one child node is known as an internal node.
o Ancestor node:- An ancestor of a node is any predecessor node on a path
from the root to that node. The root node doesn't have any ancestors. In the
tree shown in the above image, nodes 1, 2, and 5 are the ancestors of node
10.
o Descendant: Any successor node on the path from a given node down to a leaf is
known as a descendant of that node. In the above figure, 10 is a descendant of node 5.

Properties of Tree data structure

o Recursive data structure: The tree is also known as a recursive data
structure. A tree can be defined recursively because the distinguished node in
a tree data structure is known as a root node, and the root node of the tree
contains links to the roots of all of its subtrees. The left subtree is shown in the
yellow color in the below figure, and the right subtree is shown in the red color.
The left subtree can be further split into subtrees shown in three different
colors. Recursion means reducing something in a self-similar manner. This
recursive property of the tree data structure is used in various
applications.

o Number of edges: If there are n nodes, then there will be n-1 edges. Each
arrow in the structure represents a link or path. Each node, except the root
node, has exactly one incoming link known as an edge, corresponding to the
parent-child relationship.
o Depth of node x: The depth of node x is the length of the path from the root
to node x. One edge contributes one unit of length to the path, so the depth of
node x can also be defined as the number of edges between the root node and
node x. The root node has depth 0.
o Height of node x: The height of node x is the length of the longest path from
node x to a leaf node.

Based on the properties of the Tree data structure, trees are classified into various
categories.

Implementation of Tree

The tree data structure can be created by creating the nodes dynamically with the
help of the pointers. The tree in the memory can be represented as shown below:

The above figure shows the representation of the tree data structure in the memory.
In the above structure, the node contains three fields. The second field stores the
data; the first field stores the address of the left child, and the third field stores the
address of the right child.

In programming, the structure of a node can be defined as:

struct node
{
    int data;
    struct node *left;
    struct node *right;
};

The above structure can only be defined for binary trees, because a binary tree
can have at most two children, while generic trees can have more than two children.
The structure of the node for generic trees would be different from that of the
binary tree.

Applications of trees

The following are the applications of trees:

o Storing naturally hierarchical data: Trees are used to store the data in the
hierarchical structure. For example, the file system. The file system stored on
the disc drive, the file and folder are in the form of the naturally hierarchical
data and stored in the form of trees.
o Organize data: It is used to organize data for efficient insertion, deletion and
searching. For example, a binary tree has a logN time for searching an
element.
o Trie: It is a special kind of tree that is used to store the dictionary. It is a fast
and efficient way for dynamic spell checking.
o Heap: It is also a tree data structure implemented using arrays. It is used to
implement priority queues.
o B-Tree and B+Tree: B-Tree and B+Tree are the tree data structures used to
implement indexing in databases.
o Routing table: The tree data structure is also used to store the data in routing
tables in the routers.

Types of Tree data structure

The following are the types of a tree data structure:

o General tree: The general tree is one of the types of tree data structure. In
the general tree, a node can have either 0 or maximum n number of nodes.
There is no restriction imposed on the degree of the node (the number of
nodes that a node can contain). The topmost node in a general tree is known
as a root node. The children of the parent node are known as subtrees.

There can be n number of subtrees in a general tree. In the general tree, the
subtrees are unordered as the nodes in the subtree cannot be ordered.
Every non-empty tree has a downward edge, and these edges are connected
to the nodes known as child nodes. The root node is labeled with level 0. The
nodes that have the same parent are known as siblings.
o Binary tree: Here, the name binary itself suggests two, i.e., each node in a
binary tree can have at most two child nodes ("at most" meaning the node may
have 0, 1 or 2 child nodes).


o Binary Search tree: A binary search tree is a non-linear, node-based data
structure. A node in a binary search tree has three fields, i.e., a data part, a
left child, and a right child. A node can be connected to at most two child
nodes in a binary search tree, so the node contains two pointers (a left child
and a right child pointer).
Every node in the left subtree must contain a value less than the value of the
root node, and the value of each node in the right subtree must be greater than
the value of the root node.

A node can be created with the help of a user-defined data type known as struct, as
shown below:

struct node
{
    int data;
    struct node *left;
    struct node *right;
};

The above is the node structure with three fields: data field, the second field is the
left pointer of the node type, and the third field is the right pointer of the node type.

Tree Traversals (Inorder, Preorder and Postorder)

Unlike linear data structures (Array, Linked List, Queues, Stacks, etc) which have
only one logical way to traverse them, trees can be traversed in different ways.
Following are the generally used ways for traversing trees.

Depth First Traversals:
(a) Inorder (Left, Root, Right) : 4 2 5 1 3
(b) Preorder (Root, Left, Right) : 1 2 4 5 3
(c) Postorder (Left, Right, Root) : 4 5 2 3 1
Breadth-First or Level Order Traversal: 1 2 3 4 5

Inorder Traversal:

Algorithm Inorder(tree)
1. Traverse the left subtree, i.e., call Inorder(left-subtree)
2. Visit the root.
3. Traverse the right subtree, i.e., call Inorder(right-subtree)
Uses of Inorder
In the case of binary search trees (BST), Inorder traversal gives the nodes in non-
decreasing order. To get the nodes of a BST in non-increasing order, a variation of
Inorder traversal where Inorder traversal is reversed can be used.
Example: Inorder traversal for the above-given figure is 4 2 5 1 3.
Preorder Traversal:
Algorithm Preorder(tree)
1. Visit the root.
2. Traverse the left subtree, i.e., call Preorder(left-subtree)
3. Traverse the right subtree, i.e., call Preorder(right-subtree)
Uses of Preorder
Preorder traversal is used to create a copy of the tree. Preorder traversal is also
used to get the prefix expression of an expression tree. Please
see https://en.wikipedia.org/wiki/Polish_notation to know why prefix expressions are
useful.
Example: Preorder traversal for the above-given figure is 1 2 4 5 3.
Postorder Traversal:
Algorithm Postorder(tree)
1. Traverse the left subtree, i.e., call Postorder(left-subtree)
2. Traverse the right subtree, i.e., call Postorder(right-subtree)

3. Visit the root.
Uses of Postorder
Postorder traversal is used to delete the tree. Please see the question for the
deletion of a tree for details. Postorder traversal is also useful to get the postfix
expression of an expression tree. Please
see http://en.wikipedia.org/wiki/Reverse_Polish_notation for the usage of postfix
expression.
Example: Postorder traversal for the above-given figure is 4 5 2 3 1.

// C++ program for different tree traversals
#include <iostream>
using namespace std;

/* A binary tree node has data, pointer to left child
   and a pointer to right child */
struct Node {
    int data;
    struct Node *left, *right;
};

// Utility function to create a new tree node
Node* newNode(int data)
{
    Node* temp = new Node;
    temp->data = data;
    temp->left = temp->right = NULL;
    return temp;
}

/* Given a binary tree, print its nodes according to the
   "bottom-up" postorder traversal. */
void printPostorder(struct Node* node)
{
    if (node == NULL)
        return;

    // first recur on left subtree
    printPostorder(node->left);

    // then recur on right subtree
    printPostorder(node->right);

    // now deal with the node
    cout << node->data << " ";
}

/* Given a binary tree, print its nodes in inorder */
void printInorder(struct Node* node)
{
    if (node == NULL)
        return;

    /* first recur on left child */
    printInorder(node->left);

    /* then print the data of node */
    cout << node->data << " ";

    /* now recur on right child */
    printInorder(node->right);
}

/* Given a binary tree, print its nodes in preorder */
void printPreorder(struct Node* node)
{
    if (node == NULL)
        return;

    /* first print data of node */
    cout << node->data << " ";

    /* then recur on left subtree */
    printPreorder(node->left);

    /* now recur on right subtree */
    printPreorder(node->right);
}

/* Driver program to test above functions */
int main()
{
    struct Node* root = newNode(1);
    root->left = newNode(2);
    root->right = newNode(3);
    root->left->left = newNode(4);
    root->left->right = newNode(5);

    cout << "\nPreorder traversal of binary tree is \n";
    printPreorder(root);

    cout << "\nInorder traversal of binary tree is \n";
    printInorder(root);

    cout << "\nPostorder traversal of binary tree is \n";
    printPostorder(root);

    return 0;
}

Insertion:

An element can be inserted in an array at a specific position. For this operation to


succeed, the array must have enough capacity. Suppose we want to add an
element 10 at index 2 in the below-illustrated array, then the elements after index 1
must get shifted to their adjacent right to make way for a new element.

When no position is specified, it’s best to insert the element at the end to avoid
shifting, and this is when we achieve the best runtime O(1).

Deletion:

An element at a specified position can be deleted, creating a void that needs to be


fixed by shifting all the elements to their adjacent left, as illustrated in the figure
below.
We can also move the last element of the array into the void if the relative ordering
of elements is not important.

11. Graph Implementation, BFS, DFS, Minimum cost spanning tree, shortest path
algorithm.

A Graph is a non-linear data structure consisting of nodes and edges. The nodes are sometimes also
referred to as vertices and the edges are lines or arcs that connect any two nodes in the graph. More
formally a Graph can be defined as,
A Graph consists of a finite set of vertices(or nodes) and set of Edges which connect a pair of
nodes.

In the above Graph, the set of vertices V = {0,1,2,3,4} and the set of edges E = {01, 12, 23, 34, 04,
14, 13}.
Graphs are used to solve many real-life problems. Graphs are used to represent networks;
the networks may include paths in a city, a telephone network or a circuit network.
Graphs are also used in social networks like LinkedIn and Facebook. For example, in
Facebook, each person is represented with a vertex (or node). Each node is a structure
and contains information like person id, name, gender, locale, etc.
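In C, such a graph is commonly represented as an array of adjacency lists, one linked list per vertex. The following is a minimal sketch for the 5-vertex example above (the names AdjNode, add_edge and has_edge are our own; add_edge stores each undirected edge in both directions):

```c
#include <stdlib.h>
#include <assert.h>

#define V 5

struct AdjNode {
    int dest;
    struct AdjNode *next;
};

/* One list head per vertex; heads[] starts out all NULL. */
struct AdjNode *heads[V];

/* Add an undirected edge u-v by prepending to both lists. */
void add_edge(int u, int v) {
    struct AdjNode *a = malloc(sizeof *a);
    a->dest = v; a->next = heads[u]; heads[u] = a;
    struct AdjNode *b = malloc(sizeof *b);
    b->dest = u; b->next = heads[v]; heads[v] = b;
}

/* Returns 1 if v appears in u's adjacency list. */
int has_edge(int u, int v) {
    for (struct AdjNode *p = heads[u]; p != NULL; p = p->next)
        if (p->dest == v) return 1;
    return 0;
}
```

An adjacency matrix (a V x V array of 0/1 flags) is the main alternative; lists use less memory for sparse graphs, while the matrix answers has_edge in O(1).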

Implementation of BFS algorithm


Now, let's see the implementation of the BFS algorithm in Java.

In this code, we are using the adjacency list to represent our graph. Implementing the
Breadth-First Search algorithm in Java makes it much easier to deal with the adjacency list
since we only have to travel through the list of nodes attached to each node once the
node is dequeued from the head (or start) of the queue.

In this example, the graph that we are using to demonstrate the code is given as follows
-

import java.io.*;
import java.util.*;
public class BFSTraversal
{
    private int vertex; /* total number of vertices in the graph */
    private LinkedList<Integer> adj[]; /* adjacency list */
    private Queue<Integer> que; /* maintaining a queue */
    BFSTraversal(int v)
    {
        vertex = v;
        adj = new LinkedList[vertex];
        for (int i = 0; i < v; i++)
        {
            adj[i] = new LinkedList<>();
        }
        que = new LinkedList<Integer>();
    }
    void insertEdge(int v, int w)
    {
        adj[v].add(w); /* adding a directed edge to the adjacency list */
    }
    void BFS(int n)
    {
        boolean nodes[] = new boolean[vertex]; /* boolean array marking visited nodes */
        int a = 0;
        nodes[n] = true;
        que.add(n); /* root node is added to the top of the queue */
        while (que.size() != 0)
        {
            n = que.poll(); /* remove the top element of the queue */
            System.out.print(n + " "); /* print the top element of the queue */
            for (int i = 0; i < adj[n].size(); i++) /* iterate through the linked list and push all neighbors into queue */
            {
                a = adj[n].get(i);
                if (!nodes[a]) /* only insert nodes into queue if they have not been explored already */
                {
                    nodes[a] = true;
                    que.add(a);
                }
            }
        }
    }
    public static void main(String args[])
    {
        BFSTraversal graph = new BFSTraversal(10);
        graph.insertEdge(0, 1);
        graph.insertEdge(0, 2);
        graph.insertEdge(0, 3);
        graph.insertEdge(1, 3);
        graph.insertEdge(2, 4);
        graph.insertEdge(3, 5);
        graph.insertEdge(3, 6);
        graph.insertEdge(4, 7);
        graph.insertEdge(4, 5);
        graph.insertEdge(5, 2);
        graph.insertEdge(6, 5);
        graph.insertEdge(7, 5);
        graph.insertEdge(7, 8);
        System.out.println("Breadth First Traversal for the graph is:");
        graph.BFS(2);
    }
}


Conclusion
In this section, we have discussed the breadth-first search technique along with an
example and its implementation in the Java programming language, and the real-life
applications that show why BFS is an important algorithm.

Depth First Search (DFS) Algorithm


The depth first search (DFS) algorithm starts with the initial node of the graph G and
goes deeper and deeper until we find the goal node or a node which has no children.
The algorithm then backtracks from the dead end towards the most recent node that is
not yet completely explored.

The data structure used in DFS is a stack. The process is similar to the BFS
algorithm. In DFS, the edges that lead to an unvisited node are called discovery edges,
while the edges that lead to an already visited node are called back edges.

Algorithm
o Step 1: SET STATUS = 1 (ready state) for each node in G
o Step 2: Push the starting node A on the stack and set its STATUS = 2 (waiting state)
o Step 3: Repeat Steps 4 and 5 until STACK is empty
o Step 4: Pop the top node N. Process it and set its STATUS = 3 (processed state)
o Step 5: Push on the stack all the neighbours of N that are in the ready state (whose STATUS
= 1) and set their
STATUS = 2 (waiting state)
[END OF LOOP]
o Step 6: EXIT
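The six steps above can be sketched with an explicit stack in C. This is a minimal sketch over a small illustrative adjacency matrix (our own example graph, not the lettered graph below); the status codes 1/2/3 correspond to the ready/waiting/processed states:

```c
#include <assert.h>

#define N 5

/* adjacency matrix of a small illustrative directed graph */
int adj[N][N] = {
    {0, 1, 1, 0, 0},
    {0, 0, 0, 1, 0},
    {0, 0, 0, 1, 0},
    {0, 0, 0, 0, 1},
    {0, 0, 0, 0, 0},
};

/* DFS from 'start'; records the visit order in out[], returns the count.
   status: 1 = ready, 2 = waiting (on stack), 3 = processed */
int dfs(int start, int out[]) {
    int status[N], stack[N], top = -1, count = 0;
    for (int i = 0; i < N; i++)
        status[i] = 1;                     /* Step 1: all nodes ready */
    stack[++top] = start;                  /* Step 2: push the start node */
    status[start] = 2;
    while (top >= 0) {                     /* Step 3: until the stack is empty */
        int n = stack[top--];              /* Step 4: pop and process */
        status[n] = 3;
        out[count++] = n;
        for (int w = N - 1; w >= 0; w--)   /* Step 5: push ready neighbours */
            if (adj[n][w] && status[w] == 1) {
                stack[++top] = w;
                status[w] = 2;
            }
    }
    return count;                          /* Step 6: exit */
}
```

Neighbours are pushed in descending order so that the lowest-numbered neighbour sits on top of the stack and is visited first.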

Example:
Consider the graph G along with its adjacency list, given in the figure below. Calculate
the order in which all the nodes of the graph are printed, starting from node H, by
using the depth first search (DFS) algorithm.

Solution :
Push H onto the stack

1. STACK : H

Pop the top element of the stack, i.e. H, print it and push all the neighbours of H that are in the ready state onto the stack.

1. Print H
2. STACK : A

Pop the top element of the stack i.e. A, print it and push all the neighbours of A onto the
stack that are in ready state.

1. Print A
2. Stack : B, D

Pop the top element of the stack i.e. D, print it and push all the neighbours of D onto the
stack that are in ready state.

1. Print D

2. Stack : B, F

Pop the top element of the stack i.e. F, print it and push all the neighbours of F onto the
stack that are in ready state.

1. Print F
2. Stack : B

Pop the top of the stack, i.e. B, and push all its neighbours that are in the ready state.

1. Print B
2. Stack : C

Pop the top of the stack i.e. C and push all the neighbours.

1. Print C
2. Stack : E, G

Pop the top of the stack i.e. G and push all its neighbours.

1. Print G
2. Stack : E

Pop the top of the stack i.e. E and push all its neighbours.

1. Print E
2. Stack :

Hence, the stack now becomes empty and all the nodes of the graph have been traversed.

The printing sequence of the graph will be :

1. H → A → D → F → B → C → G → E

Minimum Spanning tree


A minimum spanning tree can be defined as the spanning tree in which the sum of the weights of the edges is minimum. The weight of a spanning tree is the sum of the weights assigned to its edges. In the real world, this weight can represent distance, traffic load, congestion, or any other value.

Example of minimum spanning tree

Let's understand the minimum spanning tree with the help of an example.

The sum of the edge weights of the above graph is 16. Now, some of the possible spanning trees created from the above graph are -

So, the minimum spanning tree that is selected from the above spanning trees for the
given weighted graph is -

Applications of minimum spanning tree
The applications of the minimum spanning tree are given as follows -

o A minimum spanning tree can be used to design water-supply networks, telecommunication networks, and electrical grids.
o It can be used to find paths in a map.

Algorithms for Minimum spanning tree


A minimum spanning tree can be found from a weighted graph by using the algorithms
given below -

o Prim's Algorithm
o Kruskal's Algorithm

Dijkstra’s shortest path algorithm | Greedy Algo-7



Given a graph and a source vertex in the graph, find the shortest paths from the source to
all vertices in the given graph.

Dijkstra’s algorithm is very similar to Prim’s algorithm for the minimum spanning tree. Like Prim’s MST, we generate an SPT (shortest path tree) with a given source as the root. We maintain two sets: one contains the vertices included in the shortest-path tree, and the other contains the vertices not yet included. At every step of the algorithm, we find a vertex from the second set (not yet included) that has the minimum distance from the source.
Below are the detailed steps used in Dijkstra’s algorithm to find the shortest path from a
single source vertex to all other vertices in the given graph.
Algorithm
1) Create a set sptSet (shortest path tree set) that keeps track of vertices included in the
shortest-path tree, i.e., whose minimum distance from the source is calculated and
finalized. Initially, this set is empty.
2) Assign a distance value to all vertices in the input graph. Initialize all distance values
as INFINITE. Assign distance value as 0 for the source vertex so that it is picked first.
3) While sptSet doesn’t include all vertices
….a) Pick a vertex u which is not there in sptSet and has a minimum distance value.
….b) Include u to sptSet.
….c) Update distance value of all adjacent vertices of u. To update the distance values,
iterate through all adjacent vertices. For every adjacent vertex v, if the sum of distance
value of u (from source) and weight of edge u-v, is less than the distance value of v, then
update the distance value of v.

Let us understand with the following example:

The set sptSet is initially empty and distances assigned to vertices are {0, INF, INF, INF,
INF, INF, INF, INF} where INF indicates infinite. Now pick the vertex with a minimum
distance value. The vertex 0 is picked, include it in sptSet. So sptSet becomes {0}. After
including 0 to sptSet, update distance values of its adjacent vertices. Adjacent vertices of
0 are 1 and 7. The distance values of 1 and 7 are updated as 4 and 8. The following
subgraph shows vertices and their distance values, only the vertices with finite distance
values are shown. The vertices included in SPT are shown in green colour.

Pick the vertex with minimum distance value and not already included in SPT (not in sptSet). The vertex 1 is picked and added to sptSet. So sptSet now becomes {0, 1}.
Update the distance values of adjacent vertices of 1. The distance value of vertex 2
becomes 12.

Pick the vertex with minimum distance value and not already included in SPT (not in sptSet). Vertex 7 is picked. So sptSet now becomes {0, 1, 7}. Update the distance values of adjacent vertices of 7. The distance values of vertices 6 and 8 become finite (15 and 9 respectively).

Pick the vertex with minimum distance value and not already included in SPT (not in sptSet). Vertex 6 is picked. So sptSet now becomes {0, 1, 7, 6}. Update the distance values of adjacent vertices of 6. The distance values of vertices 5 and 8 are updated.

We repeat the above steps until sptSet includes all vertices of the given graph. Finally, we
get the following Shortest Path Tree (SPT).

How to implement the above algorithm?



We use a boolean array sptSet[] to represent the set of vertices included in SPT. If a value
sptSet[v] is true, then vertex v is included in SPT, otherwise not. Array dist[] is used to
store the shortest distance values of all vertices.


// A C++ program for Dijkstra's single source shortest path algorithm.
// The program uses the adjacency matrix representation of the graph.
#include <iostream>
#include <limits.h>
using namespace std;

// Number of vertices in the graph
#define V 9

// A utility function to find the vertex with minimum distance value,
// from the set of vertices not yet included in the shortest path tree
int minDistance(int dist[], bool sptSet[])
{
    // Initialize min value
    int min = INT_MAX, min_index;

    for (int v = 0; v < V; v++)
        if (sptSet[v] == false && dist[v] <= min)
            min = dist[v], min_index = v;

    return min_index;
}

// A utility function to print the constructed distance array
void printSolution(int dist[])
{
    cout << "Vertex \t Distance from Source" << endl;
    for (int i = 0; i < V; i++)
        cout << i << " \t\t" << dist[i] << endl;
}

// Function that implements Dijkstra's single source shortest path
// algorithm for a graph represented using an adjacency matrix
void dijkstra(int graph[V][V], int src)
{
    int dist[V];    // The output array. dist[i] will hold the shortest
                    // distance from src to i

    bool sptSet[V]; // sptSet[i] will be true if vertex i is included in the
                    // shortest path tree or the shortest distance from src
                    // to i is finalized

    // Initialize all distances as INFINITE and sptSet[] as false
    for (int i = 0; i < V; i++)
        dist[i] = INT_MAX, sptSet[i] = false;

    // Distance of source vertex from itself is always 0
    dist[src] = 0;

    // Find shortest path for all vertices
    for (int count = 0; count < V - 1; count++) {
        // Pick the minimum distance vertex from the set of vertices not
        // yet processed. u is always equal to src in the first iteration.
        int u = minDistance(dist, sptSet);

        // Mark the picked vertex as processed
        sptSet[u] = true;

        // Update dist value of the adjacent vertices of the picked vertex.
        for (int v = 0; v < V; v++)
            // Update dist[v] only if it is not in sptSet, there is an
            // edge from u to v, and the total weight of the path from src
            // to v through u is smaller than the current value of dist[v]
            if (!sptSet[v] && graph[u][v] && dist[u] != INT_MAX
                && dist[u] + graph[u][v] < dist[v])
                dist[v] = dist[u] + graph[u][v];
    }

    // Print the constructed distance array
    printSolution(dist);
}

// Driver program to test the above functions
int main()
{
    /* Let us create the example graph discussed above */
    int graph[V][V] = { { 0, 4, 0, 0, 0, 0, 0, 8, 0 },
                        { 4, 0, 8, 0, 0, 0, 0, 11, 0 },
                        { 0, 8, 0, 7, 0, 4, 0, 0, 2 },
                        { 0, 0, 7, 0, 9, 14, 0, 0, 0 },
                        { 0, 0, 0, 9, 0, 10, 0, 0, 0 },
                        { 0, 0, 4, 14, 10, 0, 2, 0, 0 },
                        { 0, 0, 0, 0, 0, 2, 0, 1, 6 },
                        { 8, 11, 0, 0, 0, 0, 1, 0, 7 },
                        { 0, 0, 2, 0, 0, 0, 6, 7, 0 } };

    dijkstra(graph, 0);

    return 0;
}

// This code is contributed by shivanisinghss2110

Output:
Vertex Distance from Source
0 0
1 4
2 12
3 19

4 21
5 11
6 9
7 8
8 14

