
Unit-3_Notes_Searching_Sorting_DKPJ[1]

The document provides an overview of searching and sorting algorithms, including linear search, binary search, and various sorting methods such as bubble sort, selection sort, and insertion sort. It also discusses hashing techniques, hash functions, and collision resolution methods. Each algorithm is accompanied by its complexity analysis and C programming implementations.

Uploaded by

jjola2512

BCS301 DATA STRUCTURE

Notes- Searching & Sorting


UNIT-III

Searching: Concept of Searching, Sequential search, Index Sequential Search, Binary


Search. Concept of Hashing & Collision resolution Techniques used in Hashing.
Sorting: Insertion Sort, Selection Sort, Bubble Sort, Quick Sort, Merge Sort, Heap Sort and Radix
Sort.

What is searching?

Searching is the process of finding whether or not a specific value exists in an array. The linear
search or sequential search works by checking every element of the array one by one until a
match is found.

What is searching? Write an algorithm of Linear Search.


Explain linear search with an example.
Write a program in C to search for an element in a list using linear search.

Linear Search Algorithm

Algorithm LinearSearch(ARR,N,POS, VAL)


1: SET POS = -1
2: SET I = 0
3: Repeat Step 4 while I < N
4: IF ARR[I] = VAL
SET POS = I
PRINT POS
Go to Step 6
[END OF IF]
SET I = I + 1
[END OF LOOP]
5: IF POS = –1
PRINT "VALUE IS NOT PRESENT IN THE ARRAY"
[END OF IF]
6: EXIT

Example
Consider the following array, {4, 1, 6, 8, 3}, in which we have to search for the element X = 8
using linear search.

Starting from the first element, compare X with each element in the list. Return the index if the
item X is found; otherwise report that the element is not found.

Here the element is found at i = 3.
C Example
The linear search algorithm can be implemented in C as follows:
//Linear search in C
#include <stdio.h>
int main()
{
    int arr[5] = {4, 1, 6, 8, 3};
    int x = 8, n = 5, pos = -1;
    //compare x with each element in array
    for (int i = 0; i < n; i++)
    {
        if (arr[i] == x)
        {
            //print element found and exit
            pos = i;
            printf("Element found at position %d ", pos);
            break;
        }
    }
    if (pos == -1)
        printf("Item Not found");
    return 0;
}

Linear Search Complexity


The linear search takes O(n) time to execute, where n is the number of elements in the array.

Binary Search

Write an algorithm of Binary search.


Explain binary search with example.
Write a program in C to search for an element in a given list using binary
search.

Searching is the process of finding whether or not a specific value exists in an array. The binary
search algorithm can be used to search for an element in a sorted array.

Binary Search Algorithm

Algorithm BinarySearch(A, SIZE, VAL, POS)


1: SET LOW = 0, HIGH = SIZE-1, POS=-1
2: Repeat Steps 3 and 4 while LOW <= HIGH
3: SET MID = (LOW + HIGH)/2
4: IF A[MID] = VAL
SET POS = MID
PRINT POS
Go to Step 6
ELSE IF A[MID] > VAL
SET HIGH = MID-1
ELSE
SET LOW = MID+1
[END OF IF]
[END OF LOOP]
5: IF POS=-1
PRINT “VALUE IS NOT PRESENT IN THE ARRAY”
[END OF IF]
6: EXIT

Working of Binary Search

The binary search algorithm works as follows:


• The array is divided into two halves by a middle element.
• If the element is found in the middle, the position is returned; otherwise
• If the value is less than the middle, then the first half of the array is searched.
• If the value is greater than the middle, then the second half of the array is
searched.
Consider the following array of 7 elements, {2, 3, 4, 5, 6, 7, 8}, where we have to search for the
element 3.

Set two pointers, low and high, to the first and last positions of the array, and find the middle
element.

Since the item is less than the middle element, i.e. 3 < 5, we have to search in the first half of
the array. This is done by updating the high pointer as high = mid - 1.

Now repeat the same steps until low meets high. The new middle element equals x = 3, so the
element is found.

C Program : Binary Search

The binary search algorithm can be implemented in C as follows:

#include <stdio.h>
int main(void)
{
int arr[7] = {2, 3, 4, 5, 6, 7, 8};
int n=7, x=3, low=0, high=n-1, pos=-1;
while (low <= high)
{
int mid = low + (high - low) / 2;
if (arr[mid] == x)
{
pos = mid;
break;
}
else if (arr[mid] < x)
low = mid + 1;
else
high = mid - 1;
}
if(pos == -1)
printf("Element not found in array");
else
printf("Element found at position %d", pos+1);
return 0;
}

Binary Search Complexity


The binary search takes O(log n) time to execute, where n is the number of elements in the array.
Hashing
Hashing is a technique of mapping a large set of data into table indices. Hashing allows data to
be inserted or deleted in O(1) time.

The importance of Hashing


We know that the time complexity of search algorithms depends upon the number of elements
in the list.
Linear search and binary search have time complexities of O(n) and O(log n)
respectively. Hashing is a search technique whose cost is independent of the number of elements
in the list. It uniquely identifies a specific item from a group of similar items. Therefore, hashing
allows searching, updating and retrieval of data in constant time, that is, O(1).

Hash table
A hash table is a data structure that stores data as key-value pairs. Each key is matched to a
value in the hash table.

Key and Value in Hash table


• Keys are used to index the data.
• Value specifies the data associated with the keys.

Hash Function
A hash function is a mathematical formula, used for mapping keys into table indices. This
process of mapping the keys to corresponding indices in a hash table is called hashing.
Assume k is a key and h(x) is a hash function. In a hash table, an element linked with key k
will be stored at the index h(k).
.
The following figure shows a hash table in which each key from the set K is mapped to indices
generated by a hash function.

Hash table
Characteristics of good hash function
• It should be very simple and quick to compute.
• The cost of execution must be low.
• It should distribute the hash addresses as evenly as possible within the table, so there are
fewer collisions.
• The same hash value must be generated for a given input value.

Type of Hash functions
Following are hash functions which use numeric keys.
Division
Multiplication
Mid square
Folding

Division Method
Choose a number m larger than the number n of keys. m is normally chosen to be a prime
number. This method divides k by m and then uses the remainder obtained. The hash function is
given by
H(k) = k mod m
For example: Suppose k = 1234, setting m=97, hash values can be calculated as:
h(1234) = 1234 % 97 = 70

Multiplication Method
The following steps are involved in multiplication method.
1. Choose a constant A such that 0 < A < 1
2. Multiply k by A
3. Extract the fractional part
4. Multiply the result by the size of hash table (m).
The hash function can be given as:
h(k) = ⌊ m (kA mod 1) ⌋
The best choice for A is 0.6180339887.
For example: Suppose k = 12345 and the size of the table is 1000. The hash value can be
calculated as:
h(12345) = ⌊ 1000 (12345 * 0.618033 mod 1) ⌋
= ⌊ 1000 (7629.617385 mod 1) ⌋
= ⌊ 1000 (0.617385) ⌋
= ⌊ 617.385 ⌋
= 617
Mid-Square Method
The mid-square method works in the following steps:
1. Find square of the key.
2. Extract the middle r digits of the result.
The hash function can be given by:
h(k) = s
where s is obtained by selecting r digits of k2.
For example: Suppose k = 3205 and the hash table is of size 100. Since the index of the hash
table varies between 0 and 99, we can choose r = 2. Then:
k = 3205, k2 = 10272025, h(3205) = 72

Folding Method
• The key k is divided into a number of parts of same length k1, k2, … , kr.
• And they are added together ignoring last carry if any.
The hash function can be given by:
h(k) = k1 + k2 + … + kr

Given a hash table of 100 locations, calculate the hash value using the folding method for the key
5678.

Key 5678
Parts 56 and 78
Sum 134
Hash Value 34

Hashing Collision and Collision Resolution


When a hash function maps two different keys to the same location, a collision occurs. Since
we cannot store two records in the same location, we solve the problem by using collision
resolution techniques.
The two most popular collision resolution techniques are:
• Open Addressing (Linear Probing, Quadratic Probing, Double Hashing)
• Chaining

Sorting
Bubble Sort
Algorithm
We assume list is an array of n elements. We further assume that the swap function swaps the
values of the given array elements.

Algorithm BubbleSort(list, n)


begin
for pass = 1 to n-1
for all adjacent pairs list[i], list[i+1] in the unsorted part
if list[i] > list[i+1]
swap(list[i], list[i+1])
end if
end for
end for
return list
end BubbleSort

Example : Bubble Sort


We take an unsorted array for our example.
Input: arr[] = {6, 3, 0, 5}
First Pass:
The largest element is placed in its correct position, i.e., the end of the array.

Second Pass:
Place the second largest element at its correct position.

Third Pass:
Place the remaining two elements at their correct positions.

Complexity of Bubble Sort : O(n2)

C Program : Bubble sort

#include<stdio.h>
int main()
{
    int i, j, n, a[20], temp;

    printf("n : ");
    scanf("%d", &n);
    for(i = 0; i < n; i++)
    {
        printf("a[%d] = ", i);
        scanf("%d", &a[i]);
    }
    printf("Before sorting : ");
    for(i = 0; i < n; i++)
    {
        printf("\na[%d] = %d", i, a[i]);
    }

    for (i = 0; i < n - 1; i++)
    {
        for (j = 0; j < n - i - 1; j++)
        {
            if (a[j] > a[j + 1])
            {
                temp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = temp;
            }
        }
    }

    printf("\nAfter sorting : ");
    for(i = 0; i < n; i++)
    {
        printf("\na[%d] = %d", i, a[i]);
    }
    printf("\n");
    return 0;
}

Selection Sort
How does the selection sort method work?
Step 1 − Set MIN to location 0
Step 2 − Search the minimum element in the list
Step 3 − Swap with value at location MIN
Step 4 − Increment MIN to point to next element
Step 5 − Repeat until list is sorted

Algorithm : Selection Sort


Algorithm SelectionSort(A[ ],N)
Begin
Step 1 :start
Step 2: Repeat For K = 0 to N – 2
Begin
Step 3 : Set MIN = K
Step 4 : Repeat for J = K + 1 to N – 1
Begin
If A[ J ] < A [ MIN ]
Set MIN = J
End For
Step 5 : Swap A [ K ] with A [ MIN ]
End For
Step 6 : stop
Example: Selection Sort
Consider the following depicted array as an example.

For the first position in the sorted list, the whole list is scanned sequentially. The first position is
where 14 is stored presently; we search the whole list and find that 10 is the lowest value.

So we swap 14 with 10. After one iteration, 10, which happens to be the minimum value in the list,
appears in the first position of the sorted list.

For the second position, where 33 is residing, we start scanning the rest of the list in a linear manner.

We find that 14 is the second lowest value in the list and it should appear at the second place. We
swap these values.

After two iterations, two least values are positioned at the beginning in a sorted manner.

C Program : Selection Sort
#include <stdio.h>
int main()
{
    int arr[20], i, j, temp, n, min;
    printf("Enter the number of elements : ");
    scanf("%d", &n);
    printf("Enter %d numbers : ", n);
    for (i = 0; i < n; i++)
    {
        scanf("%d", &arr[i]);
    }
    for (i = 0; i < n-1; i++)
    {
        min = i;
        for (j = i+1; j < n; j++)
            if (arr[j] < arr[min])
                min = j;
        temp = arr[min];
        arr[min] = arr[i];
        arr[i] = temp;
    }
    printf("Sorted array is : ");
    for (i = 0; i < n; i++)
    {
        printf(" %d ", arr[i]);
    }
    return 0;
}
Output

Enter the number of elements : 5


Enter 5 numbers : 11 99 44 77 22
Sorted array is : 11 22 44 77 99

Insertion Sort

Insertion sort works similar to the sorting of playing cards in hands. It is assumed that the
first card is already sorted in the card game, and then we select an unsorted card. If the
selected unsorted card is greater than the first card, it will be placed at the right side;
otherwise, it will be placed at the left side. Similarly, all unsorted cards are taken and put in
their exact place.

The same approach is applied in insertion sort. The idea behind insertion sort is to take one
element at a time and insert it into its correct place within the already-sorted part of the array.
Although it is simple to use, it is not appropriate for large data sets, as the time complexity of
insertion sort in the average case and worst case is O(n2), where n is the number of items.
Insertion sort is less efficient than other sorting algorithms like heap sort, quick sort, merge
sort, etc.

Insertion sort has various advantages such as -


o Simple implementation
o Efficient for small data sets
o Adaptive, i.e., it is appropriate for data sets that are already substantially sorted.
Now, let's see the algorithm of insertion sort.
Algorithm
The simple steps of achieving the insertion sort are listed as follows -
Step 1 - If the element is the first element, assume that it is already sorted.
Step2 - Pick the next element, and store it separately in a key.
Step3 - Now, compare the key with all elements in the sorted array.
Step 4 - If the element in the sorted array is smaller than the current element, then move to
the next element. Else, shift greater elements in the array towards the right.
Step 5 - Insert the value.
Step 6 - Repeat until the array is sorted.

Algorithm insertionSort(array A)
Begin
for i = 1 to length(A) - 1
key ← A[i]
j ← i - 1
while j >= 0 and A[j] > key
A[j + 1] ← A[j]
j ← j - 1
end while
A[j + 1] ← key
end for
End

Working of Insertion sort Algorithm
Now, let's see the working of the insertion sort Algorithm.
To understand the working of the insertion sort algorithm, let's take an unsorted array. It
will be easier to understand the insertion sort via an example.
Let the elements of array are -

Initially, the first two elements are compared in insertion sort.

Here, 31 is greater than 12. That means both elements are already in ascending order. So,
for now, 12 is stored in a sorted sub-array.

Now, move to the next two elements and compare them.

Here, 25 is smaller than 31. So, 31 is not at correct position. Now, swap 31 with 25. Along
with swapping, insertion sort will also check it with all elements in the sorted array.
For now, the sorted array has only one element, i.e. 12. So, 25 is greater than 12. Hence, the
sorted array remains sorted after swapping.

Now, two elements in the sorted array are 12 and 25. Move forward to the next elements
that are 31 and 8.

Both 31 and 8 are not sorted. So, swap them.

After swapping, elements 25 and 8 are unsorted.

So, swap them.

Now, elements 12 and 8 are unsorted.

So, swap them too.

Now, the sorted array has three items that are 8, 12 and 25. Move to the next items that are
31 and 32.

Hence, they are already sorted. Now, the sorted array includes 8, 12, 25 and 31.

Move to the next elements that are 32 and 17.

17 is smaller than 32. So, swap them.

Swapping makes 31 and 17 unsorted. So, swap them too.

Now, swapping makes 25 and 17 unsorted. So, perform swapping again.

Now, the array is completely sorted.


Insertion sort complexity
Now, let's see the time complexity of insertion sort in best case, average case, and in worst
case. We will also see the space complexity of insertion sort.
1. Time Complexity
Case Time Complexity
Best Case O(n)
Average Case O(n2)
Worst Case O(n2)

o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of insertion sort is O(n).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of insertion sort is O(n2).

o Worst Case Complexity - It occurs when the array elements are required to be
sorted in reverse order. That means suppose you have to sort the array elements in
ascending order, but its elements are in descending order. The worst-case time
complexity of insertion sort is O(n2).

Program: Write a program to implement insertion sort in C language.

#include <stdio.h>

void insert(int a[], int n) /* function to sort an array with insertion sort */
{
    int i, j, temp;
    for (i = 1; i < n; i++) {
        temp = a[i];
        j = i - 1;

        while(j >= 0 && temp <= a[j]) /* Move the elements greater than temp one position ahead of their current position */
        {
            a[j+1] = a[j];
            j = j-1;
        }
        a[j+1] = temp;
    }
}

void printArr(int a[], int n) /* function to print the array */
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    insert(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);

    return 0;
}

Output:

Before sorting array elements are -
12 31 25 8 32 17
After sorting array elements are -
8 12 17 25 31 32

Quick Sort
Quick Sort is a sorting algorithm that works based on the divide-and-conquer strategy. It
works as follows:
1. Choose an element as pivot from the array.
2. Position the pivot element in such a way that all elements less than the pivot appear
before it and all elements greater than the pivot appear after it (equal values can go
either way).
3. Sort the two sub-arrays recursively.
The pivot element can be selected in any of the following ways:
• Pick first element as pivot (Explained Here).
• Pick last element as pivot.
• Pick a random element as pivot.
• Pick median as pivot.

Working of QuickSort Algorithm


Suppose we have to sort the following array of 6 elements using the quicksort algorithm.

1. Choose the pivot element


Here we choose the first element as the pivot.

Selecting a pivot element


2. Rearrange the elements
Now we have to rearrange the array elements in such a way that all elements less than the
pivot appear before it and all elements greater than the pivot appear after it, as shown in the
figure.

Array after rearranging elements

The following steps are done for rearranging the array:

Create three pointers: pivot = 0, left = 0 and right = n-1.
Then, the comparison is done as follows:
1. Starting from the right, compare each element with the pivot.
   1. If pivot < right then, continue comparing until right = pivot.
   2. If pivot > right then, swap the two values and go to step 2.
   3. Set pivot = right.
2. Starting from the left, compare each element with the pivot.
   1. If pivot > left then, continue comparing until left = pivot.
   2. If pivot < left then, swap the two values and go to step 1.
   3. Set pivot = left.

Set the pointers and start comparing from the right. Since pivot < right is true, continue
comparing with the next element.

Now pivot > right, so swap them and set pivot = right.

Start comparing from left to right. Since pivot > left, continue comparing.

Since pivot < left, swap them and set pivot = left.

Continue the process until we get the following array, where the pivot is placed in the correct
position.

3. Dividing Array
Pivot elements are chosen separately for the left and right sub-arrays and step 2 is then
repeated. We get the following array after the quicksort algorithm has been completed.

The formal algorithm for quicksort is given below:
Algorithm PARTITION(ARR, BEG, END, LOC)
1: SET LEFT = BEG, RIGHT = END, PIVOT = BEG, FLAG = 0
2: Repeat Steps 3 to 6 while FLAG = 0
3: Repeat while ARR[PIVOT] <= ARR[RIGHT] AND PIVOT != RIGHT
SET RIGHT = RIGHT-1
[END OF LOOP]
4: IF PIVOT = RIGHT
SET FLAG=1
ELSE IF ARR[PIVOT] > ARR[RIGHT]
SWAP ARR[PIVOT] with ARR[RIGHT]
SET PIVOT = RIGHT
[END OF IF]
5: IF FLAG = 0
Repeat while ARR[PIVOT] >= ARR[LEFT] AND PIVOT != LEFT
SET LEFT = LEFT+1
[END OF LOOP]
6: IF PIVOT = LEFT
SET FLAG = 1
ELSE IF ARR[PIVOT] < ARR[LEFT]
SWAP ARR[PIVOT] with ARR[LEFT]
SET PIVOT = LEFT
[END OF IF]
[END OF IF]
7: [END OF LOOP]
8: SET LOC = PIVOT
9: END

Algorithm: QUICK_SORT (ARR, BEG, END)


Algorithm QUICKSORT(ARR,BEG,END)
1: IF (BEG < END)
CALL PARTITION (ARR, BEG, END, LOC)
CALL QUICKSORT(ARR, BEG, LOC-1)
CALL QUICKSORT(ARR, LOC+1, END)
[END OF IF]
2: END

C Program :Quicksort
#include <stdio.h>
int partition(int a[], int beg, int end);
void quick_sort(int a[], int beg, int end);
int main()
{
int arr[6]={26, 10, 35, 18, 25, 44}, i, n=6;
quick_sort(arr, 0, n - 1);
printf("\n The sorted array is: \n");
for (i = 0; i < n; i++)
printf(" %d\t", arr[i]);
return 0;
}

int partition(int a[], int beg, int end)


{
int left, right, temp, pivot, flag;
pivot = left = beg;
right = end;
flag = 0;
while (flag != 1)
{
while ((a[pivot] <= a[right]) && (pivot != right))
right--;
if (pivot == right)
flag = 1;
else if (a[pivot] > a[right])
{
temp = a[pivot];
a[pivot] = a[right];
a[right] = temp;
pivot = right;
}
if (flag != 1)
{
while ((a[pivot] >= a[left]) && (pivot != left))
left++;
if (pivot == left)
flag = 1;
else if (a[pivot] < a[left])
{
temp = a[pivot];
a[pivot] = a[left];
a[left] = temp;
pivot = left;
}
}
}
return pivot;
}

void quick_sort(int a[], int beg, int end)


{
int pivot;
if (beg < end)
{
pivot = partition(a, beg, end);
quick_sort(a, beg, pivot - 1);
quick_sort(a, pivot + 1, end);
}
}
The Complexity of QuickSort Algorithm

• Best case – O(n log n)


• Worst-case – O(n2)
• Average case – O(n log n)

Pros and Cons of QuickSort

Quicksort is faster than other algorithms such as bubble sort, selection sort, and insertion sort. It
can be used to sort arrays of various sizes. On the other hand, quicksort is complicated and
massively recursive.
Merge Sort
The merge sort algorithm is an implementation of the divide and conquer technique. Thus, it gets
completed in three steps:
1. Divide: In this step, the array/list divides itself recursively into sub-arrays until the base case is
reached.
2. Recursively solve: Here, the sub-arrays are sorted using recursion.
3. Combine: This step makes use of the merge( ) function to combine the sub-arrays into the final
sorted array.
Algorithm for Merge Sort

Step 1: Find the middle index of the array.


Middle = first + (last – first)/2
Step 2: Divide the array from the middle.
Step 3: Call merge sort for the first half of the array
MergeSort(array, first, middle)
Step 4: Call merge sort for the second half of the array.
MergeSort(array, middle+1, last)
Step 5: Merge the two sorted halves into a single sorted array.

Algorithm

Algorithm MERGE_SORT(arr, beg, end)


1. if beg < end
set mid = (beg + end)/2
MERGE_SORT(arr, beg, mid)
MERGE_SORT(arr, mid + 1, end)
MERGE (arr, beg, mid, end)
2. end of if

3. END MERGE_SORT

Algorithm MERGE (a, beg, mid, end)


1 Begin
int i, j, k;
int n1 = mid - beg + 1;
int n2 = end - mid;
int LeftArray[n1], RightArray[n2]; //temporary arrays
2 for (int i = 0; i < n1; i++)
LeftArray[i] = a[beg + i];
3 for (int j = 0; j < n2; j++)
RightArray[j] = a[mid + 1 + j];

i = 0; /* initial index of first sub-array */


j = 0; /* initial index of second sub-array */
k = beg; /* initial index of merged sub-array */

4 while (i < n1 && j < n2)
{
if(LeftArray[i] <= RightArray[j])
{
a[k] = LeftArray[i];
i++;
}
else
{
a[k] = RightArray[j];
j++;
}
k++;
}
5 while (i<n1)
{
a[k] = LeftArray[i];
i++;
k++;
}

6 while (j<n2)
{
a[k] = RightArray[j];
j++;
k++;
}
7 end

Example : Merge Sort


Let the elements of array are –

According to the merge sort, first divide the given array into two equal halves. Merge sort keeps
dividing the list into equal parts until it cannot be further divided.

As there are eight elements in the given array, so it is divided into two arrays of size 4.

Now, again divide these two arrays into halves. As they are of size 4, so divide them into new arrays
of size 2.

Now, again divide these arrays to get the atomic value that cannot be further divided.

Now, combine them in the same manner they were broken.

In combining, first compare the element of each array and then combine them into another array in
sorted order.

So, first compare 12 and 31, both are in sorted positions. Then compare 25 and 8, and in the list of
two values, put 8 first followed by 25. Then compare 32 and 17, sort them and put 17 first followed
by 32. After that, compare 40 and 42, and place them sequentially.

In the next iteration of combining, we now compare the arrays with two data values and merge them
into an array of four values in sorted order.

Now, there is a final merging of the arrays. After the final merging of above arrays, the array will
look like -

Now, the array is completely sorted.

Merge sort complexity


Now, let's see the time complexity of merge sort in best case, average case, and in worst case. We
will also see the space complexity of the merge sort.
1. Time Complexity
Case Time Complexity

Best Case O(n*logn)

Average Case O(n*logn)

Worst Case O(n*logn)

C Program : Merge Sort

#include <stdio.h>

/* Function to merge the subarrays of a[] */
void merge(int a[], int beg, int mid, int end)
{
    int i, j, k;
    int n1 = mid - beg + 1;
    int n2 = end - mid;

    int LeftArray[n1], RightArray[n2]; //temporary arrays

    /* copy data to temp arrays */
    for (int i = 0; i < n1; i++)
        LeftArray[i] = a[beg + i];
    for (int j = 0; j < n2; j++)
        RightArray[j] = a[mid + 1 + j];

    i = 0; /* initial index of first sub-array */
    j = 0; /* initial index of second sub-array */
    k = beg; /* initial index of merged sub-array */

    while (i < n1 && j < n2)
    {
        if(LeftArray[i] <= RightArray[j])
        {
            a[k] = LeftArray[i];
            i++;
        }
        else
        {
            a[k] = RightArray[j];
            j++;
        }
        k++;
    }
    while (i < n1)
    {
        a[k] = LeftArray[i];
        i++;
        k++;
    }

    while (j < n2)
    {
        a[k] = RightArray[j];
        j++;
        k++;
    }
}

void mergeSort(int a[], int beg, int end)
{
    if (beg < end)
    {
        int mid = (beg + end) / 2;
        mergeSort(a, beg, mid);
        mergeSort(a, mid + 1, end);
        merge(a, beg, mid, end);
    }
}

/* Function to print the array */
void printArray(int a[], int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
}

int main()
{
    int a[] = { 12, 31, 25, 8, 32, 17, 40, 42 };
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArray(a, n);
    mergeSort(a, 0, n - 1);
    printf("After sorting array elements are - \n");
    printArray(a, n);
    return 0;
}

Heap Sort
Heap sort processes the elements by creating the min-heap or max-heap using the
elements of the given array. Min-heap or max-heap represents the ordering of array in
which the root element represents the minimum or maximum element of the array.
Heap sort basically recursively performs two main operations -
o Build a heap H, using the elements of array.
o Repeatedly delete the root element of the heap formed in 1st phase.
Before knowing more about the heap sort, let's first see a brief description of Heap.

What is a heap?
A heap is a complete binary tree, and a binary tree is a tree in which each node can have at
most two children. A complete binary tree is a binary tree in which all the levels except the last
level, i.e., the leaf level, are completely filled, and all the nodes are as far left as possible.
What is heap sort?
Heapsort is a popular and efficient sorting algorithm. The concept of heap sort is to
eliminate the elements one by one from the heap part of the list, and then insert them into
the sorted part of the list.

Heapsort is an in-place sorting algorithm.



Algorithm
1. HeapSort(arr)
2. BuildMaxHeap(arr)
3. for i = length(arr) to 2
4. swap arr[1] with arr[i]
5. heap_size[arr] = heap_size[arr] - 1
6. MaxHeapify(arr,1)
7. End

BuildMaxHeap(arr)
1. BuildMaxHeap(arr)
2. heap_size(arr) = length(arr)
3. for i = length(arr)/2 to 1
4. MaxHeapify(arr,i)
5. End

MaxHeapify(arr,i)
1. MaxHeapify(arr,i)
2. L = left(i)
3. R = right(i)
4. if L <= heap_size[arr] and arr[L] > arr[i]
5. largest = L
6. else
7. largest = i
8. if R <= heap_size[arr] and arr[R] > arr[largest]
9. largest = R
10. if largest != i
11. swap arr[i] with arr[largest]
12. MaxHeapify(arr,largest)
13. End

Example : Heap Sort


In heap sort, basically, there are two phases involved in the sorting of elements. By using the
heap sort algorithm, they are as follows –
o The first step includes the creation of a heap by adjusting the elements of the array.
o After the creation of heap, now remove the root element of the heap repeatedly by
shifting it to the end of the array, and then store the heap structure with the
remaining elements.

Now let's see the working of heap sort in detail by using an example. To understand it more
clearly, let's take an unsorted array and try to sort it using heap sort. It will make the
explanation clearer and easier.

First, we have to construct a heap from the given array and convert it into max heap.

After converting the given heap into max heap, the array elements are -

Next, we have to delete the root element (89) from the max heap. To delete this node, we
have to swap it with the last node, i.e. (11). After deleting the root element, we again have
to heapify it to convert it into max heap.

After swapping the array element 89 with 11, and converting the heap into max-heap, the
elements of array are -

In the next step, again, we have to delete the root element (81) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (54). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 81 with 54 and converting the heap into max-heap, the elements of
array are -

In the next step, we have to delete the root element (76) from the max heap again. To delete this
node, we have to swap it with the last node, i.e. (9). After deleting the root element, we again have to
heapify it to convert it into max heap.

After swapping the array element 76 with 9 and converting the heap into max-heap, the
elements of array are -

In the next step, again we have to delete the root element (54) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (14). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 54 with 14 and converting the heap into max-heap, the
elements of array are -

In the next step, again we have to delete the root element (22) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (11). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 22 with 11 and converting the heap into max-heap, the
elements of array are -

In the next step, again we have to delete the root element (14) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (9). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 14 with 9 and converting the heap into max-heap, the
elements of array are -

In the next step, again we have to delete the root element (11) from the max heap. To
delete this node, we have to swap it with the last node, i.e. (9). After deleting the root
element, we again have to heapify it to convert it into max heap.

After swapping the array element 11 with 9, the elements of array are -

Now, the heap has only one element left. After deleting it, the heap will be empty.

After completion of sorting, the array elements are -

Now, the array is completely sorted.

Heap sort complexity


Now, let's see the time complexity of Heap sort in the best case, average case, and worst
case. We will also see the space complexity of Heapsort.

1. Time Complexity

Case            Time Complexity

Best Case       O(n log n)

Average Case    O(n log n)

Worst Case      O(n log n)

o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is already
sorted. The best-case time complexity of heap sort is O(n log n).
o Average Case Complexity - It occurs when the array elements are in jumbled order that is
not properly ascending and not properly descending. The average case time complexity of
heap sort is O(n log n).
o Worst Case Complexity - It occurs when the array elements are required to be sorted in
reverse order. That means suppose you have to sort the array elements in ascending order,
but its elements are in descending order. The worst-case time complexity of heap sort is O(n
log n).

The time complexity of heap sort is O(n log n) in all three cases (best case, average case,
and worst case). The height of a complete binary tree having n elements is log n.
2. Space Complexity

Space Complexity    O(1)

Stable              No

o The space complexity of Heap sort is O(1), since the array is sorted in place.

Program : Heap Sort


#include <stdio.h>

/* Function to heapify a subtree. Here 'i' is the index of the root
   node in array a[], and 'n' is the size of the heap. */
void heapify(int a[], int n, int i)
{
    int largest = i;          // Initialize largest as root
    int left = 2 * i + 1;     // left child
    int right = 2 * i + 2;    // right child

    // If left child is larger than root
    if (left < n && a[left] > a[largest])
        largest = left;

    // If right child is larger than the largest so far
    if (right < n && a[right] > a[largest])
        largest = right;

    // If root is not largest, swap and continue heapifying
    if (largest != i) {
        // swap a[i] with a[largest]
        int temp = a[i];
        a[i] = a[largest];
        a[largest] = temp;

        heapify(a, n, largest);
    }
}

/* Function to implement the heap sort */
void heapSort(int a[], int n)
{
    // Build the max heap
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(a, n, i);

    // One by one extract an element from the heap
    for (int i = n - 1; i > 0; i--) {
        /* Move current root element to the end: swap a[0] with a[i] */
        int temp = a[0];
        a[0] = a[i];
        a[i] = temp;

        heapify(a, i, 0);
    }
}

/* Function to print the array elements */
void printArr(int arr[], int n)
{
    for (int i = 0; i < n; ++i)
    {
        printf("%d", arr[i]);
        printf(" ");
    }
}

int main()
{
    int a[] = {48, 10, 23, 43, 28, 26, 1};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArr(a, n);
    heapSort(a, n);
    printf("\nAfter sorting array elements are - \n");
    printArr(a, n);
    return 0;
}

Output

Before sorting array elements are -
48 10 23 43 28 26 1
After sorting array elements are -
1 10 23 26 28 43 48
Radix Sort.

Radix sort is a linear sorting algorithm used for integers. In radix sort, the elements are
sorted digit by digit, starting from the least significant digit and moving towards the most
significant digit.

The process of radix sort works similar to sorting students' names in alphabetical order. In
this case, 26 radixes (buckets) are formed, one for each letter of the English alphabet. In
the first pass, the names are grouped according to the ascending order of the first letter of
each name. After that, in the second pass, the names are grouped according to the ascending
order of the second letter. And the process continues until we find the sorted list.

Algorithm
1. radixSort(arr)
2. max = largest element in the given array
3. d = number of digits in the largest element (or, max)
4. Now, create 10 buckets for the digits 0 - 9
5. for i -> 0 to d
6.    sort the array elements using counting sort (or any stable sort)
7.    according to the digits at the ith place

Working of Radix sort Algorithm

o First, we have to find the largest element (suppose max) from the given array. Let 'x' be
the number of digits in max. The 'x' is calculated because we need to go through the
significant places of all elements.
o After that, go through one by one each significant place. Here, we have to use any stable
sorting algorithm to sort the digits of each significant place.

Now let's see the working of radix sort in detail by using an example. To understand it more clearly,
let's take the unsorted array [181, 289, 390, 121, 145, 736, 514, 888, 122] and try to sort it using
radix sort. It will make the explanation clearer and easier.

In the given array, the largest element is 736, which has 3 digits. So, the loop will run three
times (i.e., up to the hundreds place). That means three passes are required to sort the array.

Now, first sort the elements on the basis of unit place digits (i.e., x = 0). Here, we are using the
counting sort algorithm to sort the elements.
Pass 1:
In the first pass, the list is sorted on the basis of the digits at 0's place.

After the first pass, the array elements are -

Pass 2:
In this pass, the list is sorted on the basis of the next significant digits (i.e., digits at
10th place).

After the second pass, the array elements are -

Pass 3:
In this pass, the list is sorted on the basis of the next significant digits (i.e., digits at
100th place).

After the third pass, the array elements are -

Now, the array is sorted in ascending order.

Radix sort complexity


Now, let's see the time complexity of Radix sort in the best case, average case, and worst case.
We will also see the space complexity of Radix sort.
1. Time Complexity

Case            Time Complexity

Best Case       Ω(n+k)

Average Case    θ(nk)

Worst Case      O(nk)

o Best Case Complexity - It occurs when there is no sorting required, i.e. the array is
already sorted. The best-case time complexity of Radix sort is Ω(n+k).
o Average Case Complexity - It occurs when the array elements are in jumbled order
that is not properly ascending and not properly descending. The average case time
complexity of Radix sort is θ(nk).
o Worst Case Complexity - It occurs when the array elements are required to be
sorted in reverse order. That means suppose you have to sort the array elements in
ascending order, but its elements are in descending order. The worst-case time
complexity of Radix sort is O(nk).

Radix sort is a non-comparative sorting algorithm. Its time complexity O(nk) is linear in the
number of elements for a fixed number of digits k, which can beat comparison-based
algorithms, whose complexity is O(n log n), when k is small.

C Program: Radix Sort

Program: Write a program to implement Radix sort in C language.


#include <stdio.h>

// function to return the maximum element from the array
int getMax(int a[], int n) {
    int max = a[0];
    for (int i = 1; i < n; i++) {
        if (a[i] > max)
            max = a[i];
    }
    return max;
}

// function to implement counting sort on the digit at 'place'
void countingSort(int a[], int n, int place)
{
    int output[n];
    int count[10] = {0};

    // Calculate count of elements
    for (int i = 0; i < n; i++)
        count[(a[i] / place) % 10]++;

    // Calculate cumulative frequency
    for (int i = 1; i < 10; i++)
        count[i] += count[i - 1];

    // Place the elements in sorted order (backwards for stability)
    for (int i = n - 1; i >= 0; i--) {
        output[count[(a[i] / place) % 10] - 1] = a[i];
        count[(a[i] / place) % 10]--;
    }

    for (int i = 0; i < n; i++)
        a[i] = output[i];
}

// function to implement radix sort
void radixsort(int a[], int n) {

    // get maximum element from array
    int max = getMax(a, n);

    // Apply counting sort to sort elements based on place value
    for (int place = 1; max / place > 0; place *= 10)
        countingSort(a, n, place);
}

// function to print array elements
void printArray(int a[], int n) {
    for (int i = 0; i < n; ++i) {
        printf("%d ", a[i]);
    }
    printf("\n");
}

int main() {
    int a[] = {181, 289, 390, 121, 145, 736, 514, 888, 122};
    int n = sizeof(a) / sizeof(a[0]);
    printf("Before sorting array elements are - \n");
    printArray(a, n);
    radixsort(a, n);
    printf("After applying Radix sort, the array elements are - \n");
    printArray(a, n);
    return 0;
}

After the execution of the above code, the output will be -

Before sorting array elements are -
181 289 390 121 145 736 514 888 122
After applying Radix sort, the array elements are -
121 122 145 181 289 390 514 736 888
