Sorting and Hashing: By, Pankti Doshi

Quick sort is a recursive sorting algorithm that works by partitioning an array around a pivot element. It repeatedly selects a pivot and rearranges elements in the array such that all elements less than the pivot come before the pivot and all elements greater than the pivot come after. This partitions the array into two halves. Quick sort then calls itself recursively to sort the two sub-arrays. The algorithm runs in O(n log n) time on average.


Sorting and Hashing

By,
Pankti Doshi
Quick Sort
• Also known as quick sort or partition exchange sort.
• Consider x as an array of n integers to be sorted.
Quick Sort
• Working:
– Choose an element a from a specific position within the array (for example, a can be chosen as the first element, so that a = x[0]).
– Partition the elements of x so that a is placed into position j and the following conditions hold:
1. Each of the elements in positions 0 through j-1 is less than or equal to a.
2. Each of the elements in positions j+1 through n-1 is greater than or equal to a.
• If the above two conditions hold for a particular a and j, then a is the jth smallest element of x.
Quick Sort Working
Original array:                                          25 57 48 37 12 92 86 33
Place first element 25 at its proper position:           12 25 57 48 37 92 86 33
Repeat the process for the next sub-array:               12 25 57 48 37 92 86 33
Place first element 57 of the sub-array at its position: 12 25 48 37 33 57 92 86
Repeat the process for the next sub-arrays:              12 25 37 33 48 57 92 86
Repeat the process for the next sub-arrays:              12 25 33 37 48 57 92 86
Repeat the process for the next sub-arrays:              12 25 33 37 48 57 86 92
Final array:                                             12 25 33 37 48 57 86 92

We can identify that quick sort is recursive.


Quicksort Algorithm
Given an array of n elements (e.g., integers):
• If the array only contains one element, return
• Else
– pick one element to use as the pivot
– Partition the elements into two sub-arrays:
• Elements less than or equal to the pivot
• Elements greater than the pivot
– Quicksort the two sub-arrays
– Return the results
Finalizing Algorithm for Quick Sort
• Algorithm will sort all the elements in an array x between
positions lb and ub (lb is the lower bound and ub is the upper
bound).
• If(lb>=ub)
return; // array is sorted

• Partition(x, lb, ub, j);


/* Partition the elements of the sub-array such that one of the elements,
possibly x[lb], is now at x[j] (j is an output parameter), and:
x[i] <= x[j] for lb <= i < j
x[i] >= x[j] for j < i <= ub
x[j] is now at its final position */
Finalizing Algorithm for Quick Sort
• quick(x, lb, j-1);
// recursively sort the sub array between positions lb and j-1

• quick(x, j+1, ub);


// recursively sort the sub array between positions j+1 and ub

How partition will be done?


Suggest a method for implementing partition function.
Finalizing Algorithm for Quick Sort
How partition will be done?
• Use two pointers, up and down, initialized to the upper bound and lower bound of the sub-array respectively.
• At any point of time, each element in a position above up is greater than or equal to a.
• Each element in a position below down is less than or equal to a.
Finalizing Algorithm for Quick Sort
How partition will be done?
• The two pointers up and down are moved towards each
other in the following fashion.
Example
We are given an array of n integers to sort:
40 20 10 80 60 50 7 30 100
Pick Pivot Element
There are a number of ways to pick the pivot element. In this example, we will use the first element in the array:

40 20 10 80 60 50 7 30 100
Partitioning Array
Given a pivot, partition the elements of the array such that the resulting array consists of:
1. One sub-array that contains elements <= pivot
2. Another sub-array that contains elements > pivot

The sub-arrays are stored in the original data array.

Partitioning loops through, swapping elements below/above the pivot.
pivot_index = 0     40  20  10  80  60  50   7  30  100
                   [0] [1] [2] [3] [4] [5] [6] [7] [8]
                    down                               up

1. While data[down] <= data[pivot], ++down
2. While data[up] > data[pivot], --up
3. If down < up, swap data[down] and data[up]
4. While up > down, go to 1.
5. Swap data[up] and data[pivot_index]

Trace (pivot = 40):

Pass 1: down stops at 80 ([3]), up stops at 30 ([7]); down < up, so swap:
        40  20  10  30  60  50   7  80  100
Pass 2: down stops at 60 ([4]), up stops at 7 ([6]); down < up, so swap:
        40  20  10  30   7  50  60  80  100
Pass 3: down stops at 50 ([5]), up stops at 7 ([4]); now up < down, so the loop ends.
Step 5: swap data[up] and data[pivot_index]:

pivot_index = 4      7  20  10  30  40  50  60  80  100
                   [0] [1] [2] [3] [4] [5] [6] [7] [8]
                                   up  down
Partition Result

7 20 10 30 40 50 60 80 100

[0] [1] [2] [3] [4] [5] [6] [7] [8]

<= data[pivot] > data[pivot]


Recursion: Quicksort Sub-arrays

7 20 10 30 40 50 60 80 100

[0] [1] [2] [3] [4] [5] [6] [7] [8]

<= data[pivot] > data[pivot]


Pseudocode for Partitioning Array
int partition(int x[], int down, int up)
{
    int pivot, temp;
    pivot = down;      /* x[pivot] is the element whose final position is sought */
    while (down < up)
    {
        while (x[down] <= x[pivot] && down < up)
            down++;                    /* move up the array */
        while (x[up] > x[pivot])
            up--;                      /* move down the array */
        if (down < up)                 /* interchange x[down] and x[up] */
        {
            temp = x[down];
            x[down] = x[up];
            x[up] = temp;
        }
    }
    temp = x[up];                      /* x[up] <= pivot: place pivot at its final position */
    x[up] = x[pivot];
    x[pivot] = temp;
    return up;                         /* final position of the pivot */
}
Pseudocode for Quick sort
void quicksort(int x[], int down, int up)
{
    int q;
    if (up > down)
    {
        q = partition(x, down, up);
        quicksort(x, down, q - 1);
        quicksort(x, q + 1, up);
    }
}
40 20 10 80 60 50 7 30 100

[0] [1] [2] [3] [4] [5] [6] [7] [8]


Merge Sort Algorithm
Merge sort uses the divide, conquer and combine paradigm.

1. Divide – partition the n-element array to be sorted into two sub-arrays of n/2 elements. If A is an array of 0 or 1 element, then it is already sorted.

2. Conquer – sort the two sub-arrays recursively using merge sort.

3. Combine – merge the two sorted sub-arrays of size n/2 to produce the sorted array of n elements.
Merge Sort Algorithm
The basic steps of a merge sort algorithm are as follows:

1. If the array is of length 0 or 1, then it is already sorted.

2. Otherwise, divide the unsorted array into two sub-arrays of about half the size.

3. Use the merge sort algorithm recursively to sort each sub-array.

4. Merge the two sub-arrays to form a single sorted list.

Merge Sort Algorithm
• The merge sort algorithm uses a function merge which combines the sub-arrays to form a sorted array.

• While the merge sort algorithm recursively divides the list into smaller lists, the merge algorithm conquers the list to sort the elements in the individual lists. Finally, the smaller lists are merged to form one list.
Merge Sort Algorithm

[0] [1] [2] [3] [4] [5] [6] [7]

 9  39  45  81  18  27  72  90

1st Iteration              2nd Iteration
Beg = 0                    Beg = 0
End = 7                    End = 3
Mid = (0+7)/2 = 3          Mid = (0+3)/2 = 1
Merge_Sort(Arr, 0, 3)      Merge_Sort(Arr, 0, 1)
Merge_Sort(Arr, 4, 7)      Merge_Sort(Arr, 2, 3)
Merge(Arr, 0, 4, 7)        Merge(Arr, 0, 2, 3)
Hashing

Two search algorithms:
1. Linear Search
2. Binary Search

Searching time:
1. Linear Search – O(n), where n is the number of elements
2. Binary Search – O(log n)

What if we want to perform the search operation in time proportional to O(1)? In other words, is there any way to search an array in constant time, irrespective of its size?
Hashing

There are two solutions to this problem. Let us take an example to explain the first solution. In a small company of 100 employees, each employee is assigned an Emp_ID in the range 0–99. To store the records in an array, each employee's Emp_ID acts as an index into the array where the employee's record will be stored, as shown in the figure.
In this case, we can directly access the record of any employee once we know his Emp_ID, because the array index is the same as the Emp_ID number. But practically, this implementation is hardly feasible.
Hashing
Let us assume that the same company uses a five-digit Emp_ID as the primary key. In this
case, key values will range from 00000 to 99999. If we want to use the same technique as
above, we need an array of size 100,000, of which only 100 elements will be used.

It is impractical to waste so much storage space just to ensure that each employee’s
record is in a unique and predictable location.
Hashing

• Whether we use a two-digit primary key (Emp_ID) or a five-digit key, there are just 100 employees in the company.
• Thus, we will be using only 100 locations in the array.

• Therefore, in order to keep the array size down to the size that we will actually be using (100 elements), another good option is to use just the last two digits of the key to identify each employee.

• For example, the employee with Emp_ID 79439 will be stored in the element of the array with index 39.

• Emp_ID 12345 will have his record stored in the array at the 45th location.

• Requirement → we need to convert a five-digit key number into a two-digit array index. We need a function which will do the conversion.

• In this case, the array is known as a hash table and the function which does the conversion is known as a hash function.
Hash tables
• A hash table is a data structure in which keys are mapped to array positions by a hash function.

• A value stored in a hash table can be searched in O(1) time by using a hash function which generates an address from the key.

• In a hash table, an element with key k is stored at index h(k), not k. The hash function h calculates the index at which the element with key k will be stored. This process of mapping the keys to appropriate locations (or indices) in a hash table is called hashing.
Different Hash Functions

• Division Method
• Multiplication Method
• Mid-Square Method
• Folding Method
Division Method

• Divide x by M and use the remainder obtained.
• Hash function: h(x) = x mod M

Example: Calculate the hash values of keys 1234 and 5642.

Solution: Setting M = 97, the hash values can be calculated as:
h(1234) = 1234 % 97 = 70
h(5642) = 5642 % 97 = 16
Hash Function

• A mathematical formula which, when applied to a key, produces an index that can be used as the key's index in the hash table.

• It should generate integers within the table's range, spreading the keys to reduce collisions.

• No hash function completely eliminates collisions; a good one only reduces their number.

• Properties of a good hash function:

• Low cost – the cost of executing the hash function must be small compared with other approaches (for example, binary search or any other search).

• Determinism – the same hash value must be generated for a given input value.

• Uniformity – it must map the keys as evenly as possible over the table.

Multiplication Method
Step 1: Choose a constant A such that 0 < A < 1.
Step 2: Multiply the key k by A.
Step 3: Extract the fractional part of kA.
Step 4: Multiply the result of Step 3 by the size of the hash table (m).
Hence, the hash function can be given as:
h(k) = ⌊m (kA mod 1)⌋

where (kA mod 1) gives the fractional part of kA and m is the total number of indices in the hash table.

Example: Given a hash table of size 1000, map the key 12345 to an appropriate location in the hash table.
Solution: We will use A = 0.618033, m = 1000, and k = 12345:
h(12345) = ⌊1000 (12345 × 0.618033 mod 1)⌋
         = ⌊1000 (7629.617385 mod 1)⌋
         = ⌊1000 × 0.617385⌋
         = ⌊617.385⌋
         = 617
Mid-Square Method
The mid-square method is a good hash function which works in two steps:
Step 1: Square the value of the key, i.e., find k².
Step 2: Extract the middle r digits of the result obtained in Step 1.

In the mid-square method, the same r digits must be chosen from all the keys. Therefore, the hash function can be given as:

h(k) = s

where s is obtained by selecting r digits from k².

Example: Calculate the hash values for keys 1234 and 5642 using the mid-square method. The hash table has 100 memory locations.

Solution: The hash table has 100 memory locations whose indices vary from 0 to 99. This means that only two digits are needed to map a key to a location in the hash table, so r = 2.

When k = 1234, k² = 1522756, h(1234) = 27
When k = 5642, k² = 31832164, h(5642) = 21

Observe that the 3rd and 4th digits starting from the right are chosen.
Folding Method
The folding method works in the following two steps:

Step 1: Divide the key value into a number of parts, i.e., divide k into parts k1, k2, ..., kn, where each part has the same number of digits except the last part, which may have fewer digits than the other parts.

Step 2: Add the individual parts, i.e., obtain the sum k1 + k2 + ... + kn. The hash value is produced by ignoring the last carry, if any.

Example: Given a hash table of 100 locations, calculate the hash value using the folding method for keys 5678, 321, and 34567.
Solution:
Since there are 100 memory locations to address, we break each key into parts where each part (except the last) contains two digits. The hash values are:
5678  → 56 + 78 = 134 → 34 (last carry ignored)
321   → 32 + 1 = 33
34567 → 34 + 56 + 7 = 97
Collisions

• Collisions occur when the hash function maps two different keys to the same location.

• Obviously, two records cannot be stored in the same location.

• A method used to solve the problem of collisions is called a collision resolution technique.

• The two most popular methods of resolving collisions are:

1. Open addressing
2. Chaining
Collision resolution by open addressing
• Also known as closed hashing.

• If a collision occurs, a new position is computed using a probe sequence and the next record is stored in that position.

• The hash table contains two types of values: sentinel values (e.g., -1) and data values.

• A sentinel value indicates that no value is stored at that location at present; however, a value can be stored there.

• When a key is mapped to a particular memory location, the value it holds is checked.

• If it contains a sentinel value, then the location is free and the data value can be stored in it.

• If the location already has some data value stored in it, then other slots are examined systematically in the forward direction to find a free slot.

• If no free location is found, we have an OVERFLOW condition.

• The process of examining memory locations in the hash table is called probing.
Collision resolution by open addressing

• Two ways of implementing open addressing:


• Linear Probing
• Quadratic Probing
Linear Probing – collision resolution technique

If a value is already stored at the location generated by h(k), then the following hash function is used to resolve the collision:
h(k, i) = [h'(k) + i] mod m

where m is the size of the hash table, h'(k) = (k mod m), and i is the probe number that varies from 0 to m–1.

Linear probing is known for its simplicity. When we have to store a value, we try the slots [h'(k)] mod m, [h'(k) + 1] mod m, [h'(k) + 2] mod m, [h'(k) + 3] mod m, and so on, until a vacant location is found.
Linear Probing – collision resolution technique
Consider a hash table of size 10. Using linear probing, insert the keys 72, 27, 36, 24, 63, 81, 92, and 101 into the table.

Let h'(k) = k mod m, m = 10.

Initially, every location of the hash table holds the sentinel value –1.
Searching a value using Linear Probing
• Apply the hash function to the key.
• The hash function computes the array index where the key should be stored.
• If the key is not at that location, continue the sequential (linear-probe) search until
– the value is found, or
– the search encounters a vacant location in the array, indicating the value is not present, or
– the search terminates because it has examined the whole table without finding the value.
Quadratic Probing
• If a value is already stored at the location generated by h(k), then the following hash function is used to resolve the collision:

h(k, i) = [h'(k) + c1·i + c2·i²] mod m

• where m is the size of the hash table, h'(k) = (k mod m), i is the probe number that varies from 0 to m–1, and c1 and c2 are constants with c2 ≠ 0.

• For a given key k, first the location generated by h'(k) mod m is probed. If the location is free, the value is stored in it; otherwise the subsequent probed locations are offset by amounts that depend in a quadratic manner on the probe number i.
Searching a value using Quadratic Probing
• Apply the hash function to the key.
• The hash function computes the array index where the key should be stored.
• If the key is not at that location, follow the same quadratic probe sequence used during insertion until
– the value is found, or
– the search encounters a vacant location, indicating the value is not present, or
– the search terminates because the whole probe sequence has been examined without finding the value.
Collision resolution by chaining
• In chaining, each location in a hash table stores a pointer to a
linked list that contains all the key values that were hashed to
that location.
Collision resolution by chaining
• While the cost of inserting a key in a chained hash table is O(1), the cost of deleting and searching a value is O(m), where m is the number of elements in the list at that location.

• Searching and deleting take more time because these operations scan the entries of the selected location for the desired key.
Code to initialize hash table
• Structure of the node

typedef struct node_HT
{
    int value;
    struct node_HT *next;
} node;

Code to initialize:
void initialize_hashtable(node *hashtable[], int m)
{
    int i;
    for (i = 0; i < m; i++)
        hashtable[i] = NULL;
}
Code to insert a value into hash table
void insert_value(node *hashtable[], int val)
{
    node *new_node;
    new_node = (node *)malloc(sizeof(node));
    new_node->value = val;
    new_node->next = hashtable[h(val)];   /* link in front of the existing chain */
    hashtable[h(val)] = new_node;
}
Code to search a value in hash table
node *search_value(node *hashtable[], int val)
{
    node *ptr;
    ptr = hashtable[h(val)];
    while ((ptr != NULL) && (ptr->value != val))
        ptr = ptr->next;
    return ptr;    /* NULL if the value is not present */
}
Code to delete a value from hash table
void delete_value(node *hashtable[], int val)
{
    node *save, *ptr;
    save = NULL;
    ptr = hashtable[h(val)];
    while ((ptr != NULL) && (ptr->value != val))
    {
        save = ptr;
        ptr = ptr->next;
    }
    if (ptr != NULL)
    {
        if (save != NULL)
            save->next = ptr->next;
        else                              /* deleting the first node of the chain */
            hashtable[h(val)] = ptr->next;
        free(ptr);
    }
    else
        printf("\n VALUE NOT FOUND");
}
Example of chained hash table
Insert the keys 7, 24, 18, 52, 36, 54, 11, and 23 in a chained hash table of 9 memory
locations.

Use h(k) = k mod m.

In this case, m=9. Initially, the hash table can be given as:

Step 1 Key = 7
h(k) = 7 mod 9
=7
Create a linked list for location 7 and
store the key value 7 in it as its only
node.
Step 2 Key = 24
h(k) = 24 mod 9
=6
Create a linked list for location 6
and store the key value 24 in it as
its only node
Step 3 Key = 18
h(k) = 18 mod 9 = 0
Create a linked list for location 0 and store the key value 18 in it.

Step 4 Key = 52
h(k) = 52 mod 9 = 7
Location 7 already has a chain (containing 7), so append 52 to it.

Step 5 Key = 36
h(k) = 36 mod 9 = 0
Append 36 to the chain at location 0.

Step 6 Key = 54
h(k) = 54 mod 9 = 0
Append 54 to the chain at location 0.

Step 7 Key = 11
h(k) = 11 mod 9 = 2
Create a linked list for location 2 and store 11 in it.

Step 8 Key = 23
h(k) = 23 mod 9 = 5
Create a linked list for location 5 and store 23 in it.

Final table: location 0 → 18 → 36 → 54, location 2 → 11, location 5 → 23, location 6 → 24, location 7 → 7 → 52; all other locations are empty.