Chapter 2
2.1. Searching
Searching is the process of looking for a specific element in a list of items, or determining that the item is not in the list. There are two simple searching algorithms: linear (sequential) search and binary search.
2.1.1. Linear Search
Loop through the array, starting at the first element, until the value of the target matches one of the array elements; if the end of the array is reached without a match, the target is not in the list.
The time is proportional to the size of the input (n), and we call this time complexity O(n).
Example Implementation:
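A minimal sketch in C of the loop just described; the function name linear_search, the int element type, and the convention of returning -1 when the target is absent are illustrative assumptions.

/* Return the index of target in list[0..n-1], or -1 if it is not present. */
int linear_search(const int list[], int n, int target)
{
    int i;
    for (i = 0; i < n; i++) {
        if (list[i] == target)
            return i;       // found: i+1 elements were examined
    }
    return -1;              // not found: all n elements were examined
}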
2.1.2. Binary Search
Binary search requires that the array is already sorted.
1. Locate the midpoint of the array to search.
2. Determine whether the target is in the lower half or the upper half of the array.
o If it is in the lower half, make that half the array to search.
o If it is in the upper half, make that half the array to search.
3. Loop back to step 1 until the target is found at a midpoint, or until the size of the array to search is one and that element does not match, in which case return –1.
The computational time for this algorithm is proportional to log₂ n. Therefore, the time complexity is O(log n).
Example Implementation:
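A minimal C sketch of the steps above, assuming the array is already sorted in ascending order; the name binary_search, the int element type, and the -1 return value for a missing target are illustrative assumptions.

/* Return the index of target in the sorted array list[0..n-1], or -1 if absent. */
int binary_search(const int list[], int n, int target)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;   // midpoint of the current search range
        if (list[mid] == target)
            return mid;                     // target found
        else if (target < list[mid])
            high = mid - 1;                 // search the lower half
        else
            low = mid + 1;                  // search the upper half
    }
    return -1;                              // range is empty: target is not in the list
}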
2.2. Sorting
There are three simple sorting algorithms:
• Insertion Sort
• Selection Sort
• Bubble Sort
2.2.1. Insertion Sort
The insertion sort works just like its name suggests - it inserts each item into its proper place in the final list. The simplest implementation requires two list structures: the source list and the list into which sorted items are inserted. To save memory, most implementations instead use an in-place sort that works by moving the current item past the already sorted items, repeatedly swapping it with the preceding item until it is in place.
It is the most intuitive type of sorting algorithm. The approach is the same one you use to sort a hand of playing cards: you pick up a card, start at the beginning of your hand, find the place where the new card belongs, insert it, and move all the others up one place.
Basic Idea:
Find the location for an element, move all the others up one place, and insert the element.
1. The leftmost value can be considered sorted relative to itself, so nothing needs to be done with it.
2. Check whether the second value is smaller than the first one. If it is, swap these two values. The first two values are now relatively sorted.
3. Next, insert the third value into the relatively sorted portion so that, after insertion, the portion will still be relatively sorted.
4. Remove the third value first. Slide the second value up to make room for the insertion. Insert the value in the appropriate position.
5. Now the first three values are relatively sorted.
6. Do the same for the remaining items in the list.
Implementation
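A minimal in-place C sketch of the steps above; the name insertion_sort and the int element type are illustrative assumptions.

/* Sort list[0..n-1] in ascending order using the in-place insertion sort. */
void insertion_sort(int list[], int n)
{
    int i, j, key;
    for (i = 1; i < n; i++) {
        key = list[i];              // remove the current value
        j = i - 1;
        while (j >= 0 && list[j] > key) {
            list[j + 1] = list[j];  // slide larger values up one place
            j--;
        }
        list[j + 1] = key;          // insert the value in its proper position
    }
}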
Analysis
Comparisons: 1 + 2 + 3 + … + (n-1) = O(n²)
Moves: 1 + 2 + 3 + … + (n-1) = O(n²)
Space: In-place algorithm
2.2.2. Selection Sort
Basic Idea:
Find the smallest element in the unsorted portion of the list and swap it into the next position of the sorted portion; repeat until the whole list is sorted.
Implementation:
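A minimal in-place C sketch of this idea; the name selection_sort and the int element type are illustrative assumptions.

/* Sort list[0..n-1] in ascending order using the in-place selection sort. */
void selection_sort(int list[], int n)
{
    int i, j, min, temp;
    for (i = 0; i < n - 1; i++) {
        min = i;                    // index of the smallest value seen so far
        for (j = i + 1; j < n; j++) {
            if (list[j] < list[min])
                min = j;
        }
        if (min != i) {             // at most one swap per pass: n-1 swaps in total
            temp = list[i];
            list[i] = list[min];
            list[min] = temp;
        }
    }
}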
Analysis
Comparisons: (n-1) + (n-2) + … + 1 = O(n²)
Moves: n = O(n)
Space: In-place algorithm
2.2.3. Bubble Sort
Basic Idea:
Loop through the array from i = 0 to n-1; on each pass, compare adjacent elements and swap them if they are out of order, so that the smallest remaining element moves into position i.
Implementation:
void bubble_sort(int list[], int n)
{
    int i, j, temp;
    for (i = 0; i < n; i++) {          // after pass i, list[0..i] is in its final position
        for (j = n - 1; j > i; j--) {  // walk from the end of the array toward position i
            if (list[j] < list[j-1]) {
                temp = list[j];
                list[j] = list[j-1];   // swap adjacent elements
                list[j-1] = temp;
            }
        }                              // end of inner loop
    }                                  // end of outer loop
}                                      // end of bubble_sort
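As a quick check, a short driver (the sample values below are made up for illustration) might call bubble_sort like this:

#include <stdio.h>

int main(void)
{
    int data[] = {5, 1, 4, 2, 8};            // hypothetical unsorted input
    int n = sizeof(data) / sizeof(data[0]);
    int i;

    bubble_sort(data, n);                    // sort in ascending order
    for (i = 0; i < n; i++)
        printf("%d ", data[i]);              // prints: 1 2 4 5 8
    printf("\n");
    return 0;
}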
Analysis
Comparisons: (n-1) + (n-2) + … + 1 = O(n²)
Swaps (worst case): (n-1) + (n-2) + … + 1 = O(n²)
Space: In-place algorithm
General Comments
Each of these algorithms requires n-1 passes: each pass places one item in its correct place. The i-th pass makes either i or n-i comparisons and moves. So the total number of operations is 1 + 2 + … + (n-1) = n(n-1)/2, or O(n²). Thus these algorithms are only suitable for small problems, where their simple code makes them faster than the more complex code of the O(n log n) algorithms. As a rule of thumb, expect to find an O(n log n) algorithm faster for n > 10 - but the exact value depends very much on individual machines!
Empirically, the insertion sort is known to be over twice as fast as the bubble sort and is just as easy to implement as the selection sort. In short, there really isn't any reason to use the selection sort - use the insertion sort instead.
If you really want to use the selection sort for some reason, try to avoid sorting lists of more than about 1,000 items with it, or repeatedly sorting lists of more than a couple of hundred items.