
Computer Algorithm (3150703)

UNIT 1

Presented by :
Prof Ghanshyam I Prajapati
Dept of CE & IT
SVMIT-Bharuch
Contents
• Introduction to Algorithms
• Example of Bubble Sort
• Three cases of Algorithm
• Amortized Analysis
• Finding time approximation for a given
code/program/algorithm
• Asymptotic Notation
• Sorting Algorithms
• Sorting in Linear Time (Bucket, Radix and
Counting sort)
2
Algorithms

• The word algorithm comes from the name of the Persian
  mathematician Abu Ja’far Muhammad ibn Musa
  al-Khwarizmi.

• In computer science, the word refers to a precise method
  usable by a computer for the solution of a problem. The
  statement of the problem specifies, in general terms, the
  desired input/output relationship.

• For example, sorting a given sequence of numbers into
  nondecreasing order provides fertile ground for
  introducing many standard design techniques and
  analysis tools.
Some definitions of Algorithm

• An algorithm is a collection of sequential computational
  steps for solving a given problem.

• An algorithm is any well-defined computational procedure
  that takes some value, or set of values, as input and
  produces some value, or set of values, as output.

• An algorithm is a sequence of computational steps that
  transform the input into the output.

4
Where We're Going
• Learn general approaches to algorithm design
– Divide and conquer
– Greedy method
– Dynamic Programming
– Basic Search and Traversal Technique
– Graph Theory
– Backtracking
– Branch and Bound
– String Matching
– NP Problem
The problem of sorting
Bubble Sort
Bubble Sort is the simplest sorting algorithm. It works by
repeatedly swapping adjacent elements that are in the
wrong order.

There are mainly three flavors of bubble sort:

1. Simple

2. Semi-optimized

3. Optimized

7
(1) Simple Bubble sort:

First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, the algorithm compares
the first two elements, and swaps them since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements
are already in order (8 > 5), the algorithm does not swap them.

8
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

Now the array is already sorted, but our algorithm does not
know whether it is done. The algorithm needs one whole
pass without any swap to know it is sorted.

9
Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
Fourth Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

This takes n-1 passes, and each pass takes n-1 comparisons,
where n is the number of elements. So the total number of
comparisons is (n-1)².
10
(2) Semi-optimized Bubble sort:

First Pass:
( 5 1 4 2 8 ) –> ( 1 5 4 2 8 ), Here, the algorithm compares
the first two elements, and swaps them since 5 > 1.
( 1 5 4 2 8 ) –> ( 1 4 5 2 8 ), Swap since 5 > 4
( 1 4 5 2 8 ) –> ( 1 4 2 5 8 ), Swap since 5 > 2
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 ), Now, since these elements
are already in order (8 > 5), the algorithm does not swap them.

This pass takes 4 comparisons.

11
Second Pass:
( 1 4 2 5 8 ) –> ( 1 4 2 5 8 )
( 1 4 2 5 8 ) –> ( 1 2 4 5 8 ), Swap since 4 > 2
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

This pass takes 3 comparisons.

12
Third Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

This pass takes only two comparisons.

Fourth Pass:
( 1 2 4 5 8 ) –> ( 1 2 4 5 8 )

Last pass takes only one comparison.

So the total number of comparisons is n(n-1)/2; for n = 5
the answer is 10.

13
(3) Optimized Bubble sort:

First Pass:
( 1 2 5 8 9 ) –> ( 1 2 5 8 9 ),
( 1 2 5 8 9 ) –> ( 1 2 5 8 9 ),
( 1 2 5 8 9 ) –> ( 1 2 5 8 9 ),
( 1 2 5 8 9 ) –> ( 1 2 5 8 9 ).

No exchange is made in this pass, so optimized bubble
sort takes only one pass, with four comparisons.

14
Simple Bubble Sort Algorithm
Algorithm SimpleBubbleSort (A, N)
where A is an array of N elements

Step-1: Read A with N elements

Step-2: for i = 1 to N-1
            for j = 1 to N-1
                if (A[j] > A[j+1]) then
                    temp := A[j]
                    A[j] := A[j+1]
                    A[j+1] := temp
                (end of if structure)
            (end of inner for loop)
        (end of outer for loop)
Step-3: Print the sorted sequence (values of array A)
Step-4: Stop / Return / End of algorithm

15
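As a sanity check, the pseudocode above can be sketched in Python (a hedged translation with 0-based indexing instead of the slide's 1-based; the function name is my own):

```python
def simple_bubble_sort(a):
    """Simple bubble sort: always n-1 passes of n-1 comparisons each."""
    n = len(a)
    for i in range(n - 1):          # n-1 passes
        for j in range(n - 1):      # n-1 comparisons per pass
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(simple_bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```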
Semi-Optimized Bubble Sort Algorithm

Algorithm SemiOptimizedBubbleSort (A, N)


where A is an array of N elements

16
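The slide leaves the body out; a plausible Python sketch, assuming the usual semi-optimization (each pass fixes one element at the end, so the inner loop shrinks, giving (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons as counted above):

```python
def semi_optimized_bubble_sort(a):
    """Bubble sort whose inner loop shrinks by one each pass."""
    n = len(a)
    for i in range(n - 1):
        # After pass i, the last i elements are already in place.
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(semi_optimized_bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```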
Optimized Bubble Sort Algorithm

Algorithm OptimizedBubbleSort (A, N)


where A is an array of N elements

17
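Again the slide omits the body; a hedged Python sketch of the standard optimization (a swapped flag stops the sort after the first pass with no exchange, which is why the already-sorted input above takes only one pass):

```python
def optimized_bubble_sort(a):
    """Bubble sort that stops after a full pass with no swaps."""
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:      # no exchange in this pass: already sorted
            break
    return a

print(optimized_bubble_sort([1, 2, 5, 8, 9]))  # [1, 2, 5, 8, 9], one pass
```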
Insertion Sort
Example of Insertion Sort
Selection Sort

50 10 30 40 20

10 50 30 40 20

10 20 30 40 50

10 20 30 40 50

10 20 30 40 50

30
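The selection-sort passes shown above can be reproduced with this short Python sketch (0-based indexing; function name my own):

```python
def selection_sort(a):
    """Selection sort: each pass moves the minimum of the
    unsorted suffix to its final position."""
    n = len(a)
    for i in range(n - 1):
        min_idx = i
        for j in range(i + 1, n):   # find the minimum in a[i:]
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a

print(selection_sort([50, 10, 30, 40, 20]))  # [10, 20, 30, 40, 50]
```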
Analysis of algorithms

What’s more important than performance?


• modularity
• correctness
• maintainability
• functionality
• robustness
• user-friendliness
• programmer time
• simplicity
• extensibility
• reliability
Analysis of algorithms
Why study algorithms and performance?

• Algorithms help us to understand scalability.

• Performance often draws the line between what is feasible


and what is impossible.

• Algorithmic mathematics provides a language for talking


about program behavior.

• The lessons of program performance generalize to other


computing resources.

• Speed is fun !
Running Time

• The running time depends on the input: an already
  sorted sequence is easier to sort.

• Parameterize the running time by the size of the


input, since short sequences are easier to sort than
long ones.

• Generally, we seek upper bounds on the running


time, because everybody likes a guarantee.
Kinds of analyses

Worst-case: (usually)
• T(n) = maximum time of algorithm on any input of
size n.

Average-case: (sometimes)
• T(n) = expected time of algorithm over all inputs of
size n.
• Need assumption of statistical distribution of inputs.

Best-case:
• Cheat with a slow algorithm that works fast on some
input.
An example

Pseudocode for insertion sort (INSERTION-SORT).

INSERTION-SORT(A)
1  for j ← 2 to length[A]
2      do key ← A[j]
3         ▷ Insert A[j] into the sorted sequence A[1 .. j-1].
4         i ← j - 1
5         while i > 0 and A[i] > key
6             do A[i+1] ← A[i]
7                i ← i - 1
8         A[i+1] ← key
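A direct, runnable Python translation of this pseudocode (0-based indexing, so the outer loop starts at index 1 rather than 2):

```python
def insertion_sort(A):
    """Insertion sort: grow a sorted prefix, inserting each key
    into place by shifting larger elements right."""
    for j in range(1, len(A)):
        key = A[j]
        i = j - 1
        while i >= 0 and A[i] > key:
            A[i + 1] = A[i]     # shift larger element right
            i -= 1
        A[i + 1] = key          # drop key into the gap
    return A

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]
```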
Analysis of INSERTION-SORT (contd.)

INSERTION-SORT(A)                          cost   times
1  for j ← 2 to length[A]                  c1     n
2      do key ← A[j]                       c2     n-1
3         ▷ Insert A[j] into the sorted
          sequence A[1 .. j-1]             0      n-1
4         i ← j - 1                        c4     n-1
5         while i > 0 and A[i] > key       c5     Σ_{j=2..n} t_j
6             do A[i+1] ← A[i]             c6     Σ_{j=2..n} (t_j - 1)
7                i ← i - 1                 c7     Σ_{j=2..n} (t_j - 1)
8         A[i+1] ← key                     c8     n-1
Analysis of INSERTION-SORT (contd.)

The total running time is

T(n) = c1·n + c2(n-1) + c4(n-1) + c5 Σ_{j=2..n} t_j
     + c6 Σ_{j=2..n} (t_j - 1) + c7 Σ_{j=2..n} (t_j - 1) + c8(n-1).
Analysis of INSERTION-SORT (contd.)

The best case: the array is already sorted
(t_j = 1 for j = 2, 3, ..., n), so

T(n) = c1·n + c2(n-1) + c4(n-1) + c5(n-1) + c8(n-1)
     = (c1 + c2 + c4 + c5 + c8)·n - (c2 + c4 + c5 + c8).
Analysis of INSERTION-SORT (contd.)

• The worst case: the array is reverse sorted
  (t_j = j for j = 2, 3, ..., n). Using

  Σ_{j=1..n} j = n(n+1)/2

T(n) = c1·n + c2(n-1) + c4(n-1) + c5(n(n+1)/2 - 1)
     + c6(n(n-1)/2) + c7(n(n-1)/2) + c8(n-1)
     = (c5/2 + c6/2 + c7/2)·n²
     + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)·n
     - (c2 + c4 + c5 + c8)

i.e. T(n) = a·n² + b·n + c, a quadratic in n.
Analysis of Algorithms
• An algorithm is a finite set of precise instructions
for performing a computation or for solving a
problem.
• What is the goal of analysis of algorithms?
– To compare algorithms mainly in terms of running
time but also in terms of other factors (e.g., memory
requirements, programmer's effort etc.)
• What do we mean by running time analysis?
– Determine how running time increases as the size
of the problem increases.

40
Input Size
• Input size (number of elements in the input)
– size of an array

– polynomial degree

– # of elements in a matrix

– # of bits in the binary representation of the input

– vertices and edges in a graph

41
Types of Analysis
• Worst case
– Provides an upper bound on running time
– An absolute guarantee that the algorithm would not run longer,
no matter what the inputs are
• Best case
– Provides a lower bound on running time
– Input is the one for which the algorithm runs the fastest

Lower Bound ≤ Running Time ≤ Upper Bound


• Average case
– Provides a prediction about the running time
– Assumes that the input is random
42
How do we compare algorithms?
• We need to define a number of objective
measures.

(1) Compare execution times?


Not good: times are specific to a particular
computer !!

(2) Count the number of statements executed?

    Not good: the number of statements varies with
    the programming language as well as the
    style of the individual programmer.
43
Ideal Solution

• Express running time as a function of the


input size n (i.e., f(n)).
• Compare different functions corresponding
to running times.
• Such an analysis is independent of
machine time, programming style, etc.

44
Example
• Associate a "cost" with each statement.
• Find the "total cost" by finding the total number of times
each statement is executed.
Algorithm 1                Cost
arr[0] = 0;                c1
arr[1] = 0;                c1
arr[2] = 0;                c1
...
arr[N-1] = 0;              c1
                           -----------
                           c1 + c1 + ... + c1 = c1 × N

Algorithm 2                Cost
for(i=0; i<N; i++)         c2
    arr[i] = 0;            c1
                           -------------
                           (N+1) × c2 + N × c1 = (c2 + c1) × N + c2
45
Another Example

Algorithm 3                Cost
sum = 0;                   c1
for(i=0; i<N; i++)         c2
    for(j=0; j<N; j++)     c2
        sum += arr[i][j];  c3
                           ------------
                           c1 + c2 × (N+1) + c2 × N × (N+1) + c3 × N²

46
Asymptotic Analysis
• To compare two algorithms with running
times f(n) and g(n), we need a rough
measure that characterizes how fast
each function grows.
• Hint: use rate of growth
• Compare functions in the limit, that is,
asymptotically!
(i.e., for large values of n)

47
Rate of Growth
• Consider the example of buying elephants and
goldfish:
Cost: cost_of_elephants + cost_of_goldfish
Cost ~ cost_of_elephants (approximation)
• The low order terms in a function are relatively
  insignificant for large n:

      n⁴ + 100n² + 10n + 50  ~  n⁴

  i.e., we say that n⁴ + 100n² + 10n + 50 and n⁴
  have the same rate of growth

48
O-notation

• For a given function g(n), we denote by O(g(n)) the set
  of functions

  O(g(n)) = { f(n) : there exist positive constants c and n0
              s.t. 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

• We use O-notation to give an asymptotic upper bound on
  a function, to within a constant factor.
• f(n) = O(g(n)) means that there exists some constant c
  s.t. f(n) is always ≤ c·g(n) for large enough n.
Ω (Omega) notation

• For a given function g(n), we denote by Ω(g(n)) the
  set of functions

  Ω(g(n)) = { f(n) : there exist positive constants c and n0
              s.t. 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }

• We use Ω-notation to give an asymptotic lower bound on
  a function, to within a constant factor.
• f(n) = Ω(g(n)) means that there exists some constant c
  s.t. f(n) is always ≥ c·g(n) for large enough n.
Θ (Theta) notation
• For a given function g(n), we denote by Θ(g(n)) the set
  of functions

  Θ(g(n)) = { f(n) : there exist positive constants c1, c2, and n0
              s.t. 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }

• A function f(n) belongs to the set Θ(g(n)) if there exist
  positive constants c1 and c2 such that it can be
  "sandwiched" between c1·g(n) and c2·g(n) for sufficiently
  large n.
• f(n) = Θ(g(n)) means that there exist constants c1 and c2
  s.t. c1·g(n) ≤ f(n) ≤ c2·g(n) for large enough n.
Asymptotic Notation

• O notation: asymptotic “less than”:

– f(n)=O(g(n)) implies: f(n) “≤” g(n)

• Ω notation: asymptotic “greater than”:

– f(n)= Ω (g(n)) implies: f(n) “≥” g(n)

• Θ notation: asymptotic “equality”:

– f(n)= Θ (g(n)) implies: f(n) “=” g(n)

52
Big-O Notation

• We say fA(n) = 30n + 8 is order n, or O(n):
  it is, at most, roughly proportional to n.

• fB(n) = n² + 1 is order n², or O(n²):
  it is, at most, roughly proportional to n².

• In general, an order-n² function grows faster
  than any order-n function.
53
Visualizing Orders of Growth
• On a graph, as you go to the right, the faster-growing
  function eventually becomes larger.
  [Plot: value of function vs. increasing n, showing
  fA(n) = 30n + 8 and fB(n) = n² + 1; fB eventually
  overtakes fA.]

54
More Examples …

• n⁴ + 100n² + 10n + 50 is O(n⁴)
• 10n³ + 2n² is O(n³)
• n³ - n² is O(n³)
• constants
  – 10 is O(1)
  – 1273 is O(1)
55
Back to Our Example

Algorithm 1                Cost
arr[0] = 0;                c1
arr[1] = 0;                c1
arr[2] = 0;                c1
...
arr[N-1] = 0;              c1
                           c1 + c1 + ... + c1 = c1 × N

Algorithm 2                Cost
for(i=0; i<N; i++)         c2
    arr[i] = 0;            c1
                           (N+1) × c2 + N × c1 = (c2 + c1) × N + c2

• Both algorithms are of the same order: O(N)

56
Example (cont’d)

Algorithm 3                Cost
sum = 0;                   c1
for(i=0; i<N; i++)         c2
    for(j=0; j<N; j++)     c2
        sum += arr[i][j];  c3
                           ------------
c1 + c2 × (N+1) + c2 × N × (N+1) + c3 × N² = O(N²)

57
Asymptotic notations
• O-notation

58
Big-O Visualization

O(g(n)) is the set of functions with smaller or same
order of growth as g(n)

59
Examples

– 2n² = O(n³):  2n² ≤ c·n³ ⇒ 2 ≤ c·n ⇒ c = 1 and n0 = 2

– n² = O(n²):  n² ≤ c·n² ⇒ c ≥ 1 ⇒ c = 1 and n0 = 1

– 1000n² + 1000n = O(n²):
  1000n² + 1000n ≤ 1000n² + n² = 1001n² ⇒ c = 1001 and n0 = 1000

– n = O(n²):  n ≤ c·n² ⇒ c·n ≥ 1 ⇒ c = 1 and n0 = 1
60
More Examples
• Show that 30n+8 is O(n).
– Show ∃c,n0: 30n+8 ≤ cn, ∀n>n0 .
• Let c=31, n0=8. Assume n>n0=8. Then
cn = 31n = 30n + n > 30n+8, so 30n+8 < cn.

61
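The claimed constants can be checked numerically (a finite spot-check over a range of n, not a proof):

```python
# Verify 30n + 8 <= c*n for c = 31 and all n > n0 = 8 (up to a bound).
c, n0 = 31, 8
for n in range(n0 + 1, 10_000):
    # 31n = 30n + n > 30n + 8 whenever n > 8
    assert 30 * n + 8 < c * n

# At n = n0 = 8 the bound holds only with equality: 30*8 + 8 == 31*8.
assert 30 * 8 + 8 == c * 8
```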
Big-O example, graphically
• Note 30n+8 isn’t less than n anywhere (n > 0).
• It isn’t even less than 31n everywhere.
• But it is less than 31n everywhere to the right
  of n = 8, so 30n+8 ∈ O(n).
  [Plot: cn = 31n and 30n+8 against increasing n,
  with the crossover at n = n0 = 8.]

62
No Uniqueness
• There is no unique set of values for n0 and c in proving the
  asymptotic bounds

• Prove that 100n + 5 = O(n²)

  – 100n + 5 ≤ 100n + n = 101n ≤ 101n²  for all n ≥ 5

    n0 = 5 and c = 101 is a solution

  – 100n + 5 ≤ 100n + 5n = 105n ≤ 105n²  for all n ≥ 1

    n0 = 1 and c = 105 is also a solution

Must find SOME constants c and n0 that satisfy the asymptotic notation relation
63
Asymptotic notations (cont.)
• Ω-notation

Ω(g(n)) is the set of functions with larger or same
order of growth as g(n)

64
Examples
– 5n² = Ω(n):
  ∃ c, n0 such that 0 ≤ c·n ≤ 5n² ⇒ c = 1 and n0 = 1

– 100n + 5 ≠ Ω(n²):
  assume ∃ c, n0 such that 0 ≤ c·n² ≤ 100n + 5
  100n + 5 ≤ 100n + 5n = 105n  (∀ n ≥ 1)
  c·n² ≤ 105n ⇒ n(c·n – 105) ≤ 0
  Since n is positive ⇒ c·n – 105 ≤ 0 ⇒ n ≤ 105/c
  ⇒ contradiction: n cannot be bounded above by a constant

– n = Ω(2n),  n³ = Ω(n²),  n = Ω(log n)
65
Asymptotic notations (cont.)
• Θ-notation

Θ(g(n)) is the set of functions with the same order of
growth as g(n)

66
Examples
– n²/2 – n/2 = Θ(n²)

  • ½n² – ½n ≤ ½n²  for all n ≥ 0  ⇒ c2 = ½

  • ½n² – ½n ≥ ½n² – ½n·½n = ¼n²  for all n ≥ 2  ⇒ c1 = ¼

– n ≠ Θ(n²):  c1·n² ≤ n ≤ c2·n²

  ⇒ the left inequality only holds for: n ≤ 1/c1
67
Examples
– 6n³ ≠ Θ(n²):  c1·n² ≤ 6n³ ≤ c2·n²

  ⇒ the right inequality only holds for: n ≤ c2/6

– n ≠ Θ(log n):  c1·log n ≤ n ≤ c2·log n

  ⇒ requires c2 ≥ n/log n for all n ≥ n0 – impossible
68
Relations Between Different Sets
• Subset relations between order-of-growth sets (of
  functions R → R): Θ(f) = O(f) ∩ Ω(f), and f itself
  lies in Θ(f).

69
Common orders of magnitude

70
Logarithms and properties
• In algorithm analysis we often use the notation “log n”
  without specifying the base.

  Binary logarithm:   lg n = log₂ n
  Natural logarithm:  ln n = logₑ n
  lgᵏ n = (lg n)ᵏ
  lg lg n = lg(lg n)

  log xʸ = y·log x
  log(xy) = log x + log y
  log(x/y) = log x – log y
  a^(log_b x) = x^(log_b a)
  log_b x = log_a x / log_a b

72
More Examples
• For each of the following pairs of functions, either f(n) is
  O(g(n)), f(n) is Ω(g(n)), or f(n) = Θ(g(n)). Determine
  which relationship is correct.
  – f(n) = log n²;       g(n) = log n + 5     f(n) = Θ(g(n))
  – f(n) = n;            g(n) = log n²        f(n) = Ω(g(n))
  – f(n) = log log n;    g(n) = log n         f(n) = O(g(n))
  – f(n) = n;            g(n) = log² n        f(n) = Ω(g(n))
  – f(n) = n log n + n;  g(n) = log n         f(n) = Ω(g(n))
  – f(n) = 10;           g(n) = log 10        f(n) = Θ(g(n))
  – f(n) = 2ⁿ;           g(n) = 10n²          f(n) = Ω(g(n))
  – f(n) = 2ⁿ;           g(n) = 3ⁿ            f(n) = O(g(n))
73
Properties
• Theorem:
f(n) = Θ(g(n)) ⇔ f = O(g(n)) and f = Ω(g(n))
• Transitivity:
– f(n) = Θ(g(n)) and g(n) = Θ(h(n)) ⇒ f(n) = Θ(h(n))
– Same for O and Ω
• Reflexivity:
– f(n) = Θ(f(n))
– Same for O and Ω
• Symmetry:
– f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n))
• Transpose symmetry:
– f(n) = O(g(n)) if and only if g(n) = Ω(f(n))
74
Asymptotic Notations in Equations
• On the right-hand side
– Θ(n2) stands for some anonymous function in Θ(n2)
2n2 + 3n + 1 = 2n2 + Θ(n) means:
There exists a function f(n) ∈ Θ(n) such that
2n2 + 3n + 1 = 2n2 + f(n)
• On the left-hand side
2n2 + Θ(n) = Θ(n2)
No matter how the anonymous function is chosen on
the left-hand side, there is a way to choose the
anonymous function on the right-hand side to make
the equation valid.

75
Common Summations

• Arithmetic series:  Σ_{k=1..n} k = 1 + 2 + ... + n = n(n+1)/2

• Geometric series:   Σ_{k=0..n} xᵏ = 1 + x + x² + ... + xⁿ
                      = (x^(n+1) – 1)/(x – 1)    (x ≠ 1)

  – Special case, |x| < 1:  Σ_{k=0..∞} xᵏ = 1/(1 – x)

• Harmonic series:    Σ_{k=1..n} 1/k = 1 + 1/2 + ... + 1/n ≈ ln n

• Other important formulas:
                      Σ_{k=1..n} lg k ≈ n·lg n
                      Σ_{k=1..n} kᵖ = 1ᵖ + 2ᵖ + ... + nᵖ ≈ n^(p+1)/(p+1)

76
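A few of these identities can be spot-checked numerically in Python (finite checks only; the harmonic check uses the fact that Hₙ − ln n stays below Euler's constant ≈ 0.58):

```python
import math

n = 50

# Arithmetic series: 1 + 2 + ... + n == n(n+1)/2
assert sum(range(1, n + 1)) == n * (n + 1) // 2

# Geometric series with x = 3: sum of 3^k == (3^(n+1) - 1) / (3 - 1)
x = 3
assert sum(x**k for k in range(n + 1)) == (x**(n + 1) - 1) // (x - 1)

# Harmonic series: H_n is close to ln n (difference tends to ~0.577)
harmonic = sum(1 / k for k in range(1, n + 1))
assert abs(harmonic - math.log(n)) < 1.0
```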
Mathematical Induction
• A powerful, rigorous technique for proving that a
statement S(n) is true for every natural number n,
no matter how large.

• Proof:

– Basis step: prove that the statement is true for n = 1

– Inductive step: assume that S(n) is true and prove that

S(n+1) is true for all n ≥ 1

• Find case n “within” case n+1


77
Example
• Prove that: 2n + 1 ≤ 2ⁿ for all n ≥ 3
• Basis step:
  – n = 3:  2·3 + 1 ≤ 2³  ⇔  7 ≤ 8  TRUE
• Inductive step:
  – Assume the inequality is true for n, and prove it for (n+1):
    given 2n + 1 ≤ 2ⁿ, we must prove: 2(n+1) + 1 ≤ 2ⁿ⁺¹
    2(n+1) + 1 = (2n+1) + 2 ≤ 2ⁿ + 2 ≤ 2ⁿ + 2ⁿ = 2ⁿ⁺¹,
    since 2 ≤ 2ⁿ for n ≥ 1

78
Sorting in Linear Time

• Counting Sort

• Radix Sort

• Bucket sort

79
Radix Sort
• Input: 34, 12, 56, 45, 10, 22

Pass 1 – distribute by units digit:
  bucket 0: 10
  bucket 2: 12, 22
  bucket 4: 34
  bucket 5: 45
  bucket 6: 56
Collecting in bucket order: 10, 12, 22, 34, 45, 56

Pass 2 – distribute by tens digit:
  bucket 1: 10, 12
  bucket 2: 22
  bucket 3: 34
  bucket 4: 45
  bucket 5: 56
Collecting in bucket order: 10, 12, 22, 34, 45, 56

81
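The two passes above are exactly what a least-significant-digit radix sort does; a Python sketch (function name mine):

```python
def radix_sort(a, base=10):
    """LSD radix sort for non-negative integers: one bucket per
    digit value, repeated for each digit position."""
    max_val = max(a)
    exp = 1
    while max_val // exp > 0:
        buckets = [[] for _ in range(base)]
        for x in a:
            buckets[(x // exp) % base].append(x)  # distribute by digit
        a = [x for b in buckets for x in b]       # collect in order
        exp *= base
    return a

print(radix_sort([34, 12, 56, 45, 10, 22]))  # [10, 12, 22, 34, 45, 56]
```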
Bucket Sort
• Bucket sort is mainly useful when the input is
  uniformly distributed over a range.

• For input uniformly distributed over the range, its

  expected running time is O(n), i.e., linear time.

• Example: all the following numbers are uniformly

  distributed over [0, 1):
  0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68

82
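A Python sketch of bucket sort for the example above (assumes inputs in [0, 1); one bucket per element, each small bucket sorted individually):

```python
def bucket_sort(a):
    """Bucket sort for values uniformly distributed over [0, 1)."""
    n = len(a)
    buckets = [[] for _ in range(n)]
    for x in a:
        buckets[int(n * x)].append(x)  # value x lands in bucket floor(n*x)
    for b in buckets:
        b.sort()                       # each bucket is tiny on average
    return [x for b in buckets for x in b]

data = [0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]
print(bucket_sort(data))
```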
Counting Sort

85
Input array (indices 1–8):
A = 2 5 3 0 2 3 0 3

C (indices 0–5), initialized:
C = 0 0 0 0 0 0

After counting occurrences of each value:
C = 2 0 2 3 0 1

After running sums (C[i] = number of elements ≤ i):
C = 2 2 4 7 7 8

86

Placing the elements of A (from right to left) into B,
decrementing the matching count as each one is placed:

C = 0 2 2 4 7 7   (counts after all placements)

Sorted output (indices 1–8):
B = 0 0 2 2 3 3 3 5

87
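The C and B arrays above come from the standard counting-sort procedure; a runnable Python version (0-based output array, unlike the slide's 1-based indices):

```python
def counting_sort(A, k):
    """Counting sort for integers in the range 0..k."""
    C = [0] * (k + 1)
    for x in A:                  # C[v] = number of occurrences of v
        C[x] += 1
    for v in range(1, k + 1):    # running sums: C[v] = # of elements <= v
        C[v] += C[v - 1]
    B = [0] * len(A)
    for x in reversed(A):        # right-to-left keeps the sort stable
        C[x] -= 1
        B[C[x]] = x
    return B

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))  # [0, 0, 2, 2, 3, 3, 3, 5]
```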
Heap Sort
• A heap is a data structure that stores a collection
of objects (with keys), and has the following
properties:

- Complete Binary tree


- Heap Order

89
The heap sort algorithm has two main steps:

i.  The first major step involves transforming the

    complete tree into a heap.

ii. The second major step is to perform the actual

    sort by repeatedly extracting the largest (or smallest)
    element from the root and transforming the remaining
    tree into a heap.

90
• Types of heap :
(1) Max Heap, and
(2) Min Heap

91
Max Heap Example

19

12 16

1 4 7

19 12 16 1 4 7

Array A
Min heap example

         1

     4       16

   7   12  19

1 4 16 7 12 19

Array A
(1) Max heap:

Max-heap definition:
A max heap is a complete binary tree in which the value in
each internal node is greater than or equal to the values in
the children of that node.

Max-heap property:
The key of a node is ≥ the keys of its children.
Max heap Operations
 A heap can be stored as an array A.
 Root of tree is A[1]
 Left child of A[i] = A[2i]
 Right child of A[i] = A[2i + 1]
 Parent of A[i] = A[⌊i/2⌋]

95
Build Max-heap
Heapify() Example

16

4 10

14 7 9 3

2 8 1

A= 16 4 10 14 7 9 3 2 8 1
Heapify() Example

16

14 10

4 7 9 3

2 8 1

A= 16 14 10 4 7 9 3 2 8 1
Heapify() Example

16

14 10

8 7 9 3

2 4 1

A= 16 14 10 8 7 9 3 2 4 1
Heap-Sort: sorting strategy

1. Build a Max Heap from the unordered array;
2. Find the maximum element A[1];
3. Swap elements A[n] and A[1]:
   now the max element is at the end of the array!
4. Discard node n from the heap
   (by decrementing the heap-size variable).
5. The new root may violate the max heap property, but its
   children are max heaps. Run max_heapify to fix this.
6. Go to Step 2 unless the heap is empty.
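The steps above can be sketched in Python (0-based indexing, so the children of node i are 2i+1 and 2i+2 rather than the slide's 1-based 2i and 2i+1; function names mine):

```python
def max_heapify(a, n, i):
    """Sift a[i] down within a[0:n] so the subtree rooted at i
    satisfies the max-heap property."""
    largest, l, r = i, 2 * i + 1, 2 * i + 2
    if l < n and a[l] > a[largest]:
        largest = l
    if r < n and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, n, largest)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):  # Step 1: build max-heap bottom-up
        max_heapify(a, n, i)
    for end in range(n - 1, 0, -1):      # Steps 2-6: move max to the end,
        a[0], a[end] = a[end], a[0]      # shrink the heap, and re-heapify
        max_heapify(a, end, 0)
    return a

print(heap_sort([23, 14, 11, 86, 34]))  # [11, 14, 23, 34, 86]
```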
Example with max heap
• Input: 23, 14, 11, 86, 34

Initial tree:             After max_heapify at node 14:

      23                        23

  14      11                86      11

86  34                    14  34
106
86

23 11

14 34

107
86

34 11

14 23

108
23

34 11

14 86

Resultant Array =
110
23

34 11

14 86

Resultant Array = 86
111
23

34 11

14

Resultant Array = 86
112
34

23 11

14

Resultant Array = 86
113
14

23 11

34

Resultant Array = 86
115
14

23 11

34

Resultant Array = 34 86
116
14

23 11

Resultant Array = 34 86
117
23

14 11

Resultant Array = 34 86
119
11

14 23

Resultant Array = 34 86
122
11

14 23

Resultant Array = 23 34 86
123
11

14

Resultant Array = 23 34 86
124
14

11

Resultant Array = 23 34 86
126
11

14

Resultant Array = 23 34 86
127
11

14

Resultant Array = 14 23 34 86
128
11

Resultant Array = 11 14 23 34 86
129
Resultant Array = 11 14 23 34 86

130
