DAA Endsem Paper Sol Unit I
Tech V Sem
Design and Analysis of Algorithms
Unit-I
2 Marks Questions
1. Difference between Binary Tree, Complete Binary Tree and Full Binary Tree
(2017-18)
In a binary tree every node has at most two children; a node may have zero, one, or
two children.
In a complete binary tree, every level except possibly the last is completely filled, and
the last level is filled from left to right up to some point. Consequently, every node
above the last two levels has two children; nodes at the lowest level have (by
definition) zero children, and nodes at the second-to-last level may have 0, 1 or 2
children.
In a full binary tree every node has either 0 or 2 children, and both kinds of nodes can
appear at any level of the tree.
Not every full binary tree is a complete binary tree: in a full binary tree leaves can
appear at any level, not just the last two, and the last level need not be filled from left
to right without gaps.
Not every complete binary tree is a full binary tree: a complete binary tree may contain
a node with exactly one child. However, a complete binary tree always has at most one
node that does not satisfy the definition of a full binary tree.
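To make the two definitions concrete, here is a minimal Python sketch (not part of the
original answer; Node, is_full and is_complete are illustrative names) that tests both
properties on a linked binary tree:

from collections import deque

class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

def is_full(root):
    """Every node has either 0 or 2 children."""
    if root is None:
        return True
    if (root.left is None) != (root.right is None):
        return False          # exactly one child violates fullness
    return is_full(root.left) and is_full(root.right)

def is_complete(root):
    """All levels full except possibly the last, filled left to right."""
    q, seen_gap = deque([root]), False
    while q:
        node = q.popleft()
        if node is None:
            seen_gap = True   # first missing slot in level order
        else:
            if seen_gap:
                return False  # a node appears after a gap
            q.append(node.left)
            q.append(node.right)
    return True

root = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(is_full(root), is_complete(root))   # True True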
2. Difference between Dynamic Programming and the Greedy approach (2017-18)
In dynamic programming, we make a choice at each step, but the choice usually
depends on the solutions to subproblems. Consequently, we solve dynamic
programming problems in a bottom-up manner, progressing from smaller subproblems
to larger subproblems.
In a greedy algorithm, we make whatever choice seems best at the moment and then
solve the subproblem that remains after the choice is made. Thus, unlike dynamic
programming, which solves the subproblems bottom up, a greedy strategy usually
progresses in a top-down fashion, making one greedy choice after another.
Not all optimization problems can be solved with the greedy approach. For example,
the Fractional Knapsack problem can be solved using the greedy approach (see the
sketch below), whereas the 0-1 Knapsack problem cannot.
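As a concrete illustration of the greedy choice, here is a minimal Python sketch of the
Fractional Knapsack algorithm (the item list and capacity are example values, not from
the paper):

def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs; returns the maximum total value."""
    # Greedy choice: always take the item with the highest value/weight ratio.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)
    total = 0.0
    for value, weight in items:
        if capacity == 0:
            break
        take = min(weight, capacity)        # whole item, or a fraction of it
        total += value * (take / weight)
        capacity -= take
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0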
3. Solve the recurrence T(n) = 4T(n/2) + n² using the Master method.
Here a = 4, b = 2, f(n) = n², p = 0, k = 2.
log_b a = log₂4 = 2 = k, so case 2 applies.
Therefore, T(n) = Θ(n² lg n)
4. Name the sorting algorithm that is most practically used and also write its Time
Complexity. (2017-18)
Quicksort is one of the most efficient sorting algorithms, which makes it one of the
most widely used in practice. It first selects a pivot element that partitions the data:
elements smaller than the pivot go to its left and larger elements go to its right. Once
the data is partitioned, the two partitions are correctly positioned relative to the pivot,
with larger values on the right and smaller values on the left, and each partition is then
sorted recursively. Quicksort is thus a divide-and-conquer algorithm. Its time
complexity is O(n lg n) in the average case and O(n²) in the worst case.
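A short Python sketch of this scheme, assuming a Lomuto-style partition with the last
element as pivot (one of several common pivot choices):

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = quicksort_partition(a, lo, hi)  # pivot lands in its final position
        quicksort(a, lo, p - 1)             # smaller values on the left
        quicksort(a, p + 1, hi)             # larger values on the right

def quicksort_partition(a, lo, hi):
    pivot, i = a[hi], lo - 1
    for j in range(lo, hi):
        if a[j] <= pivot:
            i += 1
            a[i], a[j] = a[j], a[i]
    a[i + 1], a[hi] = a[hi], a[i + 1]
    return i + 1

data = [5, 2, 9, 1, 7]
quicksort(data)
print(data)  # [1, 2, 5, 7, 9]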
Space complexity is sometimes ignored because the space used is minimal or obvious,
but it can become as important an issue as time.
10. Explain Heap Sort on the array. Illustrate the operation of Heap Sort on the
array A = {6, 14, 3, 25, 2, 10, 20, 7, 6} (2017-18)
[The original figures traced the heap as a tree at every step; only the array states are
reproduced here. Elements after the | are out of the heap, in their final positions.]
Build-Max-Heap turns A into a max-heap:
A = [25, 14, 20, 7, 2, 10, 3, 6, 6]
Heap Sort then repeatedly exchanges A[1] with the last element of the heap, reduces
the heap size by 1, and calls Max-Heapify(A, 1):
[20, 14, 10, 7, 2, 6, 3, 6 | 25]
[14, 7, 10, 6, 2, 6, 3 | 20, 25]
[10, 7, 6, 6, 2, 3 | 14, 20, 25]
[7, 6, 6, 3, 2 | 10, 14, 20, 25]
[6, 3, 6, 2 | 7, 10, 14, 20, 25]
[6, 3, 2 | 6, 7, 10, 14, 20, 25]
[3, 2 | 6, 6, 7, 10, 14, 20, 25]
[2 | 3, 6, 6, 7, 10, 14, 20, 25]
The array is sorted A=[2, 3, 6, 6, 7, 10, 14, 20, 25]
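For reference, a compact Python rendering of the Build-Max-Heap / Max-Heapify
procedure traced above (a sketch using 0-based indices, unlike the 1-based trace):

def max_heapify(a, i, heap_size):
    l, r, largest = 2 * i + 1, 2 * i + 2, i
    if l < heap_size and a[l] > a[largest]:
        largest = l
    if r < heap_size and a[r] > a[largest]:
        largest = r
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def heap_sort(a):
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):   # Build-Max-Heap
        max_heapify(a, i, n)
    for end in range(n - 1, 0, -1):       # repeatedly extract the maximum
        a[0], a[end] = a[end], a[0]       # exchange A[1] and A[heap-size]
        max_heapify(a, 0, end)            # restore the heap property

A = [6, 14, 3, 25, 2, 10, 20, 7, 6]
heap_sort(A)
print(A)  # [2, 3, 6, 6, 7, 10, 14, 20, 25]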
11. The recurrence T(n) = 7T(n/2) + n² describes the running time of an algorithm A.
Another competing algorithm B has a running time of T′(n) = aT′(n/4) + n². What
is the largest integer value for ‘a’ so that B is asymptotically faster than A?
(2017-18)
Algorithm A: T(n) = 7T(n/2) + n². Here a = 7, b = 2, p = 0 and k = 2. Since
log₂7 ≈ 2.807 > k, case 1 applies and T(n) = Θ(n^(log₂7)).
Algorithm B: T′(n) = aT′(n/4) + n², so T′(n) = Θ(n^(log₄a)) whenever log₄a > 2.
For B to be faster than A, the largest possible value of ‘a’ must satisfy log₄a < log₂7
≈ 2.807. Setting log₄a = log₂7 gives a = 4^(log₂7) = 7² = 49, so at a = 49 both A and
B have the same growth. Therefore the largest integer value is a = 48, for which B is
asymptotically faster than A.
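As a quick numeric sanity check (not part of the original answer), both recurrences can
be evaluated exactly at powers of 4 with base case T(1) = 1; with a = 48 the ratio keeps
growing with n, confirming that B is asymptotically faster:

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):                      # algorithm A: T(n) = 7T(n/2) + n^2
    return 1 if n <= 1 else 7 * T(n // 2) + n * n

@lru_cache(maxsize=None)
def S(n):                      # algorithm B with a = 48: T'(n) = 48T'(n/4) + n^2
    return 1 if n <= 1 else 48 * S(n // 4) + n * n

for k in range(2, 11, 2):      # evaluate at n = 4^k so n/4 stays integral
    n = 4 ** k
    print(n, T(n) / S(n))      # the ratio grows with n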
2018-19
1. Use a recursion tree to solve the recurrence T(n) = T(αn) + T((1−α)n) + cn, where
α and c are constants with 0 < α < 1 and c > 0.
Assume α ≥ 1/2 (otherwise swap the roles of α and 1−α). Every level of the tree
contributes cost cn until the shortest branch bottoms out. The recursion tree is full for
log_(1/(1−α)) n levels, each contributing cn, so we guess Ω(n log_(1/(1−α)) n) =
Ω(n lg n). The whole tree has log_(1/α) n levels, each contributing ≤ cn, so we guess
O(n log_(1/α) n) = O(n lg n). Hence T(n) = Θ(n lg n).
2. The recurrence T(n) = 7T(n/3) + n² describes the running time of an algorithm
A. Another competing algorithm B has a running time of S(n) = aS(n/9) + n².
What is the smallest integer value for ‘a’ so that A is asymptotically faster
than B?
For A: log₃7 ≈ 1.77 < 2, so case 3 applies and T(n) = Θ(n²).
For B: S(n) = Θ(n²) as long as log₉a < 2, i.e. a < 81; at a = 81, log₉81 = 2 and
S(n) = Θ(n² lg n).
So the smallest value of ‘a’ for A to be asymptotically faster than B is 81, because any
value less than 81 makes the time complexities of both A and B of the order of Θ(n²).
[Figure: a second Heap Sort trace, drawn as trees; only the array states are
reproduced here. Elements after the | are out of the heap, in their final positions.]
Build-Max-Heap produces the max-heap A = [45, 23, 18, 17, 5, 9, 1, 9, 6].
Repeatedly exchange A[1] with the last heap element, reduce the heap size by 1, and
call Max-Heapify(A, 1):
[23, 17, 18, 9, 5, 9, 1, 6 | 45]
[18, 17, 9, 9, 5, 6, 1 | 23, 45]
[17, 9, 9, 1, 5, 6 | 18, 23, 45]
[9, 6, 9, 1, 5 | 17, 18, 23, 45]
[9, 6, 5, 1 | 9, 17, 18, 23, 45]
[6, 1, 5 | 9, 9, 17, 18, 23, 45]
[5, 1 | 6, 9, 9, 17, 18, 23, 45]
[1 | 5, 6, 9, 9, 17, 18, 23, 45]
The array is sorted: A = [1, 5, 6, 9, 9, 17, 18, 23, 45]
2019-20
4. How do you compare the performance of various algorithms? 2020-21, 2021-22
Algorithms that are definite and effective are also called computational procedures.
As an algorithm is executed, it uses CPU time and memory to hold the program and
data. Analysis refers to the task of determining how much computing time and storage
an algorithm requires.
In general, the time taken by an algorithm grows with the size of the input, so the
running time of a program is described as a function of the size of its input. The usual
measures are the following (a small timing sketch follows this list):
o Worst case running time: an upper bound on the running time for any input.
Knowing it gives us a guarantee that the algorithm will never take longer
(at most).
o Average case running time: an estimate of the running time for an average
input. In average case analysis, we take all possible inputs, calculate the
computing time for each of them, sum all the calculated values, and divide the
sum by the total number of inputs.
o Best case running time: the function defined by the minimum number of steps
taken on any instance of size n (at least). In best case analysis, we calculate a
lower bound on the running time of an algorithm, using the input that causes
the minimum number of operations to be executed.
o Amortized running time: the time required to perform a sequence of related
operations, averaged over all the operations performed. It guarantees the
average performance of each operation in the worst case.
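A small timing sketch, assuming insertion sort as the test algorithm and illustrative
input sizes, showing how best-case and worst-case inputs behave differently:

import time

def insertion_sort(a):
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:    # shift larger elements to the right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key

for name, data in [("best case (already sorted) ", list(range(2000))),
                   ("worst case (reverse sorted)", list(range(2000, 0, -1)))]:
    start = time.perf_counter()
    insertion_sort(data)
    print(name, round(time.perf_counter() - start, 4), "seconds")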
In increasing order of growth rate: n + 10 < 10n < 100n < n²log n < n^2.5 < 2ⁿ
Here a = 2, b = 2 and f(n) = n² + 2n + 1. So f(n) = O(n²) for n > 1 and c = 4.
7. Prove that the worst case running time of any comparison sort is Ω(n lg n).
Any decision tree that sorts n elements has height Ω(n lg n).
Proof:
There are n! possible permutations of the input, so the decision tree has at least n!
leaves. A binary tree of height h has at most 2^h leaves, so 2^h ≥ n!, which gives
h ≥ lg(n!) = Ω(n lg n), since lg(n!) ≥ (n/2)lg(n/2) by Stirling’s approximation.
Hence any comparison sort performs Ω(n lg n) comparisons in the worst case.
8. Among Merge sort, Insertion sort and Quick sort, which one is the best in the
worst case? Use that algorithm to sort the list E, X, A, M, P, L, E in alphabetic
order. 2021-22.
Merge sort is the best of the three in the worst case: its worst-case running time is
Θ(n lg n), whereas Insertion sort and Quick sort both take Θ(n²) in the worst case.
Merge-Sort(A, p, r)
a. If p < r
b. Then q ← ⌊(p + r)/2⌋
c. Merge-Sort(A, p, q)
d. Merge-Sort(A, q + 1, r)
e. Merge(A, p, q, r)
E X A M P L E
E X A M | P L E
E X | A M | P L | E
E | X | A | M | P | L | E
E X | A M | L P | E
A E M X | E L P
A E E L M P X
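A direct Python rendering of the Merge-Sort pseudocode above, shown on the same
list of letters:

def merge_sort(a):
    if len(a) <= 1:
        return a
    q = len(a) // 2                       # q <- floor((p + r)/2)
    return merge(merge_sort(a[:q]), merge_sort(a[q:]))

def merge(left, right):
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:           # <= keeps the sort stable
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort(list("EXAMPLE")))  # ['A', 'E', 'E', 'L', 'M', 'P', 'X']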
[Figure: recursion tree for T(n) = T(n/2) + T(n/4) + T(n/8) + cn. The root costs cn,
its children cost cn/2, cn/4, cn/8, and in general the costs at depth i sum to
(7/8)^i·cn (cn, (7/8)cn, (49/64)cn, …); the height is at most log₂n and each leaf costs
Θ(1). The per-level costs form a decreasing geometric series, so T(n) ≤ cn·∑(7/8)^i =
8cn = O(n), and the root alone gives T(n) = Ω(n); hence T(n) = Θ(n).]
2020-21
• The portion of the definition that does not contain T is called the base case of
the recurrence relation; the portion that contains T is called the recurrent or
recursive case.
• Recurrence relations are useful for expressing the running times (i.e., the
number of basic operations executed) of recursive algorithms.
Master’s Method can be used to solve recurrences of the form
T(n) = a T(n/b) + f(n), where a ≥ 1 and b > 1.
When f(n) = Θ(n^k logᵖ n) with p ≥ 0, compare log_b a with k:
• Case 1: if log_b a > k, then T(n) = Θ(n^(log_b a)).
• Case 2: if log_b a = k, then T(n) = Θ(n^k logᵖ⁺¹ n).
• Case 3: if log_b a < k, then T(n) = Θ(n^k logᵖ n).
• We look at input sizes large enough to make only the order of growth of the
running time function relevant.
• That is, we are concerned with how the running time of an algorithm increases
in the limit, as the size of the input increases without bound.
• Usually an algorithm that is asymptotically more efficient will be the best
choice for all but very small inputs.
• Asymptotic complexity defines the running time of an algorithm as a function
of input size n for large n. It is expressed using only the highest-order term in
the expression for the exact running time.
• Θ, O, Ω, o, ω
• Defined for functions over the natural numbers.
• Ex: f(n) = Θ(n²).
• Describes how f(n) grows in comparison to n².
• Each notation defines a set of functions; in practice it is used to compare the
growth of two functions.
• The notations describe different rate-of-growth relations between the
defining function and the defined set of functions.
Ω-notation
For a function g(n), we define Ω(g(n)), big-omega of g of n, as the set:
Ω(g(n)) = {f(n) : there exist positive constants c and n₀ such that
0 ≤ cg(n) ≤ f(n) for all n ≥ n₀}
• The set of all functions whose rate of growth is the same as or higher than
that of g(n); g(n) is an asymptotic lower bound for f(n).
Counting sort example on A = [4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3] (values in 0..5).
After computing the running sums with
for i ← 1 to k
    C[i] ← C[i] + C[i−1]
the count array becomes C = [2, 4, 6, 9, 11, 12].
i:    1 2 3 4 5 6 7 8 9 10 11 12
A[i]: 4 0 2 0 1 3 5 4 1 3  2  3
B[i]: 0 0 1 1 2 2 3 3 3 4  4  5
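A Python sketch of counting sort consistent with the trace above (the function name
and signature are illustrative):

def counting_sort(A, k):
    """Sort A whose elements lie in the range 0..k."""
    C = [0] * (k + 1)
    for x in A:                 # C[x] = number of elements equal to x
        C[x] += 1
    for i in range(1, k + 1):   # C[i] = C[i] + C[i-1]  (running sums)
        C[i] += C[i - 1]
    B = [0] * len(A)
    for x in reversed(A):       # right-to-left placement keeps the sort stable
        C[x] -= 1
        B[C[x]] = x
    return B

A = [4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3]
print(counting_sort(A, 5))  # [0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5]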
INSERTION SORT
                                           Cost   No. of times
1 for i ← 2 to length[A]                   C1     n
2   key ← A[i]                             C2     n−1
3   j ← i − 1                              C3     n−1
4   while j > 0 and A[j] > key             C4     ∑ (i=2 to n) ti
5     A[j + 1] ← A[j]                      C5     ∑ (i=2 to n) (ti − 1)
6     j ← j − 1                            C6     ∑ (i=2 to n) (ti − 1)
7   A[j + 1] ← key                         C7     n−1
ti: number of times the while test is executed for that value of i
Total Cost T(n) = C1·n + C2(n−1) + C3(n−1) + C4 ∑ ti + C5 ∑ (ti − 1) +
C6 ∑ (ti − 1) + C7(n−1)
Best Case:
o The input array is already sorted. Then for each i = 2, 3, …, n, A[j] ≤ key when j
has its initial value of i − 1, so the while test fails immediately.
o Thus ti = 1 for i = 2, 3, …, n, giving ∑ ti = n − 1 and ∑ (ti − 1) = 0.
o T(n) = C1·n + C2(n−1) + C3(n−1) + C4(n−1) + C5·0 + C6·0 + C7(n−1)
o T(n) = (C1 + C2 + C3 + C4 + C7)n − (C2 + C3 + C4 + C7)
o T(n) = an + b = Θ(n). T(n) is a linear function of n.
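To check this analysis empirically, here is a small sketch (not from the original notes)
that counts ti, the number of while-test executions for each i:

def insertion_sort_counts(a):
    t = []
    for i in range(1, len(a)):        # i = 2..n in the 1-based pseudocode
        key, j, tests = a[i], i - 1, 0
        while True:
            tests += 1                # one execution of the while test
            if j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            else:
                break
        a[j + 1] = key
        t.append(tests)
    return t

print(insertion_sort_counts([1, 2, 3, 4, 5]))  # best case: every ti = 1
print(insertion_sort_counts([5, 4, 3, 2, 1]))  # worst case: ti = i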
[Figure: recursion tree for T(n) = 3T(n/4) + cn², drawn for n = 4^k. The root costs
cn², its 3 children cost c(n/4)² each, the 9 grandchildren cost c(n/16)² each, and in
general the costs at depth i sum to (3/16)^i·cn²; the tree has height log₄n, and the
T(1) leaves contribute Θ(n^(log₄3)) in total.]
T(n) = cn² + (3/16)cn² + (3/16)²cn² + … + (3/16)^(log₄n − 1)·cn² + Θ(n^(log₄3))
     = ∑ (i=0 to log₄n − 1) (3/16)^i·cn² + Θ(n^(log₄3))
     < ∑ (i=0 to ∞) (3/16)^i·cn² + Θ(n^(log₄3))
     = (1/(1 − 3/16))·cn² + Θ(n^(log₄3))
     = (16/13)·cn² + Θ(n^(log₄3))
     = O(n²)
17. Write Merge sort and use it to sort {23, 11, 5, 15, 68, 31, 4, 17}. [2019-20]
23, 11, 5, 15, 68, 31, 4, 17
23, 11, 5, 15 | 68, 31, 4, 17
23, 11 | 5, 15 | 68, 31 | 4, 17
23 | 11 | 5 | 15 | 68 | 31 | 4 | 17
11, 23 | 5, 15 | 31, 68 | 4, 17
5, 11, 15, 23 | 4, 17, 31, 68
4, 5, 11, 15, 17, 23, 31, 68
A sorting algorithm is said to be stable if two objects with equal keys appear in
the same order in the sorted output as they appear in the input data set.
Informally, stability means that equivalent elements retain their relative positions
after sorting.
Examples: Insertion sort, Merge sort, Counting sort.
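A tiny Python illustration (the example records are illustrative; Python’s built-in sort
is stable):

records = [("b", 2), ("a", 1), ("c", 1)]      # ("a", 1) precedes ("c", 1) in the input
print(sorted(records, key=lambda r: r[1]))    # [('a', 1), ('c', 1), ('b', 2)]: order kept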
[Figure: Heap Sort trace for A = {25, 57, 48, 36, 12, 91, 86, 2}, drawn as trees; only
the array states are reproduced here. Elements after the | are out of the heap, in their
final positions.]
Build-Max-Heap produces the max-heap A = [91, 57, 86, 36, 12, 48, 25, 2].
Exchange A[1] and A[8], reduce the heap size by 1, and call Max-Heapify(A, 1);
repeat for each step:
[86, 57, 48, 36, 12, 2, 25 | 91]
[57, 36, 48, 25, 12, 2 | 86, 91]
[48, 36, 2, 25, 12 | 57, 86, 91]
[36, 25, 2, 12 | 48, 57, 86, 91]
[25, 12, 2 | 36, 48, 57, 86, 91]
[12, 2 | 25, 36, 48, 57, 86, 91]
[2 | 12, 25, 36, 48, 57, 86, 91]
The array is sorted: A = [2, 12, 25, 36, 48, 57, 86, 91]