Daa Endsem Paper Sol Unit I

The document discusses differences between binary trees, complete binary trees, and full binary trees. It also covers the differences between greedy algorithms and dynamic programming, solving recurrence relations using the master method, time and space complexity analysis, characteristics of algorithms, and sorting arrays using heap sort.

B. Tech V Sem
Design and Analysis of Algorithms

Unit-I

2 Marks Questions

1. Difference Between Binary Tree , Complete Binary Tree and Full Binary Tree
(2017-18)
In a binary tree every node contains at most 2 children; a node can have zero, one or two children.

In a complete binary tree all levels except the last are completely filled, and the last level is filled from left to right up to a point. All nodes above the last two levels have two children; at the lowest level the nodes have (by definition) zero children, and a node at the second-to-last level can have 0, 1 or 2 children.

In a full binary tree every node has either 0 or 2 children. Both kinds of nodes can appear at any level of the tree.

Not every full binary tree is a complete binary tree: in a full binary tree leaves can appear at any level, not just the last two, and the last level need not be filled from left to right without gaps.
Not every complete binary tree is a full binary tree: there may be a node with just one child. However, in a complete binary tree there is always at most one node that violates the full-binary-tree property.

2. Difference between Greedy Technique and Dynamic programming.

Swapna Singh KCS-503 DAA


(2017-18)

A greedy algorithm obtains an optimal solution to a problem by making a sequence of choices. At each decision point, the choice that seems best at the moment is made; this is referred to as the greedy choice. Greedy-choice property: “a globally optimal solution can be arrived at by making a locally optimal (greedy) choice”.

In dynamic programming we also make a choice at each step, but the choice usually depends on the solutions to subproblems. Consequently, we typically solve dynamic programming problems in a bottom-up manner, progressing from smaller subproblems to larger ones.

In a greedy algorithm, we make whatever choice seems best at the moment and then solve the subproblem that remains after the choice is made. Thus, unlike dynamic programming, which solves subproblems bottom-up, a greedy strategy usually progresses top-down, making one greedy choice after another.

Not all optimization problems can be solved with the greedy approach. For example, the fractional knapsack problem can be solved greedily, whereas the 0-1 knapsack problem cannot.

For n = 3 items with Pi = (60, 100, 120), Wi = (10, 20, 30) and knapsack capacity = 50:

Pi/Wi = (6, 5, 4)
Using the greedy (highest ratio first) rule on the 0-1 problem we select I1 and I2 for a profit of 160, which is not optimal. The optimal solution is I2 and I3 with profit 220.
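This instance can be checked mechanically. A minimal sketch (variable names are illustrative, not from the paper) comparing the greedy ratio rule against the brute-force 0-1 optimum:

```python
# Greedy-by-ratio on the 0-1 knapsack instance above, versus the true
# optimum found by exhaustive search over all subsets of items.
from itertools import combinations

profits = [60, 100, 120]   # P1..P3
weights = [10, 20, 30]     # W1..W3
capacity = 50

# Greedy: take whole items in decreasing profit/weight ratio.
order = sorted(range(3), key=lambda i: profits[i] / weights[i], reverse=True)
greedy_profit, used = 0, 0
for i in order:
    if used + weights[i] <= capacity:
        used += weights[i]
        greedy_profit += profits[i]

# Brute force over all feasible subsets gives the 0-1 optimum.
best = max(
    sum(profits[i] for i in s)
    for r in range(4)
    for s in combinations(range(3), r)
    if sum(weights[i] for i in s) <= capacity
)
print(greedy_profit, best)  # 160 220
```

Greedy commits to I1 and I2 and then cannot fit I3, while the optimum skips the highest-ratio item entirely.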

3. Solve the following recurrence using Master method: T (n) = 4T (n/3) + n2


(2017-18)

Here a = 4, b = 3, f(n) = n^2, p = 0, k = 2.
logb a = log3 4 ≈ 1.26 < k = 2, so case 3 applies.
Checking the regularity condition a·f(n/b) ≤ c·f(n): 4(n/3)^2 = (4/9)n^2 ≤ c·n^2 for c = 4/9 < 1.
Therefore, T(n) = Θ(n^2)

4. Name the sorting algorithm that is most practically used and also write its Time
Complexity. (2017-18)

Quicksort is one of the most efficient sorting algorithms in practice, and this makes it one of the most widely used as well. It first selects a pivot element; the data is then partitioned so that numbers smaller than the pivot are on its left and larger numbers on its right. After partitioning, the two partitions are correctly placed relative to the pivot, and quicksort is applied recursively to each of them, following the divide and conquer paradigm. Its average-case time complexity is O(n log n); its worst case is O(n^2).
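The partitioning scheme described above can be sketched as follows (a simple out-of-place variant for clarity; production implementations usually partition in place):

```python
# Quicksort: pick a pivot, split into smaller / equal / larger, recurse.
def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = a[len(a) // 2]
    smaller = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    larger = [x for x in a if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([6, 14, 3, 25, 2, 10, 20, 7, 6]))
# [2, 3, 6, 6, 7, 10, 14, 20, 25]
```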

5. Find the time complexity of the recurrence relation (2017-18)


T(n)= n +T(n/10)+T(7n/5)

Swapna Singh KCS-503 DAA


As written, the second subproblem 7n/5 is larger than n, so the recursion would never bottom out; the intended recurrence is presumably T(n) = n + T(n/10) + T(7n/10).
Guess T(n) = O(n), i.e. T(n) ≤ cn, and use the substitution method:
T(n) ≤ n + c(n/10) + c(7n/10) = n + (8/10)cn ≤ cn whenever c ≥ 5.
Therefore, T(n) = O(n)

6. Compare Time Complexity & Space Complexity (2017-18)

The complexity of an algorithm is defined by a function that describes the efficiency of the algorithm in terms of the amount of data the algorithm must process. There are two main complexity measures of the efficiency of an algorithm:
Time complexity is a function describing the amount of time an algorithm takes in
terms of the amount of input to the algorithm. "Time" can mean the number of memory
accesses performed, the number of comparisons between integers, the number of times
some inner loop is executed, or some other natural unit related to the amount of real
time the algorithm will take. Several factors unrelated to the algorithm that can affect
the real time (like the language used, type of computing hardware, proficiency of the
programmer, optimization in the compiler, etc.) are not considered.
Space complexity is a function describing the amount of memory (space) an algorithm
takes in terms of the amount of input to the algorithm. The measure is independent of
the actual number of bytes needed to represent the unit.

Space complexity is sometimes ignored because the space used is minimal and/or
obvious, but sometimes it becomes as important an issue as time.

7. What are the characteristics of the algorithm? (2017-18)


An algorithm is a finite set of instructions that if followed, accomplishes a particular
task. In addition, all algorithms must satisfy the following criteria:
• Input – zero or more quantities are externally supplied.
• Output – at least one quantity is produced.
• Definiteness – each instruction is clear and unambiguous.
• Finiteness – the algorithm terminates after a finite number of steps.
• Effectiveness – every instruction must be basic enough that it can be carried out, in principle, by a person using only pencil and paper. Each operation must be feasible.
Algorithms that are definite and effective are also called computational procedures.
The study of algorithms covers the following aspects:
 Correctness
 Efficiency – asymptotic complexity
 Modelling – graphs, data structures, decomposing the problem
 Techniques – divide and conquer, greedy, dynamic programming.

8. Rank the following by growth rate : (2017-18)
2^lg n, log n, log(log n), log^2 n, (lg n)^lg n, 4, (3/2)^n, n!

Answer: 4 < log(log n) < log n < log^2 n < 2^lg n (= n) < (lg n)^lg n < (3/2)^n < n!
10 Marks Questions
9. Solve the following by recursion tree method: T(n)= n+T(n/5)+T(4n/5).
(2017-18)
 In a recursion tree, each node represents the cost of a single subproblem.
 We sum the costs within each level of the tree to obtain a set of per level costs and
then we sum all the per level costs to determine the total cost of all levels of
recursion.

Swapna Singh KCS-503 DAA


 The cn term at the root represents the cost at the top level of recursion, and the two subtrees of the root represent the costs incurred by the subproblems of size n/5 and 4n/5.
 The cost at each level is the sum of the costs of all subproblems at that level, and it is equal to cn (as long as no branch has terminated).
 As the subproblem size decreases as we move away from the root, we must eventually reach a boundary condition.
 The size of a node at depth i is n/5^i along the leftmost path and n(4/5)^i along the rightmost path.
 A path ends when its subproblem size hits n = 1

min. height: n/5^i = 1 ⟹ n = 5^i ⟹ i = log5 n
max. height: n(4/5)^i = 1 ⟹ n = (5/4)^i ⟹ i = log5/4 n

 Total cost = number of levels × cost per level
 Here the number of levels is at least log5 n + 1 and at most log5/4 n + 1
 Ignoring the bases (both logs are Θ(log n)), the number of levels is Θ(log n)
 Therefore, Total cost = Θ(log n) × cn = Θ(n log n)
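A quick numerical sanity check (mine, not part of the original answer) that the Θ(n log n) estimate is plausible: compute T exactly, using integer division as a stand-in for floors, and watch T(n)/(n lg n) stay within constant bounds as n grows.

```python
# Memoized evaluation of T(n) = n + T(n/5) + T(4n/5), T(n<=1) = 1.
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return n + T(n // 5) + T(4 * n // 5)

# If T(n) = Theta(n lg n), these ratios stay bounded by constants.
ratios = [T(n) / (n * log2(n)) for n in (10**2, 10**3, 10**4, 10**5)]
print(ratios)
```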

10. Explain Heap Sort on the array. Illustrate the operation of Heap Sort on the
array A = {6, 14, 3, 25, 2, 10, 20, 7, 6} (2017-18)

Using Build-Max-Heap(), an in-place sorting algorithm is easily constructed:

o The maximum element is at A[1]
o Discard it by swapping with the element at A[n]
 Decrement heap_size[A] by 1
 A[n] now contains the correct value
o Restore the heap property at A[1] by calling Max-Heapify()
o Repeat, always swapping A[1] with A[heap_size(A)]
Heapsort(A)
{
1. Build-Max-Heap(A)
2. for i ← length[A] down to 2
3.     do exchange A[1] ↔ A[i]
4.        heap_size[A] ← heap_size[A] − 1
5.        Max-Heapify(A, 1)
}

(The original shows the heap as a tree at each step.) Build-Max-Heap turns A = [6, 14, 3, 25, 2, 10, 20, 7, 6] into the max-heap [25, 14, 20, 7, 2, 10, 3, 6, 6]. Lines 2-5 then repeatedly exchange A[1] with the last element of the heap, shrink the heap by one, and call Max-Heapify(A, 1), extracting 25, 20, 14, 10, 7, 6, 6, 3 in turn.
The array is sorted A=[2, 3, 6, 6, 7, 10, 14, 20, 25]
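The pseudocode above can be sketched in runnable form (0-indexed, so the pseudocode's A[1] becomes a[0]):

```python
# Max-Heapify: sink a[i] until the subtree rooted at i is a max-heap.
def max_heapify(a, i, heap_size):
    largest = i
    left, right = 2 * i + 1, 2 * i + 2
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size)

def heapsort(a):
    # Build-Max-Heap: heapify every internal node, bottom up.
    for i in range(len(a) // 2 - 1, -1, -1):
        max_heapify(a, i, len(a))
    # Repeatedly move the maximum to the end and shrink the heap.
    for end in range(len(a) - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        max_heapify(a, 0, end)
    return a

print(heapsort([6, 14, 3, 25, 2, 10, 20, 7, 6]))
# [2, 3, 6, 6, 7, 10, 14, 20, 25]
```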

11. The recurrence T(n) = 7T(n/2) +n2 describe the running time of an algorithm A.
Another competing algorithm B has a running time of T’(n) = aT’(n/4) + n2. What
is the largest integer value for ‘a’ so that B is asymptotically faster than A.
(2017-18)

Algorithm A: T(n) = 7T(n/2) + n^2. Here a = 7, b = 2, p = 0 and k = 2.

Comparing logb a and k: log2 7 ≈ 2.807 > 2, so T(n) = Θ(n^2.807) (case 1).

Algorithm B: T'(n) = aT'(n/4) + n^2.

For B to be asymptotically faster than A, the largest possible value of 'a' must satisfy log4 a < 2.807.
Equating log4 a = log2 7:

a = 4^(log2 7) = (2^(log2 7))^2 = 7^2 = 49

At a = 49 both A and B have the same growth. So for the largest integer value a = 48, B is asymptotically faster than A.



2018-19
1. Use a recursion tree to give an asymptotically tight solution to the recurrence T(n) = T(αn) + T((1−α)n) + cn, where α is a constant in the range 0 < α < 1 and c > 0 is also a constant.

Assume without loss of generality that α ≤ 1/2 (the argument is symmetric otherwise). The recursion tree is full for log1/α n levels, each contributing cn, so we guess Ω(n log1/α n) = Ω(n lg n).
Its total height is log1/(1−α) n levels, each contributing ≤ cn, so we guess O(n log1/(1−α) n) = O(n lg n).

T(n) = Θ(n lg n), verified by substitution.

• Guess: T(n) = O(n lg n)

o Induction goal: T(n) ≤ dn lg n, for some d > 0 and n ≥ n0
o Induction hypothesis: T(αn) ≤ dαn lg(αn) and T((1−α)n) ≤ d(1−α)n lg((1−α)n)
• Proof of the induction goal:
• T(n) ≤ dαn lg(αn) + d(1−α)n lg((1−α)n) + cn
• = dαn[lg α + lg n] + d(1−α)n[lg(1−α) + lg n] + cn
• = [dα + d(1−α)]n lg n + dαn lg α + d(1−α)n lg(1−α) + cn
• = dn lg n + dn[α lg α + (1−α) lg(1−α)] + cn
• ≤ dn lg n if d[α lg α + (1−α) lg(1−α)] + c ≤ 0, i.e. for d ≥ c / [α lg(1/α) + (1−α) lg(1/(1−α))] > 0
• Therefore T(n) = O(n lg n)

2. The recurrence T(n) = 7T(n/3) +n2 describe the running time of an algorithm
A. Another competing algorithm B has a running time of S(n) = aS(n/9) + n2.
What is the smallest integer value for ‘a’ so that A is asymptotically faster
than B.

Algorithm A: T(n) = 7T(n/3) + n^2

a = 7, b = 3, f(n) = n^2, p = 0, k = 2

Comparing logb a and k: log3 7 ≈ 1.77 < 2



Therefore T(n) = Θ(n^2) (case 3).

For Algorithm B: S(n) = aS(n/9) + n^2

Here b = 9, p = 0, k = 2, so we compare log9 a with k = 2.

For all values a < 81 we have log9 a < 2, so S(n) = Θ(n^2) and B is asymptotically as fast as A.
For a = 81, S(n) = Θ(n^2 lg n) by case 2 of the Master's Method.

So the smallest integer value of 'a' for which A is asymptotically faster than B is 81, because any value less than 81 makes the time complexities of both A and B Θ(n^2).

3. Sort the following array A =[23,9,18,45,5,9,1,17,6] using heapsort.

(The original shows the heap as a tree at each step.) Build-Max-Heap turns A = [23, 9, 18, 45, 5, 9, 1, 17, 6] into the max-heap [45, 23, 18, 17, 5, 9, 1, 9, 6]. Then, for i = 9 down to 2: exchange A[1] with A[i], reduce the heap size by 1, and call Max-Heapify(A, 1). This extracts 45, 23, 18, 17, 9, 9, 6, 5 in turn.

So the sorted array is A = [1, 5, 6, 9, 9, 17, 18, 23, 45]

2019-20
4. How do you compare the performance of various algorithms? 2020-21,2021-
22

An algorithm is a finite set of instructions that if followed, accomplishes a


particular task. In addition, all algorithms must satisfy the following criteria:

o Input - zero or more quantities are externally supplied.


o Output – at least one quantity is produced.
o Definiteness – each instruction is clear and unambiguous.
o Effectiveness – every instruction must be very basic so that it can be carried
out, in principle, by a person using only pencil and paper. Each operation must
be feasible.

Algorithms that are definite and effective are also called computational procedures. As an algorithm executes, it uses CPU time and memory to hold the program and data. Analysis refers to the task of determining how much



computing time and storage an algorithm requires in terms of the input data to the
algorithm.

In general, the time taken by an algorithm grows with the size of the input, so the
running time of a program is described as the function of the size of its input.

o Running time: the running time of an algorithm on a particular input is the number of primitive operations or steps executed. A constant amount of time is assumed to execute each line of pseudocode; one line may take a different amount of time than another, and each execution of the ith line takes time ci, where ci is a constant.
o Complexity
o Time complexity –running time of the program as a function of the size of the
input.  CPU usage
o Space complexity –amount of computer memory required during the program
execution, as a function of input size.  RAM usage

Time complexity represents the number of times a statement is executed. The time complexity of an algorithm is NOT the actual time required to execute a particular piece of code, since that depends on other factors like the programming language, operating system, processing power, etc. The idea behind time complexity is to measure the execution time of an algorithm in a way that depends only on the algorithm itself and its input.

Different Running Time

o Worst case running time : It is an upper bound on the running time for any
input. Knowing it gives us a guarantee that the algorithm will never take any
longer time. (At most).
o Average case running time : It is an estimate of the running time for an
average input. In average case analysis, we take all possible inputs and
calculate computing time for all of the inputs. Sum all the calculated values
and divide the sum by total number of inputs.
o Best case running time : It is the function defined by the minimum number
of steps taken on any instance of size n (at least). In the best case analysis, we
calculate lower bound on running time of an algorithm. We must know the
case that causes minimum number of operations to be executed.
o Amortized running time : Here the time required to perform a sequence of
related operations is averaged over all the operations performed. It guarantees
the average performance of each operation in the worst case.

5. Arrange the following functions in ascending order of growth rates:

n^2.5, 2^n, n+10, 10n, 100n, n^2 log n

n+10 < 10n < 100n < n^2 log n < n^2.5 < 2^n

6. Solve the recurrence T(n) = 2T(n/2) + n2 +2n+1

Here a = 2, b = 2 and f(n) = n^2 + 2n + 1 = Θ(n^2), so k = 2 and p = 0.

Using the Master's method, compare logb a with k:

logb a = log2 2 = 1 and k = 2

So case 3 applies.
Checking the regularity condition a·f(n/b) ≤ c·f(n) for some c < 1:
2(n/2)^2 = (1/2)n^2 ≤ c·n^2 with c = 1/2 < 1

Therefore, T(n) = Θ(n^2)

7. Prove that the worst case running time of any comparison sort is Ω(nlogn)

A decision tree can model the execution of any comparison sort:

• One tree for each input size n.


• View the algorithm as splitting whenever it compares two elements.
• The tree contains the comparisons along all possible instruction traces.
• The running time of the algorithm = the length of the path taken.
• Worst-case running time = height of tree.

Any decision tree that sorts n elements has height Ω(n lg n).

Proof:
There are n! possible outcomes (permutations), so the decision tree must have at least n! leaves.

A binary tree of height h has at most 2^h leaves.

Thus 2^h ≥ n!, so h ≥ lg(n!) = Ω(n lg n).
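The last step, lg(n!) = Ω(n lg n), follows without Stirling's approximation by keeping only the top half of the factorial's factors — a quick sketch:

```latex
h \;\ge\; \lg(n!) \;=\; \sum_{i=2}^{n}\lg i \;\ge\; \sum_{i=\lceil n/2\rceil}^{n}\lg i
\;\ge\; \frac{n}{2}\,\lg\frac{n}{2} \;=\; \Omega(n\lg n)
```

Each of the roughly n/2 largest factors is at least n/2, which already gives the required bound.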

8. Among Merge sort, Insertion sort and Quick sort, which one is best in the worst case? Use that algorithm to sort the list E, X, A, M, P, L, E in alphabetic order. 2021-22.

Worst case of Merge sort = O(nlgn)


The worst case of Insertion sort is O(n^2); it occurs when the input array is sorted in decreasing order.
The worst case of Quick sort is O(n^2); it occurs when the input array is already sorted (increasing or decreasing).

So, Merge sort has best worst case.

Merge sort algorithm is based on following approach.

• Given a sequence of n elements a[1],a[2],…….a[n], the general idea is to split


them into two sets a[1],a[2],…a[⎿n/2⏌] and a[⎿n/2⏌+1]…..a[n].
• Each set is individually sorted, and the resulting sorted sequences are merged
to produce a single sorted sequence of n elements.
• Merge is the key operation in merge sort.
• Suppose the (sub)sequence(s) are stored in the array a. Moreover, a[p..q] and
a[q+1..r] are two sorted subsequences.



• Merge(a,p,q,r) merges the two subsequences into the sorted sequence a[p..r]
• Merge(a,p,q,r) takes Θ(r − p + 1) time.

Merge-sort(a,p,r)    // initial call: Merge-sort(a,1,n)
a. if p < r
b.     then q ← ⎿(p+r)/2⏌
c.          Merge-sort(a,p,q)
d.          Merge-sort(a,q+1,r)
e.          Merge(a,p,q,r)

To sort E, X, A, M, P, L, E using Merge sort:

[E, X, A, M, P, L, E]
split: [E, X, A, M] [P, L, E]
split: [E, X] [A, M] [P, L] [E]
split: [E] [X] [A] [M] [P] [L] [E]
merge: [E, X] [A, M] [L, P] [E]
merge: [A, E, M, X] [E, L, P]
merge: [A, E, E, L, M, P, X]
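The split-then-merge steps above can be sketched as follows (0-indexed slices in place of the pseudocode's a[p..q] and a[q+1..r]):

```python
# Merge sort: split in half, sort each half, merge the sorted halves.
def merge_sort(a):
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Merge: repeatedly take the smaller head element.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort(list("EXAMPLE")))
# ['A', 'E', 'E', 'L', 'M', 'P', 'X']
```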

9. Solve the following recurrence using recursion tree method


T(n) = T(n/2) + T(n/4) + T(n/8) + n

(The original shows the recursion tree; only the level sums matter.)
• The root contributes cn; its children contribute cn/2 + cn/4 + cn/8 = (7/8)cn; the depth-2 nodes sum to (7/8)^2 cn, and in general level i costs (7/8)^i cn (until branches start to terminate).
• The shallowest leaves are at depth log8 n (along the n/8 branch, n/8^i = 1 ⟹ i = log8 n) and the deepest at depth log2 n (along the n/2 branch, n/2^i = 1 ⟹ i = log2 n).
• Bounding the total by the infinite geometric series:

T(n) ≤ cn · Σ_{i≥0} (7/8)^i = cn · 1/(1 − 7/8) = 8cn

Hence, T(n) = O(n) (and since the root alone costs n, in fact T(n) = Θ(n)).

2020-21

10. What is a recurrence relation? How is a recurrence solved using the Master's method?
A recurrence relation, T(n), is a recursive function of an integer variable n.
• Like all recursive functions, it has both a recursive case and a base case.
• Example (used later in this unit): T(n) = 2T(n/2) + n for n > 1, with T(1) = 1.
• The portion of the definition that does not contain T is called the base case of the recurrence relation; the portion that contains T is called the recurrent or recursive case.
• Recurrence relations are useful for expressing the running times (i.e., the number of basic operations executed) of recursive algorithms.
Master's Method can be used to solve recurrences of the form
T(n) = a T(n/b) + f(n)

 It describes the running time of an algorithm that divides a problem of size n into a subproblems, each of size n/b
 a ≥ 1 and b > 1 (constants)
 f(n): asymptotically positive
 Requires memorization of three cases
 Floor and ceiling functions are omitted

The recurrence T(n) = a T(n/b) + f(n), written with f(n) = θ(n^k log^p n), can be bounded asymptotically as follows:

T(n) = aT(n/b) + θ(n^k log^p n)
Master's Theorem states that:
 Case 1) If logb a > k, then:
o T(n) = θ(n^(logb a))

 Case 2) If logb a = k, then:
o a) If p > −1, then T(n) = θ(n^k log^(p+1) n)
o b) If p = −1, then T(n) = θ(n^k log log n)
o c) If p < −1, then T(n) = θ(n^k)

 Case 3) If logb a < k, then:
o If p ≥ 0, then T(n) = θ(n^k log^p n)
o If p < 0, then T(n) = θ(n^k)
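The case analysis above can be encoded directly. A small illustrative sketch (the function name and output strings are mine, not from any standard library), for recurrences of the form T(n) = aT(n/b) + n^k log^p n:

```python
# Select and report the Master's Theorem case for T(n) = aT(n/b) + n^k log^p n.
from math import log, isclose

def master(a, b, k, p):
    c = log(a, b)  # c = log_b(a)
    if isclose(c, k):              # Case 2
        if p > -1:
            return f"Theta(n^{k} log^{p + 1} n)"
        if isclose(p, -1):
            return f"Theta(n^{k} loglog n)"
        return f"Theta(n^{k})"
    if c > k:                      # Case 1
        return f"Theta(n^{round(c, 3)})"
    # Case 3: c < k
    return f"Theta(n^{k} log^{p} n)" if p >= 0 else f"Theta(n^{k})"

print(master(4, 2, 2, 0))  # T(n) = 4T(n/2) + n^2
print(master(7, 2, 2, 0))  # T(n) = 7T(n/2) + n^2
```

This reproduces the answers worked out elsewhere in the unit: 4T(n/2) + n^2 falls in case 2 and 7T(n/2) + n^2 in case 1.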

11. What is asymptotic notation? Explain Ω notation.

To study the asymptotic efficiency of algorithms

• We look at input sizes large enough to make only the order of growth of the
running time function relevant.
• That is, we are concerned with how the running time of an algorithm increases
in the limit with the size of the input as the size of the input increases without
bound.
• Usually an algorithm that is asymptotically more efficient will be the best
choice for all but very small inputs.
• Asymptotic complexity defines the running time of an algorithm as a function
of input size n for large n. It is expressed using only the highest-order term in
the expression for the exact running time.
• The notations: Θ, Ω, O, o, ω
• They are defined for functions over the natural numbers.
• Example: f(n) = Θ(n^2) describes how f(n) grows in comparison to n^2.
• Each notation defines a set of functions; in practice they are used to compare the sizes of two functions.
• The different notations describe different rate-of-growth relations between the defining function and the defined set of functions.

Ω notation

For a function g(n), we define Ω(g(n)), big-omega of g of n, as the set: Ω(g(n)) = {f(n) : ∃ positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) ∀ n ≥ n0}

• This is the set of all functions whose rate of growth is the same as or higher than that of g(n); g(n) is an asymptotic lower bound for f(n).

• f(n) = Θ(g(n)) ⟹ f(n) = Ω(g(n)), i.e. Θ(g(n)) ⊆ Ω(g(n)).



12. Solve the recurrence T(n) = 4T(n/2) + n^2

Using the Master's Method: a = 4, b = 2, k = 2, p = 0

logb a = log2 4 = 2 and k = 2

Case 2 applies, and as p > −1, T(n) = θ(n^2 log^(p+1) n) = θ(n^2 lg n)

13. Write an algorithm for counting sort. Illustrate A=[4,0,2,0,1,3,5,4,1,3,2,3]

Counting sort is not a comparison sort and it runs in linear time


a. for i ← 0 to k
b.     do C[i] ← 0
c. for j ← 1 to length[A]
d.     do C[A[j]] ← C[A[j]] + 1
e. // C[i] now contains the number of elements equal to i
f. for i ← 1 to k
g.     do C[i] ← C[i] + C[i−1]
h. // C[i] now contains the number of elements ≤ i
i. for j ← length[A] downto 1
j.     do B[C[A[j]]] ← A[j]
k.        C[A[j]] ← C[A[j]] − 1
A = [4,0,2,0,1,3,5,4,1,3,2,3], n = 12
Highest value in A = 5 = k
Initially C = [0,0,0,0,0,0]  // indices 0 to 5
After the counting loop (for j ← 1 to n: C[A[j]] ← C[A[j]] + 1):
C = [2,2,2,3,2,1]
After the prefix-sum loop (for i ← 1 to k: C[i] ← C[i] + C[i−1]):
C = [2,4,6,9,11,12]



i 1 2 3 4 5 6 7 8 9 10 11 12

A[i] 4 0 2 0 1 3 5 4 1 3 2 3

i 1 2 3 4 5 6 7 8 9 10 11 12

B[i] 0 0 1 1 2 2 3 3 3 4 4 5

C=[2,4,6,9,11,12]

j A[j] C[A[j]] B[C[A[j]]]


12 3 9 B[9]=3
11 2 6 B[6]=2
10 3 8 B[8]=3
9 1 4 B[4]=1
8 4 11 B[11]=4
7 5 12 B[12]=5
6 3 7 B[7]=3
5 1 3 B[3]=1
4 0 2 B[2]=0
3 2 5 B[5]=2
2 0 1 B[1]=0
1 4 10 B[10]=4
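The pseudocode transcribes to runnable form as follows (0-indexed; the right-to-left scan in the last loop is what makes the sort stable):

```python
# Counting sort of integers in the range 0..k, matching the trace above.
def counting_sort(a, k):
    c = [0] * (k + 1)
    for x in a:                  # c[i] = number of elements equal to i
        c[x] += 1
    for i in range(1, k + 1):    # c[i] = number of elements <= i
        c[i] += c[i - 1]
    b = [0] * len(a)
    for x in reversed(a):        # stable: scan the input right to left
        c[x] -= 1
        b[c[x]] = x
    return b

print(counting_sort([4, 0, 2, 0, 1, 3, 5, 4, 1, 3, 2, 3], 5))
# [0, 0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5]
```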

14. Solve the following recurrences:

(i) T(n) = T(n−1) + n^4   (ii) T(n) = T(n/4) + T(n/2) + n^2

(i) T(n) = T(n−1) + n^4

Using the Master's method for decreasing functions with a = 1, b = 1 and f(n) = O(n^4), i.e. k = 4:
If a = 1 then T(n) = O(n^(k+1)) = O(n^5)

(ii) T(n) = T(n/4) + T(n/2) + n^2

Using a recursion tree: the root costs cn^2, its children cost c(n/4)^2 + c(n/2)^2 = (5/16)cn^2, and in general level i costs (5/16)^i cn^2. Summing the geometric series:
T(n) ≤ cn^2 · 1/(1 − 5/16) = (16/11)cn^2
So T(n) = Θ(n^2).


15. Write and analyse Insertion Sort.

INSERTION SORT                                   Cost   No. of times
1 for i ← 2 to length[A]                         c1     n
2 {   key ← A[i]                                 c2     n−1
3     j ← i − 1                                  c3     n−1
4     while j > 0 and A[j] > key                 c4     Σ(i=2 to n) ti
5         do { A[j+1] ← A[j]                     c5     Σ(i=2 to n) (ti − 1)
6              j ← j − 1 }                       c6     Σ(i=2 to n) (ti − 1)
7     A[j+1] ← key }                             c7     n−1
ti: number of times the while-loop test is executed for each value of i
 Total cost T(n) = c1n + c2(n−1) + c3(n−1) + c4 Σti + c5 Σ(ti − 1) + c6 Σ(ti − 1) + c7(n−1)
 Best case:
o The input array is already sorted. Then for each i = 2,3,…,n, key ≥ A[j] when j has its initial value i−1.
o Thus ti = 1 for i = 2,3,…,n, and the loop body (lines 5-6) never runs.
o T(n) = c1n + c2(n−1) + c3(n−1) + c4(n−1) + c5·0 + c6·0 + c7(n−1)
o T(n) = (c1 + c2 + c3 + c4 + c7)n − (c2 + c3 + c4 + c7)
o T(n) = an + b = θ(n); T(n) is a linear function of n.

 Worst case analysis


o The input array is sorted in reverse order. Here we must compare each element A[i] with every element of the sorted subarray A[1..i−1], and so ti = i for i = 2,3,…,n
o We know that Σ(i=2 to n) ti = Σ(i=2 to n) i = n(n+1)/2 − 1, and Σ(i=2 to n) (ti − 1) = Σ(i=2 to n) (i−1) = n(n−1)/2
o In the worst case, the running time of insertion sort is
o T(n) = c1n + c2(n−1) + c3(n−1) + c4[n(n+1)/2 − 1] + c5[n(n−1)/2] + c6[n(n−1)/2] + c7(n−1)

o = an^2 + bn + c, a quadratic function of n
o T(n) = θ(n^2): the order of growth is n^2
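The algorithm analysed above, in runnable 0-indexed form:

```python
# Insertion sort: grow a sorted prefix, inserting each key into place.
def insertion_sort(a):
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(insertion_sort([5, 2, 4, 6, 1, 3]))
# [1, 2, 3, 4, 5, 6]
```

On an already-sorted input the while-loop body never executes, giving the θ(n) best case; on a reversed input every element is shifted, giving the θ(n^2) worst case.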



2021-22
16. (i) Solve the recurrence T(n) = 3T(n/4)+n2 using recursion tree method

(The original shows the recursion tree; suppose n = 4^k.) The root costs cn^2; its 3 children cost c(n/4)^2 each, summing to (3/16)cn^2; the depth-2 nodes sum to (3/16)^2 cn^2, and so on down to the T(1) leaves.

o The subproblem size for a node at depth i is n/4^i
 When the subproblem size is 1: n/4^i = 1 ⟹ i = log4 n
 The tree has log4 n + 1 levels (0, 1, 2, …, log4 n)
o The cost at each level of the tree (levels 0, 1, 2, …, log4 n − 1):
 The number of nodes at depth i is 3^i
 Each node at depth i has a cost of c(n/4^i)^2
 The total cost over all nodes at depth i is 3^i · c(n/4^i)^2 = (3/16)^i cn^2
o The cost at depth log4 n:
 The number of nodes is 3^(log4 n) = n^(log4 3)
 Each contributes cost T(1)
 The total cost = T(1) · n^(log4 3) = θ(n^(log4 3))

T(n) = cn^2 + (3/16)cn^2 + (3/16)^2 cn^2 + … + (3/16)^(log4 n − 1) cn^2 + θ(n^(log4 3))
     = Σ(i=0 to log4 n − 1) (3/16)^i cn^2 + θ(n^(log4 3))
     ≤ Σ(i=0 to ∞) (3/16)^i cn^2 + θ(n^(log4 3))
     = (1 / (1 − 3/16)) cn^2 + θ(n^(log4 3))
     = (16/13) cn^2 + θ(n^(log4 3))
     = O(n^2)



(ii) T(n) = n + 2T(n/2), using the Iteration method with T(1) = 1

Expanding the recurrence by iteration:

T(n) = 2T(n/2) + n
     = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
     = 4[2T(n/8) + n/4] + 2n = 8T(n/8) + 3n
     = 8[2T(n/16) + n/8] + 3n = 16T(n/16) + 4n
     …

After k expansions the general form is:

T(n) = 2^k · T(n/2^k) + k·n

Base case: n/2^k = 1 ⟹ n = 2^k ⟹ k = log2 n

T(n) = n · T(1) + n log2 n = n + n lg n

Therefore T(n) = O(n lg n)

17. Write Merge sort and use it to sort {23,11,5,15,68,31,4,17} Ans 8 [2019-20]

23, 11, 5, 15, 68, 31, 4, 17

23, 11, 5, 15 68, 31, 4, 17

23, 11 5, 15 68, 31 4, 17

23 11 5 15 68 31 4 17

11, 23 5, 15 31, 68 4, 17

5, 11, 15, 23 4, 17, 31, 68

4, 5, 11, 15, 17, 23, 31, 68
18. What do you mean by stable and unstable sorting. Sort A = [25, 57, 48, 36, 12,
91, 86, 2] using Heap sort.

A sorting algorithm is said to be stable if two objects with equal keys appear in the same order in the sorted output as they appear in the input data set.
Informally, stability means that equivalent elements retain their relative positions after sorting.
Examples: Insertion sort, Merge sort, Counting sort

Unstable sorting algorithms do not possess this property.
Examples: Quick sort, Heap sort

(The original shows the heap as a tree at each step.) Build-Max-Heap turns A = [25, 57, 48, 36, 12, 91, 86, 2] into the max-heap [91, 57, 86, 36, 12, 48, 25, 2]. Then, for i = 8 down to 2: exchange A[1] with A[i], reduce the heap size by 1, and call Max-Heapify(A, 1). This extracts 91, 86, 57, 48, 36, 25, 12 in turn.

So the sorted array A = [2, 12, 25, 36, 48, 57, 86, 91]

