DAA UNIT I
INTRODUCTION TO ALGORITHMS:
An algorithm is a description of a finite sequence of steps that produces the desired
outputs from the given inputs. All algorithms must satisfy the following criteria.
1. Input: Zero or more inputs will be supplied externally.
2. Output: At least one quantity will be produced.
3. Definiteness: Each instruction should be clear and unambiguous.
4. Finiteness: The algorithm should terminate after a finite number of steps.
5. Effectiveness: Each instruction must be basic enough to be carried out exactly, without
relying on any external knowledge to understand it.
Key factors for designing an algorithm:
1. How to design an algorithm?
2. How to validate an algorithm?
3. How to analyze an algorithm?
4. How to test an algorithm?
Ex 1:
Algorithm LargestNumber
Input: A non-empty list of numbers L.
Output: The largest number in the list L.
    largest ← L[0]
    for each remaining item in L do
        if item > largest then
            largest ← item
    return largest
Ex 2:
Algorithm SumofIndividualDigits
Step 1: Input N
Step 2: Sum = 0
Step 3: While (N != 0)
Rem = N % 10;
Sum = Sum + Rem;
N = N / 10;
Step 4: Print Sum
Recursive Algorithms:
An algorithm that calls itself is said to be recursive.
Types:
1. Direct recursion: An algorithm is directly recursive if it calls itself.
Ex: int fun(int x)
{
if (x<=0)
return x;
else
return (fun(x-1));
}
2. Indirect recursion: Algorithm A is said to be indirectly recursive if it calls another
algorithm which in turn calls A.
Ex: int fun(int x)
{
if(x<=0)
return x;
return (fun1(x-1));
}
int fun1(int y)
{
return fun(y);
}
Advantages of pseudo code conventions over flow charts:
The advantages of pseudocode over a flowchart are:
a. It is very much similar to the final program code.
b. It requires less time and space to develop
c. We can write it in our own way as there are no fixed rules.
d. It can be read and understood easily.
e. Converting a pseudocode to programming language is very easy as compared with
converting a flowchart to programming language.
Disadvantages:
It is not visual; we do not get a picture of the design.
There is no standardised style or format, so one pseudocode may differ from another.
For a beginner, it is more difficult to follow the logic or to write pseudocode than to
draw a flowchart.
2. Greedy Technique:
This method is used to solve optimization problems. It builds a solution step by step,
choosing the locally best option at each step. It does not always guarantee the optimal
solution, but it generally produces solutions that are very close to the optimal value.
3. Dynamic Programming:
It is similar to divide and conquer but follows a bottom-up approach to solve a problem.
In dynamic programming, the results obtained from smaller subproblems are reused in
the calculation of larger subproblems. This avoids re-computation, which reduces the
time needed to solve a problem.
4. Backtracking:
Backtracking performs a depth-first search over the set of possible solutions. During the
search, if an alternative does not work, the search backtracks and tries the next
alternative.
PERFORMANCE ANALYSIS:
Algorithm analysis refers to the task of determining computing time and storage space
requirements of an algorithm. It is also known as performance analysis. It enables us to select
an efficient algorithm. The analysis can be computed in two phases. They are;
1. Priori analysis: This is the theoretical analysis of an algorithm. Efficiency is
measured by assuming that factors like processor speed are constant and have no effect
on the implementation.
2. Posteriori analysis: The selected algorithm is implemented in a programming
language and executed on a target machine. In this analysis the actual time and space
requirements are collected. It is also known as empirical analysis.
Space Complexity: The space complexity of an algorithm is the amount of memory it
requires. If the memory required by an algorithm grows as the input size grows, the
space complexity is said to be linear.
Ex 1:
1. Algorithm Sum (a, b, c)
2. {
3. return (a+b+c);
4. }
If each variable needs one word then S(P) = 3.
Space Complexity = O(1)
Ex 2:
1. Algorithm Sum (a, n)
2. {
3. s := 0.0;
4. for i := 1 to n do
5. s:= s + a[i];
6. return s;
7. }
Space for the algorithm is;
Size of variable n - 1 word
Size of array a - n words
Loop variable i - 1 word
Sum variable s - 1 word
Total: S(P) = n + 3 words, so the space complexity is O(n).
Ex 3: Recursion

St.No.  Statement                        S/E   Freq(n=0)  Freq(n>0)  Steps(n=0)  Steps(n>0)
1       Algorithm Rsum(a, n)             0     -          -          0           0
2       {                                0     -          -          0           0
3       if (n <= 0) then                 1     1          1          1           1
4       return 0;                        1     1          0          1           0
5       else                             0     -          -          0           0
6       return Rsum(a, n-1) + a[n];      1+x   0          1          0           1+x
7       }                                0     -          -          0           0
Total number of steps:                                               2           2+x

Here x denotes the step count of the recursive call Rsum(a, n-1).
T(n) = 2 for n = 0, and T(n) = 2 + x = 2 + T(n-1) for n > 0.
Ex 4: Fibonacci Series

St.No.  Statement              S/E   Freq(n<=1)  Freq(n>1)  Steps(n<=1)  Steps(n>1)
1       Algorithm Fib(n)       0     -           -          0            0
2       {                      0     -           -          0            0
3       if (n <= 1) then       1     1           1          1            1
4       write 1;               1     1           0          1            0
5       else                   0     -           -          0            0
6       {                      0     -           -          0            0
7       f2 := 0; f1 := 1;      2     0           1          0            2
8       for i := 2 to n do     1     0           n          0            n
9       {                      0     -           -          0            0
10      f := f1 + f2;          1     0           n-1        0            n-1
11      f2 := f1; f1 := f;     2     0           n-1        0            2(n-1)
12      }                      0     -           -          0            0
13      write (f);             1     0           1          0            1
14      }                      0     -           -          0            0
15      }                      0     -           -          0            0
Total number of steps:                                      2            4n+1
Exercise :
Write an algorithm to find the sum of first n integers and derive its time complexity.
Amortized Complexity:
It is the average cost per operation, evaluated over a whole sequence of operations. The
popular methods to evaluate amortized costs for operations are;
Aggregate Method
Accounting Method
Potential Method
Asymptotic Notation:
Time complexity is represented by using asymptotic notations. It calculates;
Best Case Complexity: In this case the time / space required by a specific algorithm is
the minimum.
Ex: Sort the numbers 1 2 3 4 5 6 in ascending order
Here, all the numbers are already in ascending order, hence the algorithm will take the
minimum time.
Average Case Complexity: In this case the time / space requirement is more than the
best case but less than the worst case for the specific algorithm.
Ex: Sort the numbers 1 2 3 5 4 6 in ascending order
Here, few numbers are already in ascending order and few numbers are unsorted; hence
the algorithm will take more time when compared to best case.
Worst Case Complexity: In this case the time / space requirement is more than the
average case.
Ex: Sort the numbers 6 5 4 3 2 1 in ascending order
Here, all the numbers are unsorted, hence the algorithm will take maximum time.
Big-Oh (O): It is a method of representing upper bound of algorithms running time. (Worst
Case)
Definition: Let f and g be nonnegative functions. The function f(n) = O(g(n)) iff there exist
positive constants c and n0 such that f(n) <= c * g(n) for all n >= n0.
Big-oh expressions do not carry constants or low-order terms. Here f(n) is the exact
complexity of an algorithm as a function of the problem size n, and g(n) is an upper
bound on that complexity. Many upper bounds may exist; take the least upper bound
g(n) among them.
Ex: f(n) = 3n^2 + 5
Condition: f(n) <= c * g(n) for some c > 0 and all n >= n0.
Take g(n) = n^2 and try c = 4, checking n from 1:
n = 1: 3(1)^2 + 5 <= 4(1)^2, i.e. 8 <= 4 (F)
n = 2: 3(2)^2 + 5 <= 4(2)^2, i.e. 17 <= 16 (F)
n = 3: 3(3)^2 + 5 <= 4(3)^2, i.e. 32 <= 36 (T)
So f(n) <= c * g(n) holds for c = 4 and all n >= 3; hence f(n) = O(n^2).
Example 1:
Consider f(n) = 6n^3 + 11n^2 + 1.
To find the big-O bound we need an upper bound on this function, which is determined by the
highest power of n.
Since 6n^3 + 11n^2 + 1 <= 18n^3 for all n >= 1, f(n) = O(n^3).
Example 2:
Consider f(n) = log(n^n).
Since log(n^n) = n log n, f(n) = O(n log n).
Example 3:
f(n) = 3n^2 + 4n + 1. Show f(n) is O(n^2).
4n <= 4n^2 for all n >= 1, and 1 <= n^2 for all n >= 1,
so 3n^2 + 4n + 1 <= 3n^2 + 4n^2 + n^2 = 8n^2 for all n >= 1;
hence f(n) = O(n^2) with c = 8 and n0 = 1.
Big-Omega (Ω): It is a method of representing the lower bound of an algorithm's running
time. (Best Case)
Definition: Let f and g be nonnegative functions. The function f(n) = Ω(g(n)) iff there exist
positive constants c and n0 such that f(n) >= c * g(n) for all n >= n0.
Example 1:
Consider f(n) = 6n^3 + 11n^2 + 1.
To find the big-Ω bound we need a lower bound on this function, which is determined by the
highest power of n.
Since 6n^3 + 11n^2 + 1 >= 6n^3 for all n >= 1, f(n) = Ω(n^3).
Example 2:
Consider f(n) = log(n^n).
Since log(n^n) = n log n, f(n) = Ω(n log n).
Theta (Ө): This is used to represent a tight bound on an algorithm's running time. It
denotes the average amount of time taken by the algorithm. (Average Case)
Definition: Let f and g be nonnegative functions. The function f(n) = Ө(g(n)) iff there exist
positive constants c1, c2 and n0 such that c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.
Example 1:
Consider f(n) = 6n^3 + 11n^2 + 1.
Since 6n^3 <= 6n^3 + 11n^2 + 1 <= 18n^3 for all n >= 1, we can take c1 = 6, c2 = 18 and
g(n) = n^3; hence f(n) = Ө(n^3).
Example 2:
Consider f(n) = log(n^n).
Since log(n^n) = n log n, f(n) = Ө(n log n).
Little-Oh (o):
It denotes a strict upper bound: f(n) grows strictly more slowly than g(n).
Definition: Let f and g be nonnegative functions. The function f(n) = o(g(n)) iff for every
positive constant c > 0 there exists an n0 > 0 such that f(n) < c * g(n) for all n > n0.
Big-Oh O(g(n)) is the set of all functions with a smaller or the same order of growth as g(n).
Ex: O(n^2) = {n^2, 100n+5, log n, ...}
Little-oh o(g(n)) is the set of all functions with a strictly smaller order of growth than g(n).
Ex: o(n^2) = {100n+5, log n, ...}
Important Questions
1. Explain asymptotic notations with examples.
2. Analyze performance analysis of algorithm and devise matrix multiplication algorithm
and give time complexity.
3. Analyze algorithm specification and explain properties with an example.
4. Define time and space complexity. Describe different notations used to represent these
complexities.
5. Write different pseudo code conventions used to represent an algorithm.
6. Give the algorithm for addition of two matrices and determine the time complexity of
this algorithm by frequency – count method.