DAA UNIT I

This document provides an introduction to algorithms, detailing their specifications, performance analysis, and various techniques for representation and design. It covers key factors in algorithm design, including pseudo code conventions, recursive algorithms, and performance metrics such as time and space complexity. Additionally, it discusses different algorithm design techniques like divide and conquer, greedy methods, and dynamic programming, along with their advantages and disadvantages.


UNIT I

Introduction: Algorithm Specification, Performance Analysis - Space Complexity, Time Complexity, Asymptotic Notations (Big-oh notation, Omega notation, Theta notation).

INTRODUCTION TO ALGORITHMS:
An algorithm is a finite sequence of unambiguous instructions that, when followed, produces the desired outputs from the given inputs. All algorithms must satisfy the following criteria:
1. Input: Zero or more quantities are supplied externally.
2. Output: At least one quantity is produced.
3. Definiteness: Each instruction is clear and unambiguous.
4. Finiteness: The algorithm terminates after a finite number of steps.
5. Effectiveness: Each instruction is basic enough to be carried out exactly, without relying on any external knowledge.
Key factors for designing an algorithm:
1. How to design an algorithm?
2. How to validate an algorithm?
3. How to analyze an algorithm?
4. How to test an algorithm?

Ex 1:
Algorithm LargestNumber
Input: A non-empty list of numbers L.
Output: The largest number in the list L.
largest ← L[0]
for each item in L, do
if item > largest, then
largest ← item
return largest
Ex 2:
Algorithm SumofIndividualDigits
Step 1: Input N
Step 2: Sum = 0
Step 3: While (N != 0)
Rem = N % 10;
Sum = Sum + Rem;
N = N / 10;
Step 4: Print Sum
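The steps above can be sketched as a short Python function (Python is used here purely for illustration; the // operator supplies the integer division that Step 3's N = N / 10 relies on):

```python
def sum_of_digits(n):
    """Sum the individual digits of a non-negative integer n."""
    total = 0                 # Step 2: Sum = 0
    while n != 0:             # Step 3: While (N != 0)
        rem = n % 10          # extract the last digit
        total += rem          # add it to the running sum
        n //= 10              # drop the last digit (integer division)
    return total              # Step 4: Print Sum

print(sum_of_digits(1234))  # → 10
```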

TECHNIQUES TO REPRESENT AN ALGORITHM:


Algorithms can be specified in two ways:
1. Pseudo code Convention
2. Recursive algorithms
Pseudo code Convention:
Pseudo code describes an algorithm in English-like statements; graphical representations of algorithms are called flowcharts. An algorithm differs from a program in that it is language independent. An algorithm consists of a name and a body.
Rules:
1. An algorithm is a procedure consisting of a heading and a body.
2. Comment line begins with //.
3. The compound statements should be in between { }.
4. Identifiers should begin with a letter. The data types of variables are not explicitly
declared.
5. Statements are delimited by a ;
6. Assignment of values to variables are done by using :=.
Ex: a:=10;
7. For producing true or false values, the logical operators and, or, not and the relational
operators <, <=, >, >=, = and ≠ are used.
8. Elements of array are accessed using [ ].
9. For, while, repeat..until loops are used for looping.
a. for var:= v1 to v2 step <<stepvalue>> do
{
Statement block;
}
Here v1, v2 and stepvalue are arithmetic expressions; v1 and v2 may be integer, real or other numeric values. The step clause is optional; if it is not specified, the step value is taken as +1.
Ex: Algorithm for Sum of ‘n’ numbers.
b. while (condition) do
{
Statement block;
}
The statement block is executed repeatedly as long as the given condition is true. The condition is evaluated at the beginning of each iteration.
Ex: Algorithm for Sum of ‘n’ Even numbers.
c. repeat
{
Statement block;
} until (condition);
The statement block is executed repeatedly as long as the given condition is false. The condition is evaluated after the block, so the block executes at least once.
Ex: Algorithm for finding factorial of a given number.

10. A conditional statement has the following forms;


a. If (condition) then { Statements; }
b. If (condition) then { Statements; } else { Statements; }
11. Inputs and outputs are done using read and write.
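As an illustrative sketch (not part of the original conventions), the three loop forms of rule 9 can be emulated in Python using the suggested examples — sum of n numbers, sum of n even numbers, and factorial. Python has no repeat..until, so a `while True` loop with a terminating test plays that role; all function names here are invented for this sketch.

```python
def sum_of_n(n):
    # for-loop form: sum of the first n numbers
    total = 0
    for i in range(1, n + 1):   # for i := 1 to n do
        total += i
    return total

def sum_of_even(n):
    # while-loop form: sum of the first n even numbers;
    # the condition is checked before each iteration
    total, count, i = 0, 0, 2
    while count < n:
        total += i
        i += 2
        count += 1
    return total

def factorial(n):
    # repeat..until form: the body runs at least once,
    # and the condition is checked after the body
    result, i = 1, 1
    while True:
        result *= i
        i += 1
        if i > n:               # until (i > n)
            break
    return result
```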
Algorithms can be specified as follows;
Algorithm name (parameter list)
{
Statements;
}
Where,
Name specifies the name of the algorithm and parameter list is the list of its parameters. The body of the algorithm consists of one or more statements enclosed within curly braces.
Ex: algorithm for finding and returning maximum value from ‘n’ given numbers.
1. Algorithm max(A, n)
2. {
3. Result := A[1];
4. for i := 2 to n do
5. if A[i] > Result then Result := A[i];
6. return Result;
7. }
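A direct Python rendering of the max algorithm, for comparison (0-based list indexing replaces the pseudocode's 1-based arrays; the function name find_max is illustrative):

```python
def find_max(a):
    """Return the maximum value in a non-empty list a."""
    result = a[0]               # Result := A[1]
    for i in range(1, len(a)):  # for i := 2 to n do
        if a[i] > result:       # keep the larger value seen so far
            result = a[i]
    return result

print(find_max([3, 7, 2, 9, 4]))  # → 9
```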

Recursive Algorithms:
An algorithm that calls itself is said to be recursive.
Types:
1. Direct recursion: An algorithm that calls itself directly is said to be directly recursive.
Ex: int fun(int x)
{
if (x<=0)
return x;
else
return (fun(x-1));
}
2. Indirect recursion: Algorithm A is said to be indirectly recursive if it calls another
algorithm which in turn calls A.
Ex: int fun(int x)
{
if(x<=0)
return x;
return (fun1(x-1));
}
int fun1(int y)
{
return fun(y);
}
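The same two patterns as runnable Python (the names fun, fun_a and fun_b are illustrative; fun_a and fun_b stand in for the mutually recursive pair above):

```python
def fun(x):
    """Direct recursion: fun calls itself until x reaches 0."""
    if x <= 0:
        return x
    return fun(x - 1)

def fun_a(x):
    """Indirect recursion: fun_a calls fun_b, which calls fun_a."""
    if x <= 0:
        return x
    return fun_b(x - 1)

def fun_b(y):
    return fun_a(y)
```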
Advantages of pseudo code conventions over flowcharts:
a. It is very much similar to the final program code.
b. It requires less time and space to develop
c. We can write it in our own way as there are no fixed rules.
d. It can be read and understood easily.
e. Converting a pseudocode to programming language is very easy as compared with
converting a flowchart to programming language.

Disadvantages :
• It is not visual.
• We do not get a picture of the design.
• There is no standardised style or format, so one pseudo code may differ from another.
• For a beginner, it is more difficult to follow the logic or to write pseudo code as compared to a flowchart.

TECHNIQUES TO DESIGN AN ALGORITHM:

1. Divide and Conquer:


It follows a top-down approach to solve a problem. This technique solves a problem in 3
steps. They are;
a. Divide: the original problem is divided into sub problems.
b. Conquer: every sub problem is solved individually.
c. Combine: the solutions of the sub problems are merged to get the solution of the
original problem.
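Merge sort, sketched below in Python as a typical illustration (it does not appear in the text itself; the function name is invented here), follows exactly these three steps:

```python
def merge_sort(a):
    """Sort a list using divide and conquer."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])    # Divide + Conquer: left half
    right = merge_sort(a[mid:])   # Divide + Conquer: right half
    # Combine: merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```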

2. Greedy Technique:
This method is used to solve optimization problems. It does not always guarantee the optimal solution, but it generally produces solutions that are very close to the optimal value.
3. Dynamic Programming:
It is similar to divide and conquer but follows a bottom-up approach to solve a problem.
In dynamic programming the results obtained from smaller sub problems are reused in
the calculation of larger sub problems. This avoids re-computation and reduces the time
needed to solve a problem.
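A minimal illustration (added here, not from the text): computing Fibonacci numbers bottom-up, where each larger subproblem reuses the stored results of smaller ones instead of recomputing them:

```python
def fib(n):
    """Bottom-up dynamic programming for Fibonacci numbers."""
    if n <= 1:
        return n
    table = [0] * (n + 1)   # table[i] holds the solution to subproblem i
    table[1] = 1
    for i in range(2, n + 1):
        # each larger subproblem reuses the two smaller stored results
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```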

4. Backtracking:
Backtracking performs a Depth First Search over the set of possible solutions. During
the search, if an alternative doesn't work, the search backtracks and tries the next
alternative.

5. Branch and Bound:


It is a general optimization technique that applies where Greedy and Dynamic
Programming fail. Branch and Bound uses Breadth First Search, and it is the most
widely used tool for solving large scale NP-Hard optimization problems.

PERFORMANCE ANALYSIS:
Algorithm analysis refers to the task of determining computing time and storage space
requirements of an algorithm. It is also known as performance analysis. It enables us to select
an efficient algorithm. The analysis can be computed in two phases. They are;
1. A priori analysis: the theoretical analysis of an algorithm. Efficiency is measured by
assuming that factors such as processor speed are constant and have no effect on the
implementation.
2. A posteriori analysis: the selected algorithm is implemented in a programming
language and executed on a target machine. The actual time and space requirements
are collected. It is also known as empirical analysis.

Performance of an algorithm can be measured in the following ways;


• Space Complexity
• Time Complexity
• Amortized Complexity
• Asymptotic Notation
• Performance Measurement
Space Complexity:
When we design an algorithm, space is required to;
• Store program instructions
• Store constant values
• Store variable values
Space Requirement:
• Instruction Space: the memory used to store the compiled version of the instructions.
• Environmental Stack: the memory used to store details about partially executed functions.
• Data Space: the memory used to store variables and constants.

S(p) = C + Sp (instance characteristics)


Where,
C= Constant
Sp = Variable Space
The space needed by each algorithm is the sum of the following components;
a. Fixed Part: It is independent of number and size of inputs and outputs. This part
typically includes instruction space, space of simple variables, fixed-size components
and space for constants, etc.
b. Variable Part: It consists of space needed by component variable, reference variables
and the recursion stack space.

If the memory required by an algorithm grows in proportion to the input size, the
space complexity is said to be linear.

Ex 1:
1. Algorithm Sum (a, b, c)
2. {
3. return (a+b+c);
4. }
If each variable needs one word then S(P) = 3.
Space Complexity = O(1)
Ex 2:
1. Algorithm Sum (a, n)
2. {
3. s := 0.0;
4. for i := 1 to n do
5. s:= s + a[i];
6. return s;
7. }
Space for the algorithm is;
Size of variable n - 1 word
Size of array a - n words
Loop variable i - 1 word
Sum variable s - 1 word

Total = n+3 words


Therefore S(p) >= (n+3)
Space Complexity = O(n)
Ex 3: Recursion
1. Algorithm Rsum(a, n)
2. { if (n <= 0) then return 0.0;
3. else return Rsum(a, n-1) + a[n];
4. }
Space for the algorithm is;
Return address - 1 word
Pointer to a - 1 word
Local variable n - 1 word
Each recursion needs 3 words.
Depth of recursion is n+1.
Therefore S(p) >= 3(n+1)
Time Complexity:
Time complexity of an algorithm is the amount of computing time it needs to run to complete
its task. It is the sum of compile time and execution time.
Compile Time:
• Does not depend on instance characteristics.
• Once compiled, the program can be run several times without recompilation.
Execution Time:
• Depends on the particular problem instance, i.e. the inputs and outputs.
T(p) = C + tp
Where,
C = Compile Time, tp = Run Time
Calculation of Time Complexity:
1. Count the number of steps in each iteration in the program execution.
2. Count only executable statements.
Ex:
Comments - 0 steps
Assignment - 1 step
Condition - 1 step
Loop - (n+1) steps
Body of the loop - n steps
If the time required by an algorithm grows in proportion to the input size, the
time complexity is said to be linear.
Ex 1:
St.No.  Statement             S/E  Frequency  Total Steps
1       Algorithm Sum(a, n)   0    -          0
2       {                     0    -          0
3       S := 0;               1    1          1
4       for i := 1 to n do    1    n+1        n+1
5       S := S + a[i];        1    n          n
6       return S;             1    1          1
7       }                     0    -          0
Total Number of Steps: 2n+3
(S/E: 0 = not executed, 1 = executed, counted per execution)
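The 2n + 3 count can be checked empirically. The instrumented Python sketch below (illustrative, not part of the text) adds one to a counter for each executed step — the initial assignment, every loop-header test, every loop-body assignment, and the return:

```python
def sum_with_steps(a):
    """Sum a list while counting executed steps as in the frequency table."""
    steps = 0
    s = 0; steps += 1              # S := 0            -> 1 step
    i = 0
    while True:
        steps += 1                 # loop-header test  -> n + 1 steps
        if i >= len(a):
            break
        s += a[i]; steps += 1      # S := S + a[i]     -> n steps
        i += 1
    steps += 1                     # return S          -> 1 step
    return s, steps

total, steps = sum_with_steps([1, 2, 3, 4])  # n = 4
# steps == 2*4 + 3 == 11
```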
Ex 2:
St.No.  Statement                       S/E  Frequency  Total Steps
1       Algorithm Add(a, b, c, m, n)    0    -          0
2       {                               0    -          0
3       for i := 1 to m do              1    m+1        m+1
4       for j := 1 to n do              1    m(n+1)     mn+m
5       c[i][j] := a[i][j] + b[i][j];   1    mn         mn
6       }                               0    -          0
Total Number of Steps: 2mn+2m+1

Ex 3: Recursion
St.No.  Statement                     S/E  Frequency (n=0 / n>0)  Total Steps (n=0 / n>0)
1       Algorithm Rsum(a, n)          0    -  / -                 0 / 0
2       {                             0    -  / -                 0 / 0
3       if (n<=0) then                1    1  / 1                 1 / 1
4       return 0;                     1    1  / 0                 1 / 0
5       else                          0    0  / -                 0 / 0
6       return Rsum(a, n-1) + a[n];   1    0  / 1+x               0 / 1+x
7       }                             0    -  / -                 0 / 0
Total Number of Steps: 2 (n=0), 2+x (n>0)
T(n) = 2 for n = 0 and T(n) = 2 + x for n > 0, where x is the step count of the recursive call Rsum(a, n-1).
Ex 4: Fibonacci Series
St.No.  Statement            S/E  Frequency (n<=1 / n>1)  Total Steps (n<=1 / n>1)
1       Algorithm Fib(n)     0    -  / -                  0 / 0
2       {                    0    -  / -                  0 / 0
3       if (n<=1) then       1    1  / 1                  1 / 1
4       write 1;             1    1  / 0                  1 / 0
5       else                 0    -  / -                  0 / 0
6       {                    0    0  / 0                  0 / 0
7       f2:=0; f1:=1;        2    0  / 1                  0 / 2
8       for i:=2 to n do     1    0  / n                  0 / n
9       {                    0    0  / 0                  0 / 0
10      f := f1 + f2;        1    0  / n-1                0 / n-1
11      f2:=f1; f1:=f;       2    0  / n-1                0 / 2(n-1)
12      }                    0    0  / 0                  0 / 0
13      write(f);            1    0  / 1                  0 / 1
14      }                    0    0  / 0                  0 / 0
15      }                    0    -  / -                  0 / 0
Total Number of Steps: 2 (n<=1), 4n+1 (n>1)
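The 4n + 1 total for n > 1 can likewise be checked with an instrumented sketch (illustrative; it counts one step per condition test, per assignment, per loop-header test, and for the final write, and returns the value that the algorithm would write):

```python
def fib_with_steps(n):
    """Fib from the table above, counting executed steps."""
    steps = 0
    steps += 1                      # if (n <= 1) test
    if n <= 1:
        steps += 1                  # write 1
        return 1, steps
    f2, f1 = 0, 1; steps += 2       # f2 := 0; f1 := 1  -> 2 steps
    i = 2
    while True:
        steps += 1                  # loop-header test  -> n-1 iterations + 1 exit
        if i > n:
            break
        f = f1 + f2; steps += 1     # f := f1 + f2
        f2, f1 = f1, f; steps += 2  # f2 := f1; f1 := f -> 2 steps
        i += 1
    steps += 1                      # write(f)
    return f, steps

val, steps = fib_with_steps(5)
# steps == 4*5 + 1 == 21
```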

Exercise :
Write an algorithm to find the sum of first n integers and derive its time complexity.
Amortized Complexity:
It is the total expense per operation evaluated over a sequence of operations. The popular
methods to evaluate amortized costs for operations are;
• Aggregate Method
• Accounting Method
• Potential Method
Asymptotic Notation:
Time complexity is represented by using asymptotic notations. It calculates;
• Best Case Complexity: the time / space required by the algorithm is the lowest.
Ex: Sort the numbers 1 2 3 4 5 6 in ascending order.
Here, all the numbers are already in ascending order, hence the algorithm takes the minimum time.
• Average Case Complexity: the time / space requirement is more than the best case but less than the worst case.
Ex: Sort the numbers 1 2 3 5 4 6 in ascending order.
Here, a few numbers are already in ascending order and a few are not; hence the algorithm takes more time than in the best case.
• Worst Case Complexity: the time / space requirement is more than the average case.
Ex: Sort the numbers 6 5 4 3 2 1 in ascending order.
Here, all the numbers are unsorted (in reverse order), hence the algorithm takes the maximum time.

Big-Oh (O): It is a method of representing an upper bound on an algorithm's running time. (Worst Case)
Definition: Let f and g be nonnegative functions. The function f(n) = O(g(n)) iff there exist
positive constants c and n0 such that f(n) <= c * g(n) for all n >= n0.

Big-oh expressions do not contain constants or low-order terms. Here f(n) is the exact
complexity of an algorithm as a function of the problem size n, and g(n) is an upper bound
on that complexity. Many upper bounds may exist; take the least upper bound g(n) among them.
Ex: f(n) = 3n² + 5
Condition: f(n) <= c * g(n) for c > 0 and n >= n0, with g(n) = n².
3n² + 5 <= c · n²
Try c values starting from 1; c = 4 works. Check n values starting from 1:
n=1: 3(1²)+5 ≤ 4(1²) ⇒ 8 ≤ 4 (F)
n=2: 3(2²)+5 ≤ 4(2²) ⇒ 17 ≤ 16 (F)
n=3: 3(3²)+5 ≤ 4(3²) ⇒ 32 ≤ 36 (T)
f(n) <= c * g(n) holds for c = 4 and n >= 3, hence f(n) = O(n²).
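The search for suitable constants can be mechanized. The short helper below (find_n0 is a name invented for this sketch; it only scans a finite test range, so it checks rather than proves the bound) finds the smallest n0 at which f(n) <= c * g(n) starts to hold:

```python
def find_n0(f, g, c, limit=100):
    """Smallest n0 in [1, limit] from which f(n) <= c*g(n) holds up to limit."""
    for n0 in range(1, limit + 1):
        if all(f(n) <= c * g(n) for n in range(n0, limit + 1)):
            return n0
    return None  # no n0 found within the tested range

f = lambda n: 3 * n**2 + 5
g = lambda n: n**2
print(find_n0(f, g, 4))  # → 3: the bound holds for c = 4, n >= 3
```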

Example 1:
Let's consider f(n) = 6n³ + 11n² + 1.
To calculate the big O notation, we must find an upper bound of this function. The upper
bound is determined by the highest power of n.
As we know,
f(n) = 6n³ + 11n² + 1 <= 18n³ for all n >= 1, hence f(n) can be denoted as O(n³).
Example 2:
Let's consider f(n) = log(nⁿ).
As we know,
f(n) = log(nⁿ) = n log n, hence f(n) can be denoted as O(n log n).

Example 3:
f(n) = 3n² + 4n + 1. Show that f(n) is O(n²).
4n <= 4n² for all n >= 1
and 1 <= n² for all n >= 1,
so 3n² + 4n + 1 <= 3n² + 4n² + n² for all n >= 1
<= 8n² for all n >= 1.
We have shown f(n) <= 8n² for all n >= 1,
so f(n) is O(n²). (c = 8, n0 = 1)
Omega (Ω): This is used to represent a lower bound on an algorithm's running time. It
denotes the shortest amount of time taken by the algorithm. (Best Case)
Definition: Let f and g be nonnegative functions. The function f(n) = Ω(g(n)) iff there exist
positive constants c and n0 such that f(n) >= c * g(n) for all n >= n0.

Example 1:
Let's consider f(n) = 6n³ + 11n² + 1.
To calculate the big Ω notation, we must find a lower bound of this function.
As we know,
f(n) = 6n³ + 11n² + 1 >= 11n² for all n >= 1, hence f(n) can be denoted as Ω(n²).

Example 2:
Let's consider f(n) = log(nⁿ).
As we know,
f(n) = log(nⁿ) = n log n, hence f(n) can be denoted as Ω(n log n).
Theta (Θ): This is used to represent a tight bound on an algorithm's running time,
bounding it from both above and below. (Average Case)
Definition: Let f and g be nonnegative functions. The function f(n) = Θ(g(n)) iff there exist
positive constants c1, c2 and n0 such that c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.

Example 1:
Let's consider f(n) = 6n³ + 11n² + 1.
To calculate the big Θ notation, we must bound the function from below and above.
As we know,
6n³ <= 6n³ + 11n² + 1 <= 18n³ for all n >= 1, hence f(n) can be denoted as Θ(n³).
Example 2:
Let's consider f(n) = log(nⁿ).
As we know,
f(n) = log(nⁿ) = n log n, and n log n bounds itself from below and above with c1 = c2 = 1, hence f(n) can be denoted as Θ(n log n).
Little-Oh (o):
It is a theoretical measure of the execution of an algorithm, usually the time or memory needed,
given the problem size n, which is usually the number of items.

Definition: Let f and g be nonnegative functions. The function f(n) = o(g(n)) iff f(n) < c * g(n)
for every positive constant c > 0 and all n > n0 (where n0 may depend on c).

O(g(n)) is the set of all functions with a smaller or the same order of growth as g(n).
Ex: O(n²) = {n², 100n+5, log n, ...}
o(g(n)) is the set of all functions with a strictly smaller rate of growth than g(n).
Ex: o(n²) = {100n+5, log n, ...}

Important Questions
1. Explain asymptotic notations with examples.
2. Analyze performance analysis of algorithm and devise matrix multiplication algorithm
and give time complexity.
3. Analyze algorithm specification and explain properties with an example.
4. Define time and space complexity. Describe different notations used to represent these
complexities.
5. Write different pseudo code conventions used to represent an algorithm.
6. Give the algorithm for addition of two matrices and determine the time complexity of
this algorithm by the frequency-count method.
