
DESIGN AND ANALYSIS OF ALGORITHMS

Module-1

What is an Algorithm?
An algorithm is a finite sequence of well-defined instructions to solve a
problem.
What are the characteristics of an algorithm?
1. Unambiguous− The algorithm should be clear and unambiguous. Each of its steps
and their inputs/outputs should be clear and must lead to only one meaning.
2. Input− An algorithm should have zero or more well-defined inputs.
3. Output− An algorithm should have one or more well-defined outputs that match
the desired output.
4. Finiteness− An algorithm must terminate after a finite number of steps. It
should not run forever.
5. Representation− The same algorithm can be represented in several different
ways. Pseudocode, a mixture of natural language and programming language
constructs, is the most common way of representing an algorithm. A flowchart
is another way of representing an algorithm.
6. Feasibility− The algorithm should be feasible with the available resources.
7. Definiteness− An algorithm must specify every step and the order in which the
steps are performed.
8. Non-uniqueness− Several algorithms may exist for solving the same problem.

What are the properties of a good algorithm?


1. Correctness – The algorithm should always produce the required result for all
valid inputs, and it should handle special cases and unusual inputs gracefully.
2. Efficiency – There are two kinds of algorithm efficiency, namely time efficiency
and space efficiency. Time efficiency indicates how fast the algorithm runs;
space efficiency indicates the amount of memory it uses. The algorithm should
solve the problem in the least possible time and least possible space.
3. Finiteness – The algorithm must terminate for all possible inputs. It should not
run indefinitely or enter an infinite loop, as this would render it unusable in
practice.
4. Generality – The algorithm should address the problem in general. It should not
be confined to specific cases of the problem.

What is algorithm design technique?


An algorithm design technique is a general approach to solving problems
algorithmically that is applicable to a variety of problems from different areas of
computing. Following are some of the main algorithm design techniques:
1. Brute force or exhaustive search
2. Divide and conquer
3. Greedy algorithms
4. Dynamic programming
5. Backtracking
6. Branch and bound
7. Randomized algorithms

Briefly explain various algorithm design techniques


Brute-force technique
The brute force technique solves a problem by enumerating all possible
candidate solutions and checking each one. The brute force method is well suited to
small, simple problems, but because the number of candidates can grow very quickly,
brute force algorithms are usually slow and inefficient on large inputs.
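As an illustration (a Python sketch, not part of the original notes; the function name is my own), a brute-force check for a pair of numbers with a given sum simply enumerates every candidate pair:

```python
def has_pair_with_sum(nums, target):
    """Brute force: enumerate every pair of indices and test each candidate."""
    n = len(nums)
    for i in range(n):
        for j in range(i + 1, n):   # all pairs (i, j) with i < j
            if nums[i] + nums[j] == target:
                return True
    return False
```

With n numbers there are n(n-1)/2 pairs to try, which is why brute force quickly becomes slow as the input grows.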

Divide and conquer


A divide-and-conquer algorithm recursively breaks down a problem into two
or more sub-problems of the same or related type, until these become simple enough
to be solved directly. The solutions to the sub-problems are then combined to give a
solution to the original problem.
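Merge sort is a standard example of this pattern; a minimal Python sketch (my own illustration, not prescribed by the notes):

```python
def merge_sort(a):
    # Divide: split until sublists of length <= 1 are trivially sorted.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])
    right = merge_sort(a[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```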

Greedy Algorithms
A greedy algorithm works step by step, always choosing the step that
provides the most immediate profit or benefit. It chooses the locally optimal option,
without thinking about future consequences. Greedy algorithms may not always lead
to the globally optimal solution, because they never consider the problem as a whole.
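The classic coin-change example shows both the idea and the pitfall. A sketch (my own, assuming the well-known coin system {25, 10, 1}): for amount 30, greedy picks 25 first and ends with six coins, while 10+10+10 uses only three.

```python
def greedy_coin_change(amount, coins):
    """At each step take the largest coin that still fits: the locally best choice."""
    picked = []
    for c in sorted(coins, reverse=True):
        while amount >= c:
            picked.append(c)
            amount -= c
    return picked  # may be suboptimal for some coin systems
```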

Dynamic Programming
Dynamic Programming is a technique in computer programming that helps to
efficiently solve a class of problems that have the overlapping sub-problems and
optimal substructure properties. A problem is said to have optimal substructure if an
optimal solution can be constructed from optimal solutions of its sub-problems. The
solutions to these sub-problems can be saved for future reference (memoization) so
that when they are required again, they are at hand and we do not need to recalculate
them. This method of solving a problem is referred to as dynamic programming.
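The Fibonacci numbers are the standard small example of overlapping sub-problems; a memoized sketch in Python (my own illustration):

```python
def fib(n, memo=None):
    """fib(n) reuses fib(n-1) and fib(n-2); the memo dictionary caches
    each sub-problem's answer so it is computed only once (memoization)."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]
```

Without the memo the same sub-problems are recomputed exponentially many times; with it, each value from 2 to n is computed once.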

Back tracking
Backtracking is an algorithm design technique used to solve problems
in which a sequence of objects is chosen from a finite set so that the sequence
satisfies some criteria. This approach is used to solve problems that have multiple
candidate solutions. Backtracking can be used to solve well-known problems such as
the n-queens problem, sum of subsets, the graph colouring problem, the Hamiltonian
cycle problem, etc.
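The sum-of-subsets problem mentioned above can be sketched as follows (a hedged Python illustration; the function name and return convention are my own):

```python
def subset_sum(nums, target, i=0, chosen=None):
    """Choose or skip nums[i]; undo the choice (backtrack) when a partial
    sequence cannot possibly satisfy the criterion."""
    if chosen is None:
        chosen = []
    if target == 0:
        return list(chosen)             # criterion satisfied
    if i == len(nums) or target < 0:
        return None                     # dead end: abandon this branch
    chosen.append(nums[i])              # choose nums[i]
    found = subset_sum(nums, target - nums[i], i + 1, chosen)
    if found is not None:
        return found
    chosen.pop()                        # backtrack: undo the choice
    return subset_sum(nums, target, i + 1, chosen)  # skip nums[i]
```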

Branch and bound


Branch and bound is an algorithm design technique used for solving
optimization problems. Branch and bound is very similar to backtracking in that it
uses a state space tree to solve a given problem. The difference is that the
backtracking algorithm is used to find a set of feasible solutions, whereas branch and
bound is used to find an optimal solution, typically of a minimization problem,
pruning branches whose bound shows they cannot improve on the best solution found
so far.

Randomized algorithm
A randomized algorithm is one whose behaviour depends not only on the input
but also on random choices made as part of its logic. As a result, the algorithm may
give different outputs on different runs with the same input.
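A small concrete example (my own sketch) is the Fisher-Yates shuffle, whose output order is driven entirely by the random choices it makes:

```python
import random

def random_shuffle(items):
    """Fisher-Yates shuffle: every permutation is equally likely, so repeated
    runs on the same input generally produce different outputs."""
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        j = random.randint(0, i)   # random choice made as part of the logic
        a[i], a[j] = a[j], a[i]
    return a
```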

Define space complexity of an algorithm


The space complexity of an algorithm is the amount of memory space required
to solve an instance of the computational problem. It is expressed as a function of the
input size, commonly using Big O notation.
Define Time Complexity of an algorithm
The time complexity is the amount of time it takes to run an algorithm. Time
complexity is commonly estimated by counting the number of elementary operations
performed by the algorithm, supposing that each elementary operation takes a fixed
amount of time to perform.
What are the important problem types in computer science?
1. Sorting
2. Searching
3. String processing
4. Graph problems
5. Combinatorial problems
6. Geometric problems
7. Numerical problems

What is sorting problem?


The sorting problem is to rearrange the items of a given list in increasing or
decreasing order. For this problem, the nature of the list items must allow such an
ordering. We usually need to sort lists of numbers, characters, strings and records. In
the case of records, sorting is done on the basis of a key. A sorting algorithm is said
to be stable if it preserves the relative order of any two equal elements in its input. A
sorting algorithm is said to be in-place if it requires only a constant amount of extra
memory beyond the input list.
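Insertion sort is both stable and in-place, so it illustrates both terms; a Python sketch (my own, with an illustrative key function for records):

```python
def insertion_sort(records, key):
    """Sort a list of records in place using the given key. Equal keys keep
    their original relative order, so the sort is stable; only a constant
    amount of extra memory is used, so it is in-place."""
    for i in range(1, len(records)):
        item = records[i]
        j = i - 1
        # Shift strictly greater keys right; using '>' (not '>=') is what
        # preserves the relative order of equal keys (stability).
        while j >= 0 and key(records[j]) > key(item):
            records[j + 1] = records[j]
            j -= 1
        records[j + 1] = item
    return records
```

Note that the two records with key 2 below keep their original 'a'-before-'c' order after sorting.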
What is searching problem?
The searching problem deals with finding a given value, called the search key, in a
given set. There are plenty of searching algorithms, such as sequential search and
binary search. Some of these algorithms work faster but require more memory; some
are easy to implement but are very slow. Searching algorithms such as binary search
require a sorted list as input.
What is String Processing problem?
A string is a sequence of characters from an alphabet. String matching is a problem of
special interest in string processing problems. String matching means searching for a
given word in a text.
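The simplest string-matching method slides the word across the text one position at a time; a Python sketch (my own illustration, not from the notes):

```python
def find_word(text, word):
    """Compare the word against every alignment in the text; return the
    index of the first match, or -1 if the word does not occur."""
    n, m = len(text), len(word)
    for i in range(n - m + 1):
        if text[i:i + m] == word:
            return i
    return -1
```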
What is Graph problem?
A graph is a collection of points called vertices which are connected by line segments
called edges. Graphs can be used for modelling a wide variety of applications such as
transportation, communication, social and economic networks, project scheduling and
games. Some of the well-known graph problems are Traveling Salesman Problem
(TSP) and graph-colouring problem.
What are Combinatorial Problems?
These are problems that seek a combinatorial object, such as a
permutation or a combination, that satisfies certain constraints. Combinatorial
problems involve finding a grouping, ordering, or assignment of a discrete, finite set
of objects that satisfies given conditions. Combinatorial problems are among the most
difficult problems in computer science: even for a small combinatorial problem,
the number of combinatorial objects to consider may be very large.
What are Geometric problems?
These problems deal with geometric objects such as points, lines, polygons,
etc., and have applications in computer graphics. Two well-known geometric problems
are the closest-pair problem and the convex-hull problem. The closest-pair problem
asks, for n given points in the plane, to find the closest pair of points among them.
The convex-hull problem asks for the smallest convex polygon that includes all
the points of a given set.
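A brute-force sketch of the closest-pair problem (my own illustration; faster divide-and-conquer versions exist, this one simply measures every pair):

```python
import math

def closest_pair(points):
    """Measure the distance between every pair of the n given points
    and keep the smallest; returns (distance, point_a, point_b)."""
    best = (math.inf, None, None)
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = math.dist(points[i], points[j])
            if d < best[0]:
                best = (d, points[i], points[j])
    return best
```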
What are numerical problems?
Numerical problems deal with mathematical objects such as systems of
equations, computing definite integrals, evaluating functions, etc.
What is meant by ‘Analysis of Algorithms’?
Analysis of algorithms means an investigation of an algorithm’s efficiency with
respect to two resources namely running time and memory space.
Efficiency considerations of an algorithm
Efficiency of an algorithm is measured in terms of time efficiency and space
efficiency. Time efficiency also called time complexity indicates how fast the
algorithm runs. Space efficiency also called space complexity refers to the amount of
memory units required by the algorithm in addition to the space needed for its input
and output.
What is meant by asymptotic analysis?

An algorithm may not have the same performance for different inputs; as the input
size grows, the performance changes. The study of how the performance of an
algorithm changes with the order of growth of the input size is called asymptotic
analysis.

There are three different types of asymptotic analysis.


1. Worst-case analysis
2. Best -case analysis
3. Average-case analysis.


What is meant by worst-case analysis of an algorithm?


The worst-case efficiency of an algorithm is its efficiency for the worst-case input of
size n, the input for which the algorithm runs the longest. The worst-case efficiency of
an algorithm is calculated by counting the number of basic operations performed on
this input. We check which inputs of size n yield the largest value of the basic
operation count; this count, Cworst(n), denotes the worst-case efficiency of the
algorithm. The worst-case efficiency guarantees that for any instance of size n, the
running time will not exceed Cworst(n).
What is meant by best-case analysis of an algorithm?
The best-case efficiency of an algorithm is its efficiency for the best-case input of size
n, the input for which the algorithm runs the fastest. The best-case efficiency is
calculated by counting the number of basic operations performed on this input. We
check which inputs of size n yield the smallest value of the basic operation count; this
count, Cbest(n), denotes the best-case efficiency of the algorithm. The best-case
efficiency denotes the minimum time the algorithm would take to complete.
What is meant by average-case analysis of an algorithm?
The average-case efficiency of an algorithm denotes the time taken by the algorithm to
complete its task for a typical or random input. To calculate the average-case
efficiency, we consider all possible inputs of size n, calculate the computation time
for each, add up all the calculated values, and divide the sum by the total number of
inputs.
The average case analysis is not easy to do in most practical cases and is rarely done.
In the average case analysis, we need to predict the mathematical distribution of all
possible inputs.

What is asymptotic notation? What are the three asymptotic notations?

Asymptotic notation is used to describe the running time of an algorithm. It denotes
how much time an algorithm takes for a given input size n. There are three main
notations: big O, big Theta (Θ), and big Omega (Ω).

Big-O Notation (O-notation)


Big-O notation represents the upper bound of the running time of an algorithm. Thus,
it gives the worst-case complexity of an algorithm.

Omega Notation (Ω-notation)


Omega notation represents the lower bound of the running time of an algorithm. Thus,
it provides the best case complexity of an algorithm.

Theta Notation (Θ-notation)


Theta notation bounds the function from above and below. Since it represents both an
upper and a lower bound on the running time, it gives a tight bound and is often used
when analysing the average-case complexity of an algorithm.

Give examples to illustrate best-case, worst-case, and average-case scenarios of
an algorithm.
Consider the problem of searching for a target element in a list using linear search.
1. Best-case Scenario
This occurs when the algorithm performs the minimum number of operations
required. If the target element is the first item in the list, the algorithm needs only one
comparison to find it. Hence the time complexity in this case is O(1).
2. Worst-case Scenario
This happens when the algorithm performs the maximum number of operations
possible. If the target element is not in the list, or is the last element, the algorithm
has to check every item in the list before it can conclude. Hence the time complexity
in this case is O(n), where n is the number of elements in the list.
3. Average-case Scenario
This is the expected performance of the algorithm, averaged over all possible inputs.
On average, the algorithm looks through about half of the list before finding the
target, i.e. roughly n/2 comparisons, which is still O(n).
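The three scenarios above refer to linear search, which can be sketched in Python as:

```python
def linear_search(items, target):
    """Compare the target with each element in turn; return its index or -1."""
    for i, value in enumerate(items):
        if value == target:
            return i          # best case: target is items[0], one comparison
    return -1                 # worst case: target absent, n comparisons
```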
What is Iteration Algorithm?
In the case of iterative algorithms, a certain set of statements is repeated a certain
number of times. An iterative algorithm uses looping statements such as a for loop,
while loop or do-while loop to repeat the same steps a number of times.
What is recursion? Why is elimination of recursion favorable?
Recursion is a programming technique where a function calls itself to solve smaller
instances of the same problem until it reaches a base case, which stops the recursion.
While recursion can simplify solving problems like tree traversal or divide-and-
conquer algorithms, eliminating recursion is often favorable because it reduces
memory usage and prevents stack overflow by avoiding excessive use of the call
stack. Additionally, iterative solutions can improve performance by eliminating the
overhead of function calls and making the code easier to understand, debug, and
maintain. Iterative methods also provide better control over execution and are more
universally efficient, especially in environments without tail call optimization.
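The factorial function shows the two styles side by side (a Python sketch, my own illustration of the trade-off described above):

```python
def factorial_recursive(n):
    """Calls itself on a smaller instance until the base case n <= 1;
    each call adds a frame to the call stack."""
    if n <= 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    """Same result with a loop: no call-stack growth, so no risk of
    stack overflow and no function-call overhead per step."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result
```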
Binary Search Algorithm
Binary search is a search algorithm that finds the position of a key or target value
within a sorted array. Binary search compares the target value to the middle element
of the array; if they are unequal, the half in which the target cannot lie is eliminated
and the search continues on the remaining half until it succeeds or the array is
exhausted.
This search algorithm works on the principle of "divide and conquer": it repeatedly
divides the array into smaller sub-arrays and solves the problem recursively (or
iteratively). For the algorithm to work correctly, the data collection must be in
sorted form.
Iterative Binary Search
Key is the number to be searched for in the list of elements. Inside the while loop,
"mid" is obtained by calculating (low+high)/2.
If the number at position mid equals the target element, the control returns the index
mid, reporting that the number has been found along with the index at which it was
found.
Else, if the key is less than the number at position mid, the portion of the array from
mid upwards is removed from contention by making "high" equal to mid-1.
Else, the key element is greater than the number at position mid, so the portion of the
list from mid downwards is removed from contention by making "low" equal to
mid+1.
The while loop continues to iterate in this way until either the index is returned or low
becomes greater than high, in which case -1 is returned, indicating that the key could
not be found, and the same is printed as output.
The pseudocode is as follows:

int binarySearch(int[] A, int x)
{
    int low = 0, high = A.length - 1;
    while (low <= high)
    {
        int mid = (low + high) / 2;
        if (x == A[mid]) {
            return mid;
        }
        else if (x < A[mid]) {
            high = mid - 1;
        }
        else {
            low = mid + 1;
        }
    }
    return -1;
}

Recursive Binary Search


In the recursive implementation of the binary search algorithm, a recursive call is
made to the same method with the new values of low and high passed to the next
recursive invocation. The condition low > high is checked at the beginning of each
level of recursion and acts as the boundary condition that stops further recursive
calls by returning -1.
The pseudocode is as follows:
int binarySearch(int[] A, int low, int high, int x)
{
    if (low > high) {
        return -1;
    }
    int mid = (low + high) / 2;
    if (x == A[mid]) {
        return mid;
    }
    else if (x < A[mid]) {
        return binarySearch(A, low, mid - 1, x);
    }
    else {
        return binarySearch(A, mid + 1, high, x);
    }
}

****
