
DESIGN AND ANALYSIS OF ALGORITHM


What is an algorithm?
The word "algorithm" means "a set of finite rules or instructions to be followed in calculations or other problem-solving operations",
or
"a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations".

Uses of Algorithms:


Algorithms play a crucial role in various fields and have many applications. Some of the key areas
where algorithms are used include:
• Computer Science: Algorithms form the basis of computer programming and are used to
solve problems ranging from simple sorting and searching to complex tasks such as
artificial intelligence and machine learning.
• Mathematics: Algorithms are used to solve mathematical problems, such as finding the
optimal solution to a system of linear equations or finding the shortest path in a graph.
• Operations Research: Algorithms are used to optimize and make decisions in fields such
as transportation, logistics, and resource allocation.
• Artificial Intelligence: Algorithms are the foundation of artificial intelligence and
machine learning, and are used to develop intelligent systems that can perform tasks such
as image recognition, natural language processing, and decision-making.
• Data Science: Algorithms are used to analyze, process, and extract insights from large
amounts of data in fields such as marketing, finance, and healthcare.
What are the Characteristics of an Algorithm?
• Clear and Unambiguous: The algorithm should be unambiguous. Each of its steps should be clear in all aspects and must lead to only one meaning.
• Well-Defined Inputs: If an algorithm takes inputs, those inputs should be well defined. An algorithm may or may not take input.

• Well-Defined Outputs: The algorithm must clearly define what output will be yielded and
it should be well-defined as well. It should produce at least 1 output.
• Finiteness: The algorithm must be finite, i.e. it should terminate after a finite time.
• Feasible: The algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not depend on future technology or unavailable resources.
• Language Independent: The algorithm designed must be language-independent, i.e. it must consist of plain instructions that can be implemented in any programming language, and yet the output will be the same, as expected.
• Input: An algorithm has zero or more inputs. Each instruction that contains a fundamental operator may accept zero or more inputs.
• Output: An algorithm produces at least one output. Every instruction that contains a fundamental operator must produce at least one output.
• Definiteness: All instructions in an algorithm must be unambiguous, precise, and easy to
interpret. By referring to any of the instructions in an algorithm one can clearly understand
what is to be done. Every fundamental operator in instruction must be defined without any
ambiguity.
• Finiteness: An algorithm must terminate after a finite number of steps in all test cases. Every instruction which contains a fundamental operator must terminate within a finite amount of time. Infinite loops or recursive functions without base conditions do not possess finiteness.
• Effectiveness: An algorithm must be developed by using very basic, simple, and feasible
operations so that one can trace it out by using just paper and pencil.
Properties of Algorithm:
• It should terminate after a finite time.
• It should produce at least one output.
• It should take zero or more input.

• It should be deterministic, i.e. it should give the same output for the same input.
• Every step in the algorithm must be effective, i.e. every step should do some work.
Types of Algorithms:
There are several types of algorithms available. Some important algorithms are:
1. Brute Force Algorithm:
It is the simplest approach to a problem. A brute force algorithm is the first approach that comes to mind when we see a problem.
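For illustration, a minimal C sketch of a brute force primality test simply tries every possible divisor without any shortcuts:

#include <stdio.h>

/* Brute force primality test: try every possible divisor
   from 2 to n-1 without any shortcuts. */
int is_prime(int n)
{
    if (n < 2)
        return 0;
    for (int d = 2; d < n; d++)
        if (n % d == 0)        /* found a divisor, so n is not prime */
            return 0;
    return 1;
}

int main(void)
{
    printf("31 is %s\n", is_prime(31) ? "prime" : "not prime");
    return 0;
}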
2. Recursive Algorithm:
A recursive algorithm is based on recursion. In this case, a problem is broken into several sub-parts and the same function is called again and again on them.
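For instance, a minimal C sketch of a recursive algorithm (a hypothetical factorial function) keeps calling itself on a smaller sub-problem until the base case is reached:

#include <stdio.h>

/* Recursive factorial: the problem n! is reduced to the
   smaller sub-problem (n-1)! until the base case is reached. */
unsigned long factorial(unsigned int n)
{
    if (n == 0)                       /* base case stops the recursion */
        return 1;
    return n * factorial(n - 1);      /* the function calls itself */
}

int main(void)
{
    printf("5! = %lu\n", factorial(5));   /* prints 120 */
    return 0;
}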
3. Backtracking Algorithm:
The backtracking algorithm builds the solution by searching among all possible solutions. Using this algorithm, we keep building the solution according to the given criteria. Whenever a candidate solution fails, we trace back to the failure point, build the next candidate, and continue this process until we find a solution or all possible solutions have been examined.
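As a rough illustration, the following C sketch (a hypothetical subset-sum example, not taken from these notes) builds a subset element by element and backtracks whenever the partial solution cannot lead to the required sum:

#include <stdio.h>

/* Backtracking: find a subset of a[] whose elements sum to target.
   chosen[] records the current partial solution; whenever extending
   it cannot work, we undo the choice (backtrack) and try the next one. */
int subset_sum(const int a[], int n, int i, int remaining,
               int chosen[], int count)
{
    if (remaining == 0) {                  /* solution found */
        for (int k = 0; k < count; k++)
            printf("%d ", chosen[k]);
        printf("\n");
        return 1;
    }
    if (i == n || remaining < 0)           /* dead end: backtrack */
        return 0;

    chosen[count] = a[i];                  /* try including a[i] */
    if (subset_sum(a, n, i + 1, remaining - a[i], chosen, count + 1))
        return 1;
    /* including a[i] failed: backtrack and try excluding it */
    return subset_sum(a, n, i + 1, remaining, chosen, count);
}

int main(void)
{
    int a[] = {3, 34, 4, 12, 5, 2};
    int chosen[6];
    if (!subset_sum(a, 6, 0, 9, chosen, 0))   /* prints 3 4 2 */
        printf("no subset found\n");
    return 0;
}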
4. Searching Algorithm:
Searching algorithms are the ones that are used for searching elements or groups of elements from
a particular data structure. They can be of different types based on their approach or the data
structure in which the element should be found.
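As an illustration, a minimal C sketch of binary search (assuming the array is already sorted in increasing order; the function name binary_search is not from these notes) is:

#include <stdio.h>

/* Binary search on a sorted array: repeatedly halve the search
   interval until the key is found or the interval becomes empty.
   Returns the index of key, or -1 if it is not present. */
int binary_search(const int a[], int n, int key)
{
    int low = 0, high = n - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2;
        if (a[mid] == key)
            return mid;
        else if (a[mid] < key)
            low = mid + 1;       /* key lies in the right half */
        else
            high = mid - 1;      /* key lies in the left half */
    }
    return -1;                   /* key not found */
}

int main(void)
{
    int a[] = {3, 9, 14, 21, 35, 48};
    printf("index of 21 = %d\n", binary_search(a, 6, 21));   /* prints 3 */
    return 0;
}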
5. Sorting Algorithm:
Sorting is arranging a group of data in a particular manner according to the requirement. The
algorithms which help in performing this function are called sorting algorithms. Generally sorting
algorithms are used to sort groups of data in an increasing or decreasing manner.
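A minimal C sketch of one such algorithm, selection sort (chosen here purely as an illustration), arranges an array in increasing order:

#include <stdio.h>

/* Selection sort: on each pass, find the smallest remaining
   element and swap it into its final position. */
void selection_sort(int a[], int n)
{
    for (int i = 0; i < n - 1; i++) {
        int min = i;
        for (int j = i + 1; j < n; j++)
            if (a[j] < a[min])
                min = j;
        int tmp = a[i]; a[i] = a[min]; a[min] = tmp;   /* swap */
    }
}

int main(void)
{
    int a[] = {29, 10, 14, 37, 13};
    selection_sort(a, 5);
    for (int i = 0; i < 5; i++)
        printf("%d ", a[i]);      /* prints 10 13 14 29 37 */
    printf("\n");
    return 0;
}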
6. Hashing Algorithm:
Hashing algorithms work similarly to searching algorithms, but they use an index with a key ID: in hashing, a key is assigned to specific data.
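A very small C sketch (a hypothetical table that ignores collisions, kept deliberately minimal) shows the idea of mapping a key to an index with a hash function:

#include <stdio.h>

#define TABLE_SIZE 10

/* A tiny hash table sketch: the key is mapped to an index by the
   hash function key % TABLE_SIZE, and the data is stored there.
   Collisions are ignored here to keep the illustration minimal. */
int table[TABLE_SIZE];

int hash(int key)
{
    return key % TABLE_SIZE;
}

int main(void)
{
    int key = 42, data = 500;
    table[hash(key)] = data;                                  /* insert */
    printf("data for key %d: %d\n", key, table[hash(key)]);   /* lookup */
    return 0;
}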
7. Divide and Conquer Algorithm:
This algorithm breaks a problem into sub-problems, solves each sub-problem, and merges their solutions to get the final solution. It consists of the following three steps:
Divide
Solve
Combine
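A small C sketch (a hypothetical example that finds the largest element of an array) shows the three steps: the array is divided into two halves, each half is solved recursively, and the two partial answers are combined:

#include <stdio.h>

/* Divide and conquer: largest element in a[lo..hi]. */
int find_max(const int a[], int lo, int hi)
{
    if (lo == hi)                              /* base case: one element */
        return a[lo];
    int mid = (lo + hi) / 2;                   /* divide */
    int left  = find_max(a, lo, mid);          /* solve left half */
    int right = find_max(a, mid + 1, hi);      /* solve right half */
    return (left > right) ? left : right;      /* combine */
}

int main(void)
{
    int a[] = {8, 42, 17, 5, 33};
    printf("max = %d\n", find_max(a, 0, 4));   /* prints 42 */
    return 0;
}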

8. Greedy Algorithm:
In this type of algorithm, the solution is built part by part. The choice for the next part is made based on its immediate benefit: the option that gives the most benefit is chosen as the solution for that part.
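As an illustration, a minimal C sketch of greedy coin change (assuming the denominations 10, 5, 2 and 1, for which the greedy choice happens to give an optimal answer) builds the solution coin by coin, always taking the largest coin that still fits:

#include <stdio.h>

/* Greedy coin change: always pick the largest denomination
   that does not exceed the remaining amount. */
void make_change(int amount)
{
    int coins[] = {10, 5, 2, 1};
    for (int i = 0; i < 4; i++) {
        while (amount >= coins[i]) {
            printf("%d ", coins[i]);   /* greedy choice */
            amount -= coins[i];
        }
    }
    printf("\n");
}

int main(void)
{
    make_change(28);    /* prints 10 10 5 2 1 */
    return 0;
}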
9. Dynamic Programming Algorithm: This algorithm reuses already-found solutions to avoid repeatedly computing the same part of the problem. It divides the problem into smaller overlapping subproblems and solves them.
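For example, a minimal C sketch of computing Fibonacci numbers with memoization stores each sub-problem's answer so that no overlapping sub-problem is computed twice:

#include <stdio.h>

#define MAXN 50
long long memo[MAXN];   /* memo[i] == 0 means "not computed yet" */

/* Dynamic programming (top-down): reuse the already-found solutions
   of the overlapping sub-problems fib(n-1) and fib(n-2). */
long long fib(int n)
{
    if (n <= 1)
        return n;
    if (memo[n] != 0)            /* answer already known */
        return memo[n];
    memo[n] = fib(n - 1) + fib(n - 2);
    return memo[n];
}

int main(void)
{
    printf("fib(40) = %lld\n", fib(40));   /* prints 102334155 */
    return 0;
}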
10. Randomized Algorithm:
In a randomized algorithm, we use random numbers as part of the computation. The random numbers help in deciding the expected outcome.
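A small C sketch of a randomized (Monte Carlo) computation, used here purely as an illustration, estimates the value of pi: random points are generated inside a unit square, and the fraction that falls inside the quarter circle decides the expected outcome:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Randomized (Monte Carlo) estimate of pi:
   the random numbers decide the expected outcome. */
int main(void)
{
    long inside = 0, trials = 1000000;
    srand((unsigned) time(NULL));            /* seed the generator */
    for (long i = 0; i < trials; i++) {
        double x = rand() / (double) RAND_MAX;
        double y = rand() / (double) RAND_MAX;
        if (x * x + y * y <= 1.0)            /* point inside quarter circle */
            inside++;
    }
    printf("pi is approximately %f\n", 4.0 * inside / trials);
    return 0;
}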
Advantages of Algorithms:
• It is easy to understand.
• An algorithm is a step-wise representation of a solution to a given problem.
• In an Algorithm the problem is broken down into smaller pieces or steps hence, it is
easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
• Writing an algorithm for a complex problem takes a long time, so it is time-consuming.
• Understanding complex logic through algorithms can be very difficult.
• Branching and looping statements are difficult to show in algorithms.
Fundamentals of algorithmic problem solving

(i) Understanding the Problem


• This is the first step in designing an algorithm.
• Read the problem's description carefully to understand the problem statement completely, and ask questions to clarify any doubts about the problem.
• Identify the problem type and use an existing algorithm, if one is available, to find a solution.
• The input (an instance of the problem) and the range of the input are fixed.

(ii) Decision making


Decision making is done on the following:
a) Ascertaining the Capabilities of the Computational Device:
• In a random-access machine (RAM), instructions are executed one after another (the central assumption is that one operation is executed at a time). Accordingly, algorithms designed to be executed on such machines are called sequential algorithms. In some newer computers, operations are executed concurrently, i.e., in parallel; algorithms that take advantage of this capability are called parallel algorithms.
• The choice of computational device (processor and memory) is mainly based on space and time efficiency.
b) Choosing between Exact and Approximate Problem Solving:
• The next principal decision is to choose between solving the problem exactly or solving it approximately.
• An algorithm used to solve the problem exactly and produce a correct result is called an exact algorithm.
• If the problem is so complex that an exact solution cannot be obtained, then we have to choose an algorithm called an approximation algorithm, i.e., one that produces an approximate answer. E.g., extracting square roots, solving nonlinear equations, and evaluating definite integrals.

c) Algorithm Design Techniques


• An algorithm design technique (or “strategy” or “paradigm”) is a general approach to solving
problems algorithmically that is applicable to a variety of problems from different areas of
computing.
Algorithms + Data Structures = Programs
• Though algorithms and data structures are independent, they are combined together to develop a program. Hence the choice of a proper data structure is required before designing the algorithm.
• Implementation of an algorithm as a program is possible only with the help of both algorithms and data structures.
• An algorithmic strategy / technique / paradigm is a general approach by which many problems can be solved algorithmically. E.g., Brute Force, Divide and Conquer, Dynamic Programming, Greedy Technique, and so on.
(iii) Methods of Specifying an Algorithm
There are three ways to specify an algorithm. They are:
a. Natural language
b. Pseudocode
c. Flowchart
Pseudocode and flowchart are the two options that are most widely used nowadays for specifying
algorithms.
Natural Language:
• It is very simple and easy to specify an algorithm using natural language. But many times the specification of an algorithm using natural language is not clear, and thereby we get only a vague specification.
Example: An algorithm to perform addition of two numbers.
Step 1: Read the first number, say a.
Step 2: Read the second number, say b.
Step 3: Add the above two numbers and store the result in c.
Step 4: Display the result from c.
Such a specification creates difficulty while actually implementing it. Hence many programmers
prefer to have specification of algorithm by means of Pseudocode.
Pseudocode:
• Pseudocode is a mixture of a natural language and programming language constructs.
Pseudocode is usually more precise than natural language.

• The left arrow "←" is used for the assignment operation, two slashes "//" for comments, and constructs such as if, for, and while for conditions and loops.
ALGORITHM Sum (a, b)
//Problem Description: This algorithm performs addition of two numbers
//Input: Two integers a and b
//Output: Addition of two integers
c←a+b
return c
This specification is more useful for implementation in any programming language.
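For instance, the pseudocode above can be translated almost line by line into C (a minimal sketch):

#include <stdio.h>

/* C implementation of ALGORITHM Sum(a, b): returns the sum of a and b. */
int sum(int a, int b)
{
    int c = a + b;     /* corresponds to c <- a + b */
    return c;
}

int main(void)
{
    printf("Sum = %d\n", sum(10, 20));   /* prints 30 */
    return 0;
}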
Flowchart
• In the earlier days of computing, the dominant method for specifying algorithms was the flowchart; however, this representation technique has proved to be inconvenient.
• A flowchart is a graphical representation of an algorithm. It is a method of expressing an algorithm by a collection of connected geometric shapes containing descriptions of the algorithm's steps.
(iv) Proving an Algorithm’s Correctness
• Once an algorithm has been specified then its correctness must be proved.
• An algorithm must yield a required result for every legitimate input in a finite amount of time.
• For example, the correctness of Euclid's algorithm for computing the greatest common divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n); a C sketch of this algorithm is given after this list.
• A common technique for proving correctness is to use mathematical induction because an
algorithm’s iterations provide a natural sequence of steps needed for such proofs.
• The notion of correctness for approximation algorithms is less straightforward than it is for
exact algorithms. The error produced by the algorithm should not exceed a predefined limit.
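As an illustration, here is a minimal C sketch of Euclid's algorithm; it repeatedly applies the equality gcd(m, n) = gcd(n, m mod n) until the second argument becomes zero:

#include <stdio.h>

/* Euclid's algorithm: gcd(m, n) = gcd(n, m mod n),
   and gcd(m, 0) = m terminates the process. */
int gcd(int m, int n)
{
    while (n != 0) {
        int r = m % n;
        m = n;
        n = r;
    }
    return m;
}

int main(void)
{
    printf("gcd(60, 24) = %d\n", gcd(60, 24));   /* prints 12 */
    return 0;
}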
(v) Analyzing an Algorithm
For an algorithm, the most important property is efficiency. In fact, there are two kinds of algorithm efficiency.
They are:
• Time efficiency, indicating how fast the algorithm runs, and
• Space efficiency, indicating how much extra memory it uses.
The efficiency of an algorithm is determined by measuring both time efficiency and space efficiency. So the factors to analyze in an algorithm are:
1. Time efficiency of an algorithm
2. Space efficiency of an algorithm

3. Simplicity of an algorithm
4. Generality of an algorithm
(vi) Coding an Algorithm
• The coding / implementation of an algorithm is done in a suitable programming language like C, C++, or Java.
• The transition from an algorithm to a program can be done either incorrectly or very inefficiently.
• Implementing an algorithm correctly is necessary. The algorithm's power should not be reduced by an inefficient implementation.
• Standard tricks like computing a loop's invariant (an expression that does not change its value) outside the loop, collecting common subexpressions, replacing expensive operations by cheap ones, selection of programming language, and so on should be known to the programmer.
• It is essential to write optimized (efficient) code to reduce the burden on the compiler.
Fundamentals of the analysis of algorithm efficiency
Analysis Framework
Time Complexity Analysis:
• Analyze how the running time of the algorithm scales with the input size "n."
• Identify and count the basic operations (e.g., comparisons, assignments) executed by the
algorithm.
• Use Big O notation to express the worst-case time complexity as a function of "n."
• Indicates how fast an algorithm runs.

Space Complexity Analysis:


• Analyze how the algorithm's memory usage (space) scales with the input size "n."
• Consider data structures, auxiliary variables, and recursion stack space.
• Use Big O notation to express the worst-case space complexity as a function of "n."
• Deals with the extra space the algorithm requires.

Measuring the input size


• An algorithm's efficiency is investigated as a function of some parameter n indicating the algorithm's input size.

• In most cases, selecting such a parameter is quite straightforward.


• For example, it will be the size of the list for problems of sorting, searching, finding the list's
smallest element, and most other problems dealing with lists.
• For the problem of evaluating a polynomial p(x) = a_n x^n + . . . + a_0 of degree n, it will be the polynomial's degree or the number of its coefficients, which is larger by one than its degree.

There are situations, of course, where the choice of a parameter indicating an input size does matter.
Example - computing the product of two n-by-n matrices.

There are two natural measures of size for this problem:

• the matrix order n;
• the total number of elements N in the matrices being multiplied.

Since there is a simple formula relating these two measures, we can easily switch from one to the
other, but the answer about an algorithm's efficiency will be qualitatively different depending on
which of the two measures we use.
The choice of an appropriate size metric can be influenced by operations of the algorithm in question.
For example, how should we measure an input's size for a spell-checking algorithm? If the algorithm
examines individual characters of its input, then we should measure the size by the number of
characters; if it works by processing words, we should count their number in the input.

We should make a special note about measuring size of inputs for algorithms involving properties of
numbers (e.g., checking whether a given integer n is prime).

For such algorithms, computer scientists prefer measuring size by the number b of bits in n's binary representation:
b = ⌊log2 n⌋ + 1

This metric usually gives a better idea about efficiency of algorithms in question.
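For illustration, a small C sketch (a hypothetical helper named bit_count, which computes b by repeated halving rather than by calling a logarithm function) is:

#include <stdio.h>

/* Number of bits b in the binary representation of n (n >= 1):
   b = floor(log2 n) + 1, computed here by repeated halving. */
int bit_count(unsigned long n)
{
    int b = 0;
    while (n > 0) {
        n /= 2;      /* drop the last binary digit */
        b++;
    }
    return b;
}

int main(void)
{
    printf("bits in 100 = %d\n", bit_count(100));   /* prints 7 */
    return 0;
}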
The analysis framework helps algorithm designers and computer scientists make informed decisions
about algorithm selection, optimization, and suitability for different applications. It provides a

structured approach to understanding and evaluating the efficiency of algorithms, which is essential
for solving complex computational problems effectively.

UNITS FOR MEASURING RUN TIME:


We can simply use some standard unit of time measurement (a second, a millisecond, and so on) to measure the running time of a program implementing the algorithm.
There are obvious drawbacks to such an approach. They are:
• Dependence on the speed of a particular computer
• Dependence on the quality of a program implementing the algorithm
• The compiler used in generating the machine code
• The difficulty of clocking the actual running time of the program.
Since we need to measure algorithm efficiency, we should have a metric that does not depend on these extraneous factors.
One possible approach is to count the number of times each of the algorithm's operations is executed. This approach is both difficult and unnecessary.
The main objective is to identify the most important operation of the algorithm, called the basic operation (the operation contributing the most to the total running time), and to compute the number of times the basic operation is executed.
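For example, in the following C sketch of finding the largest element of an array (a standard illustration, not taken from the notes above), the basic operation is the comparison a[i] > max, and it is executed exactly n - 1 times:

#include <stdio.h>

/* The basic operation here is the comparison a[i] > max;
   it is executed n - 1 times for an array of size n. */
int max_element(const int a[], int n, long *comparisons)
{
    int max = a[0];
    *comparisons = 0;
    for (int i = 1; i < n; i++) {
        (*comparisons)++;          /* count one basic operation */
        if (a[i] > max)
            max = a[i];
    }
    return max;
}

int main(void)
{
    int a[] = {4, 9, 2, 7, 6};
    long count;
    int m = max_element(a, 5, &count);
    printf("max = %d, comparisons = %ld\n", m, count);   /* 9, 4 */
    return 0;
}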

Orders of growth
• In the design and analysis of algorithms, the concept of "order of growth" refers to the
classification of an algorithm's time complexity or space complexity in terms of its input size.
• It helps us understand how the performance of an algorithm scales as the size of the input
data increases.
• The most common notations used to describe the order of growth are Big O notation, Omega
notation, and Theta notation. Here's a brief explanation of each:

Big O Notation (O-notation):


• Big O notation provides an upper bound on the growth rate of an algorithm's resource usage
(usually time or space). It represents the worst-case scenario.
• O(f(n)) describes an upper bound on the time or space complexity of an algorithm in terms
of a mathematical function f(n), where n is the size of the input.

• For example, if an algorithm has a time complexity of O(n^2), it means that its running time
grows quadratically with the input size.
Omega Notation (Ω-notation):
• Omega notation provides a lower bound on the growth rate of an algorithm's resource usage.
It represents the best-case scenario.
• Ω(f(n)) describes a lower bound on the time or space complexity of an algorithm in terms of
a mathematical function f(n).
• For example, if an algorithm has a time complexity of Ω(n), it means that its running time
grows linearly with the input size in the best-case scenario.
Theta Notation (Θ-notation):
• Theta notation provides both upper and lower bounds on the growth rate of an algorithm's
resource usage, indicating a tight bound on its complexity.
• Θ(f(n)) describes an algorithm whose time or space complexity matches the function f(n)
asymptotically, meaning it neither grows faster nor slower.
• For example, if an algorithm has a time complexity of Θ(n), it means that its running time is
linear with respect to the input size.
These notations are crucial for comparing and analyzing different algorithms and determining their
efficiency in solving specific problems. The choice of algorithm depends on the problem
requirements and the size of the input data, with the goal of selecting an algorithm with the most
favourable order of growth for the given problem.

BEST,WORST,AVERAGE CASE ANALYSIS

Best, worst, and average case analyses are methods used to analyze the time complexity of
algorithms. They help us understand how an algorithm's performance varies under different input
scenarios.
Best Case Analysis:
• The best-case time complexity represents the minimum amount of time an algorithm will take
for a given input.
• It assumes that the input is in the most favorable condition for the algorithm.
• In best-case analysis, you are essentially trying to find the lower bound on the algorithm's
time complexity.

• Best-case time complexity is not always a realistic measure because it often assumes ideal
conditions, and real-world inputs may not always match these conditions.
• Example: In sequential (linear) search, the best case occurs when the key matches the very first element of the list, so only one comparison is made and the running time is O(1).
Worst Case Analysis:
• The worst-case time complexity represents the maximum amount of time an algorithm will
take for any given input.
• It assumes that the input is in the most unfavourable condition for the algorithm.
• Worst-case analysis is generally considered a more practical measure since it helps ensure
that an algorithm won't perform poorly under any input conditions.
• The worst-case time complexity is often used in critical applications where predictable
performance is crucial.
• Example: In sequential (linear) search, the worst case occurs when the key is in the last position or is not present at all, so all n elements must be compared, giving O(n).

Average Case Analysis:


• The average-case time complexity represents the expected time an algorithm will take for a
random distribution of inputs.
• It considers all possible inputs and their probabilities.
• Average-case analysis is often a more accurate reflection of real-world performance, but it
can be more challenging to analyze because it requires knowledge of the probability
distribution of inputs.
• To perform average-case analysis, you might use techniques from probability theory or
conduct empirical testing with various input data.
• Example: In sequential (linear) search, if the key is equally likely to be found at any position, about n/2 comparisons are made on average, which is still O(n).
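These three cases can be seen concretely in a minimal C sketch of sequential (linear) search; the comments indicate which inputs produce each case:

#include <stdio.h>

/* Sequential search for key in a[0..n-1]; returns its index or -1.
   Best case:    key is at a[0]             -> 1 comparison,  O(1)
   Worst case:   key is at a[n-1] or absent -> n comparisons, O(n)
   Average case: key equally likely at any position -> about n/2
                 comparisons, which is still O(n). */
int sequential_search(const int a[], int n, int key)
{
    for (int i = 0; i < n; i++)
        if (a[i] == key)
            return i;
    return -1;
}

int main(void)
{
    int a[] = {7, 3, 9, 1, 5};
    printf("best case index  : %d\n", sequential_search(a, 5, 7));   /* 0  */
    printf("worst case index : %d\n", sequential_search(a, 5, 8));   /* -1 */
    return 0;
}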

Time and Space Complexity (Efficiency):


Space complexity:
• Space complexity is a combination of auxiliary space and input space, where auxiliary space is the extra space or buffer space used by an algorithm during execution. Space complexity is essentially about memory.
• Below are a few points on how memory is used during execution.
• Instruction space: the compiled version of the instructions is stored in memory; this memory is called instruction space.

• Environmental stack: We use environmental stacks in cases where a function is called inside another function. For example, if a function cat() is called inside the function animals(), then the variables of animals() are stored temporarily on a system stack while cat() is being executed.
• Data space: The variables and constants that we use in the algorithm also require some space, which is referred to as data space. In most cases we neglect the environmental stack and the instruction space, whereas the data space is always considered.
How to calculate space complexity?
To calculate the space complexity, we need to have an idea of how much memory each data type occupies. This value varies from one system to another; however, the method used to calculate the space complexity remains the same. Let us look at a few examples to understand how to calculate the space complexity.
Example 1:
int add(int a, int b, int c)
{
    int sum;
    sum = a + b + c;
    return (sum);
}
Here, in this example, we have 4 variables: a, b, c, and sum. All of them are of type int, so each requires 4 bytes of memory, and 4 more bytes are needed for the return value. The total memory requirement is therefore ((4 * 4) + 4) => 20 bytes. Since this requirement does not change with the size of the input, the space complexity is constant.
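A second, hypothetical example, counted in the same simple way as Example 1 (an int occupies 4 bytes, and the array passed to the function is counted as part of the algorithm's space), shows space that grows with the input size n:

#include <stdio.h>

/* Sum of the first n array elements.
   Counting as in Example 1: n*4 bytes for the array a, plus 4 bytes
   each for n, i, sum and the return value => 4n + 16 bytes,
   so the space grows linearly with n. */
int array_sum(const int a[], int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum = sum + a[i];
    return sum;
}

int main(void)
{
    int a[] = {1, 2, 3, 4};
    printf("sum = %d\n", array_sum(a, 4));   /* prints 10 */
    return 0;
}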
Time complexity:
Time complexity is most commonly evaluated by considering the number of elementary steps required to complete the execution of an algorithm. We also often make use of Big O notation, an asymptotic notation, when representing the time complexity of an algorithm.

Quadratic time complexity:


Example:
for (int i = 0; i < N; i++)
{
    for (int j = 0; j < N; j++)
    {
        statement;
    }
}
Here, in this example, we have a loop inside another loop. In such a case the inner statement is executed N * N times, so the running time is proportional to N squared. Hence the complexity is of the quadratic type, i.e. O(N^2).
