Bca Daa 01
• Well-Defined Outputs: The algorithm must clearly define what output will be produced, and
that output must itself be well-defined. It should produce at least one output.
• Finiteness: The algorithm must be finite, i.e. it should terminate after a finite time.
• Feasible: The algorithm must be simple, generic, and practical, such that it can be executed
with the available resources. It must not depend on technology that does not yet exist.
• Language Independent: The algorithm designed must be language-independent, i.e. it
must consist of plain instructions that can be implemented in any language, and yet the output
will be the same, as expected.
• Input: An algorithm has zero or more inputs. Every instruction that contains a fundamental
operator must accept zero or more inputs.
• Output: An algorithm produces at least one output. Every instruction that contains a
fundamental operator must produce at least one output.
• Definiteness: All instructions in an algorithm must be unambiguous, precise, and easy to
interpret. By referring to any of the instructions in an algorithm one can clearly understand
what is to be done. Every fundamental operator in instruction must be defined without any
ambiguity.
• Finiteness: An algorithm must terminate after a finite number of steps in all cases.
Every instruction that contains a fundamental operator must complete within a finite
amount of time. Infinite loops, or recursive functions without base conditions, violate
finiteness.
• Effectiveness: An algorithm must be developed by using very basic, simple, and feasible
operations so that one can trace it out by using just paper and pencil.
Properties of Algorithm:
• It should terminate after a finite time.
• It should produce at least one output.
• It should take zero or more input.
• It should be deterministic, meaning it gives the same output for the same input.
• Every step in the algorithm must be effective, i.e. every step should do some work.
Types of Algorithms:
There are several types of algorithms available. Some important algorithms are:
1. Brute Force Algorithm:
It is the simplest approach to a problem. A brute force algorithm is the first approach that comes
to mind when we see a problem.
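As an illustrative sketch (the function name and problem are my own choice, not part of the notes), a brute-force algorithm might check every pair of numbers in a list for one that adds up to a target:

```python
def has_pair_with_sum(nums, target):
    """Brute force: try every pair of positions, making O(n^2) comparisons."""
    for i in range(len(nums)):
        for j in range(i + 1, len(nums)):
            if nums[i] + nums[j] == target:
                return True
    return False
```

The approach is simple and obviously correct, but it becomes slow as the input grows, which is the usual trade-off with brute force.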
2. Recursive Algorithm:
A recursive algorithm is based on recursion. In this case, a problem is broken into several sub-parts,
and the same function is called again and again on those smaller parts.
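A minimal example of this idea is the factorial function (chosen here for illustration):

```python
def factorial(n):
    # Base condition stops the recursion (required for finiteness).
    if n <= 1:
        return 1
    # The problem is reduced to a smaller sub-problem of the same form.
    return n * factorial(n - 1)
```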
3. Backtracking Algorithm:
The backtracking algorithm builds a solution by searching among all possible solutions. Using
this algorithm, we keep building the solution piece by piece according to the given criteria. Whenever
a partial solution fails, we trace back to the failure point, build the next candidate, and continue
this process until we find a solution or all possible solutions have been explored.
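To sketch the pattern (problem and names are illustrative), here is a backtracking search for a subset of numbers that sums to a target: the partial solution is extended, and undone whenever it cannot lead to a full solution:

```python
def subset_sum(nums, target):
    """Find one subset of nums summing to target, or None."""
    solution = []

    def backtrack(start, remaining):
        if remaining == 0:
            return True                      # a full solution has been built
        for i in range(start, len(nums)):
            if nums[i] <= remaining:
                solution.append(nums[i])     # extend the partial solution
                if backtrack(i + 1, remaining - nums[i]):
                    return True
                solution.pop()               # failure: trace back, try the next option
        return False

    return solution if backtrack(0, target) else None
```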
4. Searching Algorithm:
Searching algorithms are the ones that are used for searching elements or groups of elements from
a particular data structure. They can be of different types based on their approach or the data
structure in which the element should be found.
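The two classic approaches mentioned above can be sketched as follows (function names are my own): linear search examines elements one by one, while binary search requires sorted data and halves the search range at each step:

```python
def linear_search(items, key):
    """Examine elements one by one; works on any sequence."""
    for i, v in enumerate(items):
        if v == key:
            return i
    return -1

def binary_search(sorted_items, key):
    """Requires a sorted sequence; halves the range each step."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == key:
            return mid
        if sorted_items[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```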
5. Sorting Algorithm:
Sorting is arranging a group of data in a particular order according to the requirement. The
algorithms that help perform this function are called sorting algorithms. Generally, sorting
algorithms are used to sort groups of data in increasing or decreasing order.
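As one concrete sorting algorithm (chosen for illustration), insertion sort arranges a list in increasing order by inserting each element into its correct place among the already-sorted prefix:

```python
def insertion_sort(data):
    """Sort a list in increasing order, in place."""
    for i in range(1, len(data)):
        key = data[i]
        j = i - 1
        # Shift larger elements one position right to make room for key.
        while j >= 0 and data[j] > key:
            data[j + 1] = data[j]
            j -= 1
        data[j + 1] = key
    return data
```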
6. Hashing Algorithm:
Hashing algorithms work similarly to searching algorithms, but they use an index computed from a
key. In hashing, a key is assigned to specific data.
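A minimal sketch of the idea (class and method names are illustrative): each key is mapped to a bucket index by a hash function, and collisions are resolved by chaining within the bucket:

```python
class HashTable:
    """A tiny hash table with collision resolution by chaining."""
    def __init__(self, size=8):
        self.buckets = [[] for _ in range(size)]

    def _index(self, key):
        # The hash function maps a key to a bucket position.
        return hash(key) % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for pair in bucket:
            if pair[0] == key:
                pair[1] = value          # update an existing key
                return
        bucket.append([key, value])

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None
```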
7. Divide and Conquer Algorithm:
This algorithm breaks a problem into sub-problems, solves each sub-problem, and merges the
solutions to get the final solution. It consists of the following three steps:
• Divide
• Solve
• Combine
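Merge sort is a standard example of these three steps (shown here as an illustrative sketch):

```python
def merge_sort(data):
    if len(data) <= 1:
        return data                      # trivially solved sub-problem
    mid = len(data) // 2
    left = merge_sort(data[:mid])        # Divide and Solve each half
    right = merge_sort(data[mid:])
    # Combine: merge the two sorted halves into one sorted list.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```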
8. Greedy Algorithm:
In this type of algorithm, the solution is built part by part. The solution for the next part is built
based on the immediate benefit of the next part. The one solution that gives the most benefit will
be chosen as the solution for the next part.
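Change-making illustrates the greedy choice (the example and denominations are my own): at each step we pick the largest coin that still fits, i.e. the option with the most immediate benefit. Note that greedy change-making is optimal only for canonical coin systems such as this one; for arbitrary denominations the greedy choice can fail.

```python
def greedy_change(amount, coins=(25, 10, 5, 1)):
    """Build the change part by part, always taking the largest coin that fits."""
    result = []
    for coin in coins:            # coins must be in decreasing order
        while amount >= coin:
            result.append(coin)
            amount -= coin
    return result
```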
9. Dynamic Programming Algorithm:
This algorithm reuses already computed solutions to avoid repeated calculation of the same part
of the problem. It divides the problem into smaller overlapping subproblems and solves each of
them once.
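The Fibonacci numbers are the textbook case of overlapping subproblems; caching each computed value avoids the repeated work of plain recursion (this sketch uses Python's `functools.lru_cache` for the cache):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Each subproblem fib(k) is computed once and then reused."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)
```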
10. Randomized Algorithm:
In a randomized algorithm, we use a random number to make a quick decision during the
computation. The random choice determines the expected behaviour of the algorithm.
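Quicksort with a random pivot is a common example (sketched here for illustration): because the pivot is chosen at random, the expected running time is O(n log n) regardless of the input order, even though the final sorted output is always the same:

```python
import random

def quicksort(data):
    """Quicksort with a randomly chosen pivot."""
    if len(data) <= 1:
        return data
    pivot = random.choice(data)          # the random decision
    less = [x for x in data if x < pivot]
    equal = [x for x in data if x == pivot]
    greater = [x for x in data if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```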
Advantages of Algorithms:
• It is easy to understand.
• An algorithm is a step-wise representation of a solution to a given problem.
• In an Algorithm the problem is broken down into smaller pieces or steps hence, it is
easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms:
• Writing an algorithm can be quite time-consuming.
• Understanding complex logic through algorithms can be very difficult.
• Branching and looping statements are difficult to show in algorithms.
Fundamentals of algorithmic problem solving
• For the assignment operation the left arrow “←” is used, for comments two slashes “//”, and
if conditions, for loops, and while loops are written much as in a programming language.
ALGORITHM Sum (a, b)
//Problem Description: This algorithm performs addition of two numbers
//Input: Two integers a and b
//Output: Addition of two integers
c←a+b
return c
This specification can be implemented in any programming language.
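For instance, the Sum pseudocode above translates directly into Python (any other language would do equally well, which is the point of language independence):

```python
def algorithm_sum(a, b):
    """Direct translation of the pseudocode: c <- a + b; return c."""
    c = a + b
    return c
```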
Flowchart
• In the earlier days of computing, the dominant method for specifying algorithms was the
flowchart; this representation technique has since proved inconvenient for all but very simple
algorithms.
• A flowchart is a graphical representation of an algorithm. It is a method of expressing an
algorithm by a collection of connected geometric shapes containing descriptions of the
algorithm’s steps.
(iv) Proving an Algorithm’s Correctness
• Once an algorithm has been specified then its correctness must be proved.
• An algorithm must yield a required result for every legitimate input in a finite amount of time.
• For Example, the correctness of Euclid’s algorithm for computing the greatest common
divisor stems from the correctness of the equality gcd(m, n) = gcd(n, m mod n).
• A common technique for proving correctness is to use mathematical induction because an
algorithm’s iterations provide a natural sequence of steps needed for such proofs.
• The notion of correctness for approximation algorithms is less straightforward than it is for
exact algorithms. The error produced by the algorithm should not exceed a predefined limit.
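Euclid's algorithm itself is short enough to state in full; its correctness rests on the equality gcd(m, n) = gcd(n, m mod n) cited above, which is proved by induction:

```python
def gcd(m, n):
    """Euclid's algorithm: repeatedly replace (m, n) by (n, m mod n)."""
    while n != 0:
        m, n = n, m % n
    return m
```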
(v) Analyzing an Algorithm
The most important property of an algorithm is its efficiency. In fact, there are two kinds of
algorithm efficiency:
• Time efficiency, indicating how fast the algorithm runs, and
• Space efficiency, indicating how much extra memory it uses.
The efficiency of an algorithm is determined by measuring both time efficiency and space
efficiency. So the factors to analyze in an algorithm are:
1. Time efficiency of an algorithm
2. Space efficiency of an algorithm
3. Simplicity of an algorithm
4. Generality of an algorithm
(vi) Coding an Algorithm
• The coding / implementation of an algorithm is done by a suitable programming language
like C, C++,JAVA.
• The transition from an algorithm to a program can be done either incorrectly or very
inefficiently.
• Implementing an algorithm correctly is necessary. The algorithm’s power should not be
reduced by an inefficient implementation.
• Standard tricks like computing a loop’s invariant (an expression that does not change its
value) outside the loop, collecting common subexpressions, replacing expensive operations
by cheap ones, selection of programming language and so on should be known to the
programmer.
• It is essential to write optimized (efficient) code to reduce the burden on the
compiler.
Fundamentals of the analysis of algorithm efficiency
Analysis Framework
Time Complexity Analysis:
• Analyze how the running time of the algorithm scales with the input size "n."
• Identify and count the basic operations (e.g., comparisons, assignments) executed by the
algorithm.
• Use Big O notation to express the worst-case time complexity as a function of "n."
• Indicates how fast an algorithm runs.
There are situations, of course, where the choice of a parameter indicating an input size does matter.
Example: for computing the product of two n-by-n matrices, the input size can be measured either
by the matrix order n or by the total number of elements n². Since there is a simple formula relating
these two measures, we can easily switch from one to the other, but the answer about an algorithm's
efficiency will be qualitatively different depending on which of the two measures we use.
The choice of an appropriate size metric can be influenced by operations of the algorithm in question.
For example, how should we measure an input's size for a spell-checking algorithm? If the algorithm
examines individual characters of its input, then we should measure the size by the number of
characters; if it works by processing words, we should count their number in the input.
We should make a special note about measuring the size of inputs for algorithms involving properties
of numbers (e.g., checking whether a given integer n is prime).
For such algorithms, computer scientists prefer measuring size by the number b of bits in n's
binary representation:
b = ⌊log₂ n⌋ + 1
This metric usually gives a better idea about efficiency of algorithms in question.
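The formula can be checked directly (the function name below is my own; Python's built-in `int.bit_length()` computes the same quantity exactly):

```python
import math

def input_size_in_bits(n):
    """Number of bits b = floor(log2(n)) + 1 in n's binary representation, n >= 1."""
    return math.floor(math.log2(n)) + 1
```

For example, 8 is 1000 in binary, so its size is 4 bits; for very large n, `n.bit_length()` avoids floating-point rounding.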
The analysis framework helps algorithm designers and computer scientists make informed decisions
about algorithm selection, optimization, and suitability for different applications. It provides a
structured approach to understanding and evaluating the efficiency of algorithms, which is essential
for solving complex computational problems effectively.
Orders of growth
• In the design and analysis of algorithms, the concept of "order of growth" refers to the
classification of an algorithm's time complexity or space complexity in terms of its input size.
• It helps us understand how the performance of an algorithm scales as the size of the input
data increases.
• The most common notations used to describe the order of growth are Big O notation, Omega
notation, and Theta notation. Here's a brief explanation of each:
Big O Notation (O-notation):
• Big O notation provides an upper bound on the growth rate of an algorithm's resource usage.
It represents the worst-case scenario.
• O(f(n)) describes an upper bound on the time or space complexity of an algorithm in terms of
a mathematical function f(n).
• For example, if an algorithm has a time complexity of O(n^2), it means that its running time
grows at most quadratically with the input size.
Omega Notation (Ω-notation):
• Omega notation provides a lower bound on the growth rate of an algorithm's resource usage.
It represents the best-case scenario.
• Ω(f(n)) describes a lower bound on the time or space complexity of an algorithm in terms of
a mathematical function f(n).
• For example, if an algorithm has a time complexity of Ω(n), it means that its running time
grows linearly with the input size in the best-case scenario.
Theta Notation (Θ-notation):
• Theta notation provides both upper and lower bounds on the growth rate of an algorithm's
resource usage, indicating a tight bound on its complexity.
• Θ(f(n)) describes an algorithm whose time or space complexity matches the function f(n)
asymptotically, meaning it neither grows faster nor slower.
• For example, if an algorithm has a time complexity of Θ(n), it means that its running time is
linear with respect to the input size.
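These growth rates can be made tangible by counting basic operations directly (an illustrative sketch): a single loop performs n steps, while two nested loops perform n² steps, so doubling the input quadruples the work in the second case:

```python
def steps_linear(n):
    """One loop over the input: n basic operations, i.e. Θ(n)."""
    steps = 0
    for _ in range(n):
        steps += 1
    return steps

def steps_quadratic(n):
    """Two nested loops over the input: n * n basic operations, i.e. Θ(n^2)."""
    steps = 0
    for _ in range(n):
        for _ in range(n):
            steps += 1
    return steps
```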
These notations are crucial for comparing and analyzing different algorithms and determining their
efficiency in solving specific problems. The choice of algorithm depends on the problem
requirements and the size of the input data, with the goal of selecting an algorithm with the most
favourable order of growth for the given problem.
Best, worst, and average case analyses are methods used to analyze the time complexity of
algorithms. They help us understand how an algorithm's performance varies under different input
scenarios.
Best Case Analysis:
• The best-case time complexity represents the minimum amount of time an algorithm will take
for a given input.
• It assumes that the input is in the most favorable condition for the algorithm.
• In best-case analysis, you are essentially trying to find the lower bound on the algorithm's
time complexity.
• Best-case time complexity is not always a realistic measure because it often assumes ideal
conditions, and real-world inputs may not always match these conditions.
• Example: in linear search, the best case occurs when the target is the very first element of
the list, so only one comparison is made: Ω(1).
Worst Case Analysis:
• The worst-case time complexity represents the maximum amount of time an algorithm will
take for any given input.
• It assumes that the input is in the most unfavourable condition for the algorithm.
• Worst-case analysis is generally considered a more practical measure since it helps ensure
that an algorithm won't perform poorly under any input conditions.
• The worst-case time complexity is often used in critical applications where predictable
performance is crucial.
• Example: in linear search, the worst case occurs when the target is the last element or is
absent, so all n elements must be compared: O(n).
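The best-case and worst-case behaviour of linear search can be observed directly by counting comparisons (function name and data are illustrative):

```python
def linear_search_steps(items, key):
    """Return the number of comparisons performed before the search stops."""
    steps = 0
    for v in items:
        steps += 1
        if v == key:
            break
    return steps

data = [10, 20, 30, 40, 50]
best = linear_search_steps(data, 10)   # target at the front: one comparison
worst = linear_search_steps(data, 99)  # target absent: all n comparisons
```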