Definition of Algorithm
The word Algorithm means "a set of finite rules or instructions to be followed in calculations or other problem-solving operations", or "a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations".
Therefore, an algorithm refers to a sequence of finite steps to solve a particular problem.
Uses of Algorithms:
Algorithms play a crucial role in various fields and have many applications. Some
of the key areas where algorithms are used include:
1. Computer Science: Algorithms form the basis of computer programming and
are used to solve problems ranging from simple sorting and searching to
complex tasks such as artificial intelligence and machine learning.
2. Mathematics: Algorithms are used to solve mathematical problems, such as
finding the optimal solution to a system of linear equations or finding the
shortest path in a graph.
3. Operations Research: Algorithms are used to optimize and make decisions in
fields such as transportation, logistics, and resource allocation.
4. Artificial Intelligence: Algorithms are the foundation of artificial intelligence
and machine learning, and are used to develop intelligent systems that can
perform tasks such as image recognition, natural language processing, and
decision-making.
5. Data Science: Algorithms are used to analyze, process, and extract insights
from large amounts of data in fields such as marketing, finance, and healthcare.
These are just a few examples of the many applications of algorithms. The use of
algorithms is continually expanding as new technologies and fields emerge,
making it a vital component of modern society.
Algorithms can be simple or complex depending on what you want to achieve.
It can be understood by taking the example of cooking a new recipe. To cook a new recipe, one reads the instructions and steps and executes them one by one, in the given sequence. The result is that the new dish is cooked perfectly.
Every time you use your phone, computer, laptop, or calculator you are using
Algorithms. Similarly, algorithms help to do a task in programming to get the
expected output.
Algorithms are designed to be language-independent, i.e. they are just plain instructions that can be implemented in any language, and the output will be the same, as expected.
What is the need for algorithms?
1. Algorithms are necessary for solving complex problems efficiently and
effectively.
2. They help to automate processes and make them more reliable, faster, and
easier to perform.
3. Algorithms also enable computers to perform tasks that would be difficult or
impossible for humans to do manually.
4. They are used in various fields such as mathematics, computer science,
engineering, finance, and many others to optimize processes, analyze data,
make predictions, and provide solutions to problems.
What are the Characteristics of an Algorithm?
Just as one would not follow arbitrary written instructions to cook a recipe, but only a standard one, not all written instructions for programming constitute an algorithm. For a set of instructions to be an algorithm, it must have the following characteristics:
Clear and Unambiguous: The algorithm should be unambiguous. Each of its
steps should be clear in all aspects and must lead to only one meaning.
Well-Defined Inputs: If an algorithm takes inputs, they should be well-defined. An algorithm may or may not take input.
Well-Defined Outputs: The algorithm must clearly define what output will be
yielded and it should be well-defined as well. It should produce at least 1 output.
Finite-ness: The algorithm must be finite, i.e. it should terminate after a finite
time.
Feasible: The algorithm must be simple, generic, and practical, such that it can be executed with the available resources. It must not depend on some future technology.
Language Independent: The Algorithm designed must be language-
independent, i.e. it must be just plain instructions that can be implemented in
any language, and yet the output will be the same, as expected.
Input: An algorithm has zero or more inputs. Every instruction that contains a fundamental operator must accept zero or more inputs.
Output: An algorithm produces at least one output. Every instruction that contains a fundamental operator must produce at least one output.
Definiteness: All instructions in an algorithm must be unambiguous, precise,
and easy to interpret. By referring to any of the instructions in an algorithm one
can clearly understand what is to be done. Every fundamental operator in
instruction must be defined without any ambiguity.
Finiteness: An algorithm must terminate after a finite number of steps in all test
cases. Every instruction which contains a fundamental operator must be
terminated within a finite amount of time. Infinite loops or recursive functions
without base conditions do not possess finiteness.
Effectiveness: An algorithm must be developed by using very basic, simple,
and feasible operations so that one can trace it out by using just paper and
pencil.
Properties of Algorithm:
It should terminate after a finite time.
It should produce at least one output.
It should take zero or more input.
It should be deterministic, i.e. it gives the same output for the same input.
Every step in the algorithm must be effective, i.e. every step should do some work.
Types of Algorithms:
There are several types of algorithms available. Some important algorithms are:
1. Brute Force Algorithm:
It is the simplest approach to a problem. A brute force algorithm is the first approach that comes to mind when we see a problem: simply try all possibilities directly.
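For instance, a brute force way to check whether any pair in an array sums to a given target is to try every pair. The sketch below is only an illustration; the function name and values are not from the original article.

#include <iostream>
#include <vector>
using namespace std;

// Brute force: examine every pair (i, j) and check whether it sums to target.
// All possibilities are tried directly, so the running time is O(n^2).
bool hasPairWithSum(const vector<int>& arr, int target) {
    for (int i = 0; i < (int)arr.size(); i++) {
        for (int j = i + 1; j < (int)arr.size(); j++) {
            if (arr[i] + arr[j] == target)
                return true;
        }
    }
    return false;
}

int main() {
    vector<int> arr = {3, 8, 1, 5};
    cout << hasPairWithSum(arr, 9);  // prints 1, since 8 + 1 = 9
    return 0;
}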
2. Recursive Algorithm:
A recursive algorithm is based on recursion. In this case, a problem is broken into several sub-parts and the same function is called again and again on them.
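For example, computing a factorial is a classic recursive algorithm: factorial(n) is expressed in terms of the smaller problem factorial(n - 1). A minimal sketch (the function name is illustrative):

#include <iostream>
using namespace std;

// Recursive factorial: the base condition (n <= 1) stops the recursion.
long long factorial(int n) {
    if (n <= 1)
        return 1;                 // base case
    return n * factorial(n - 1);  // the function calls itself on a smaller input
}

int main() {
    cout << factorial(5);  // prints 120
    return 0;
}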
3. Backtracking Algorithm:
The backtracking algorithm builds the solution by searching among all possible solutions. Using this algorithm, we keep building the solution according to the given criteria. Whenever a partial solution fails, we trace back to the failure point, build the next candidate solution, and continue this process until we find a solution or all possible solutions have been examined.
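As an illustration, the sketch below uses backtracking to decide whether some subset of an array adds up to a target: at each element we try including it, and if that branch fails we trace back and try excluding it instead. The names and values are illustrative, not from the original article.

#include <iostream>
#include <vector>
using namespace std;

// Backtracking: try including arr[i]; if that choice cannot lead to a
// solution, undo it (backtrack) and try excluding arr[i] instead.
bool subsetSum(const vector<int>& arr, int i, int remaining) {
    if (remaining == 0) return true;                          // found a valid subset
    if (i == (int)arr.size() || remaining < 0) return false;  // dead end, trace back
    return subsetSum(arr, i + 1, remaining - arr[i])          // choice 1: include arr[i]
        || subsetSum(arr, i + 1, remaining);                  // choice 2: exclude arr[i]
}

int main() {
    vector<int> arr = {3, 34, 4, 12, 5, 2};
    cout << subsetSum(arr, 0, 9);  // prints 1, since 4 + 5 = 9
    return 0;
}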
4. Searching Algorithm:
Searching algorithms are the ones that are used for searching elements or groups
of elements from a particular data structure. They can be of different types based
on their approach or the data structure in which the element should be found.
5. Sorting Algorithm:
Sorting is arranging a group of data in a particular manner according to the
requirement. The algorithms which help in performing this function are called
sorting algorithms. Generally sorting algorithms are used to sort groups of data in
an increasing or decreasing manner.
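As a simple illustration, bubble sort arranges an array in increasing order by repeatedly swapping adjacent out-of-order elements. This is just a sketch of one basic sorting algorithm:

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Bubble sort: in each pass, swap adjacent elements that are out of order,
// so the largest remaining element moves to the end of the unsorted part.
void bubbleSort(vector<int>& arr) {
    for (int i = 0; i < (int)arr.size() - 1; i++) {
        for (int j = 0; j + 1 < (int)arr.size() - i; j++) {
            if (arr[j] > arr[j + 1])
                swap(arr[j], arr[j + 1]);
        }
    }
}

int main() {
    vector<int> arr = {5, 1, 4, 2};
    bubbleSort(arr);
    for (int v : arr) cout << v << " ";  // prints 1 2 4 5
    return 0;
}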
6. Hashing Algorithm:
Hashing algorithms work similarly to searching algorithms, but they use an index with a key ID. In hashing, a key is assigned to specific data.
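In C++, a hash table is available as std::unordered_map, where each key is hashed to locate its data. A minimal sketch of assigning keys to specific data (the keys and values are illustrative):

#include <iostream>
#include <string>
#include <unordered_map>
using namespace std;

int main() {
    // Hashing: each key is mapped to its data through a hash function.
    unordered_map<string, int> age;
    age["alice"] = 30;   // the key "alice" is assigned to the value 30
    age["bob"] = 25;

    // Looking up a value by its key takes expected O(1) time on average.
    cout << age["alice"];  // prints 30
    return 0;
}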
7. Divide and Conquer Algorithm:
This algorithm breaks a problem into sub-problems, solves each sub-problem, and merges the solutions to get the final solution. It consists of the following three steps:
Divide
Solve
Combine
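For example, the maximum of an array can be found by dividing the array into two halves, solving each half recursively, and combining the two answers. This is only a sketch with illustrative names:

#include <iostream>
#include <vector>
#include <algorithm>
using namespace std;

// Divide and conquer: split the range in half, solve each half,
// then combine the two results with max().
int findMax(const vector<int>& arr, int lo, int hi) {
    if (lo == hi)
        return arr[lo];                       // a single element: nothing to divide
    int mid = (lo + hi) / 2;                  // Divide
    int leftMax = findMax(arr, lo, mid);      // Solve the left half
    int rightMax = findMax(arr, mid + 1, hi); // Solve the right half
    return max(leftMax, rightMax);            // Combine
}

int main() {
    vector<int> arr = {7, 2, 9, 4};
    cout << findMax(arr, 0, (int)arr.size() - 1);  // prints 9
    return 0;
}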
8. Greedy Algorithm:
In this type of algorithm, the solution is built part by part. The solution for the next
part is built based on the immediate benefit of the next part. The one solution that
gives the most benefit will be chosen as the solution for the next part.
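For instance, making change with the fewest coins from the denominations {25, 10, 5, 1} can be done greedily: at each step take the largest coin that still fits, i.e. the choice with the most immediate benefit. This is only a sketch; the greedy choice happens to be optimal for this coin system but not for every one.

#include <iostream>
#include <vector>
using namespace std;

// Greedy: at every step, take the largest coin that does not exceed
// the remaining amount (the locally best choice).
int minCoins(int amount, const vector<int>& coins) {
    int count = 0;
    for (int c : coins) {            // coins are assumed sorted in decreasing order
        while (amount >= c) {
            amount -= c;
            count++;
        }
    }
    return count;
}

int main() {
    vector<int> coins = {25, 10, 5, 1};
    cout << minCoins(63, coins);  // prints 6 (25 + 25 + 10 + 1 + 1 + 1)
    return 0;
}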
9. Dynamic Programming Algorithm:
This algorithm uses the concept of using the already found solution to avoid
repetitive calculation of the same part of the problem. It divides the problem into
smaller overlapping subproblems and solves them.
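A classic illustration is computing Fibonacci numbers with memoization: the overlapping subproblems fib(n - 1) and fib(n - 2) are computed once, stored, and reused instead of being recalculated.

#include <iostream>
#include <vector>
using namespace std;

// Dynamic programming (memoization): store each already-found answer in
// memo so the same subproblem is never computed twice.
long long fib(int n, vector<long long>& memo) {
    if (n <= 1) return n;
    if (memo[n] != -1) return memo[n];              // reuse the stored solution
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);  // solve each overlapping subproblem once
    return memo[n];
}

int main() {
    int n = 40;
    vector<long long> memo(n + 1, -1);
    cout << fib(n, memo);  // prints 102334155
    return 0;
}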
10. Randomized Algorithm:
A randomized algorithm uses a random number at some step of its logic; the randomness helps in achieving a good expected outcome.
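A common example is picking a random pivot for quicksort-style partitioning, which gives a good expected running time regardless of how the input is ordered. The sketch below shows only the random pivot choice, not a full sort:

#include <iostream>
#include <vector>
#include <cstdlib>
#include <ctime>
using namespace std;

// Randomized choice: pick a pivot uniformly at random from the array.
// In randomized quicksort, this makes the expected running time
// O(n log n) no matter how the input is arranged.
int randomPivot(const vector<int>& arr) {
    int idx = rand() % arr.size();   // random index in [0, n - 1]
    return arr[idx];
}

int main() {
    srand((unsigned)time(nullptr));  // seed the random number generator
    vector<int> arr = {9, 3, 7, 1, 5};
    cout << "Chosen pivot: " << randomPivot(arr) << endl;
    return 0;
}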
To learn more about the types of algorithms refer to the article about “Types of
Algorithms“.
Advantages of Algorithms:
It is easy to understand.
An algorithm is a step-wise representation of a solution to a given problem.
In an Algorithm the problem is broken down into smaller pieces or steps hence,
it is easier for the programmer to convert it into an actual program.
Disadvantages of Algorithms :
Writing an algorithm takes a long time, so it is time-consuming.
Understanding complex logic through algorithms can be very difficult.
Branching and looping statements are difficult to show in algorithms.
How to Design an Algorithm?
To write an algorithm, the following things are needed as a pre-requisite:
1. The problem that is to be solved by this algorithm i.e. clear problem definition.
2. The constraints of the problem must be considered while solving the problem.
3. The input to be taken to solve the problem.
4. The output to be expected when the problem is solved.
5. The solution to this problem within the given constraints.
Then the algorithm is written with the help of the above parameters such that it
solves the problem.
Example: Consider the example to add three numbers and print the sum.
Step 1: Fulfilling the pre-requisites
As discussed above, to write an algorithm, its prerequisites must be fulfilled.
1. The problem that is to be solved by this algorithm: Add 3 numbers and print
their sum.
2. The constraints of the problem that must be considered while solving the
problem: The numbers must contain only digits and no other characters.
3. The input to be taken to solve the problem: The three numbers to be added.
4. The output to be expected when the problem is solved: The sum of the
three numbers taken as the input i.e. a single integer value.
5. The solution to this problem, in the given constraints: The solution consists
of adding the 3 numbers. It can be done with the help of the ‘+’ operator, or bit-
wise, or any other method.
Step 2: Designing the algorithm
Now let’s design the algorithm with the help of the above pre-requisites:
Algorithm to add 3 numbers and print their sum:
1. START
2. Declare 3 integer variables num1, num2, and num3.
3. Take the three numbers, to be added, as inputs in variables num1, num2, and
num3 respectively.
4. Declare an integer variable sum to store the resultant sum of the 3 numbers.
5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum
7. END
Step 3: Testing the algorithm by implementing it.
Here is the step-by-step algorithm of the code:
1. Declare three variables num1, num2, and num3 to store the three numbers to
be added.
2. Declare a variable sum to store the sum of the three numbers.
3. Use the cout statement to prompt the user to enter the first number.
4. Use the cin statement to read the first number and store it in num1.
5. Use the cout statement to prompt the user to enter the second number.
6. Use the cin statement to read the second number and store it in num2.
7. Use the cout statement to prompt the user to enter the third number.
8. Use the cin statement to read and store the third number in num3.
9. Calculate the sum of the three numbers using the + operator and store it in the
sum variable.
10. Use the cout statement to print the sum of the three numbers.
11. The main function returns 0, which indicates the successful execution of the program.
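The original implementation is not reproduced here, but a minimal C++ sketch following the steps above could look like this (prompt texts and variable names are illustrative):

#include <iostream>
using namespace std;

int main() {
    // Declare three variables to store the numbers to be added.
    int num1, num2, num3;

    // Prompt for and read each number.
    cout << "Enter the first number: ";
    cin >> num1;
    cout << "Enter the second number: ";
    cin >> num2;
    cout << "Enter the third number: ";
    cin >> num3;

    // Add the three numbers and store the result in sum.
    int sum = num1 + num2 + num3;

    // Print the value of sum.
    cout << "Sum = " << sum << endl;

    // Returning 0 indicates successful execution of the program.
    return 0;
}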
Time complexity: O(1)
Auxiliary Space: O(1)
One problem, many solutions: A given problem may have one or more solutions. This means that while implementing the algorithm, there can be more than one method to implement it. For example, in the above problem of adding 3 numbers, the sum can be calculated in many ways:
the + operator
bit-wise operators
etc.
How to analyze an Algorithm?
For a standard algorithm to be good, it must be efficient. Hence the efficiency of an algorithm must be checked and maintained. This can be done at two stages:
1. Priori Analysis:
“Priori” means “before”. Hence Priori analysis means checking the algorithm before
its implementation. In this, the algorithm is checked when it is written in the form of
theoretical steps. The efficiency of the algorithm is measured by assuming that all other factors, for example processor speed, are constant and have no effect on the implementation. This is usually done by the algorithm designer. This analysis is
independent of the type of hardware and language of the compiler. It gives the
approximate answers for the complexity of the program.
2. Posterior Analysis:
“Posterior” means “after”. Hence Posterior analysis means checking the algorithm
after its implementation. In this, the algorithm is checked by implementing it in any
programming language and executing it. This analysis helps to get the actual and
real analysis report about correctness(for every possible input/s if it shows/returns
correct output or not), space required, time consumed, etc. That is, it is dependent
on the language of the compiler and the type of hardware used.
What is Algorithm complexity and how to find it?
An algorithm is defined as complex based on the amount of Space and Time it
consumes. Hence the Complexity of an algorithm refers to the measure of the time
that it will need to execute and get the expected output, and the Space it will need
to store all the data (input, temporary data, and output). Hence these two factors
define the efficiency of an algorithm.
The two factors of Algorithm Complexity are:
Time Factor: Time is measured by counting the number of key operations such
as comparisons in the sorting algorithm.
Space Factor: Space is measured by counting the maximum memory space
required by the algorithm to run/execute.
Therefore the complexity of an algorithm can be divided into two types :
1. Space Complexity: The space complexity of an algorithm refers to the amount
of memory required by the algorithm to store the variables and get the result. This
can be for inputs, temporary operations, or outputs.
How to calculate Space Complexity?
The space complexity of an algorithm is calculated by determining the following 2
components:
Fixed Part: This refers to the space that is required by the algorithm. For
example, input variables, output variables, program size, etc.
Variable Part: This refers to the space that can be different based on the
implementation of the algorithm. For example, temporary variables, dynamic
memory allocation, recursion stack space, etc.
Therefore, the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on the instance characteristic I.
Example: Consider the below algorithm for Linear Search
Step 1: START
Step 2: Get n elements of the array in arr and the number to be searched in x
Step 3: Start from the leftmost element of arr[] and one by one compare x with
each element of arr[]
Step 4: If x matches with an element, Print True.
Step 5: If x doesn’t match with any of the elements, Print False.
Step 6: END
Here, there are 2 variables, arr[] and x, where arr[] is the variable part of n elements and x is the fixed part. Hence S(P) = 1 + n. So, the space complexity depends on n (the number of elements). The actual space also depends on the data types of the given variables and constants, and is multiplied accordingly.
2. Time Complexity: The time complexity of an algorithm refers to the amount of
time required by the algorithm to execute and get the result. This can be for normal
operations, conditional if-else statements, loop statements, etc.
How to Calculate Time Complexity?
The time complexity of an algorithm is also calculated by determining the following
2 components:
Constant time part: Any instruction that is executed just once comes in this
part. For example, input, output, if-else, switch, arithmetic operations, etc.
Variable Time Part: Any instruction that is executed more than once, say n
times, comes in this part. For example, loops, recursion, etc.
Therefore, the time complexity T(P) of any algorithm P is T(P) = C + TP(I), where C is the constant time part and TP(I) is the variable part of the algorithm, which depends on the instance characteristic I.
Example: In the algorithm of Linear Search above, the time complexity is
calculated as follows:
Step 1: –Constant Time
Step 2: — Variable Time (Taking n inputs)
Step 3: –Variable Time (Till the length of the Array (n) or the index of the found
element)
Step 4: –Constant Time
Step 5: –Constant Time
Step 6: –Constant Time
Hence, T(P) = 1 + n + n(1 + 1) + 1 = 2 + 3n, which can be said as T(n).
How to express an Algorithm?
1. Natural Language:- Here we express the algorithm in natural English. It is often hard to understand the algorithm precisely from it.
2. Flowchart:- Here we express the Algorithm by making a graphical/pictorial
representation of it. It is easier to understand than Natural Language.
3. Pseudo Code:- Here we express the Algorithm in the form of annotations and
informative text written in plain English which is very much similar to the real
code but as it has no syntax like any of the programming languages, it can’t be
compiled or interpreted by the computer. It is the best way to express an
algorithm because it can be understood by even a layman with some school-
level knowledge.
Why is the Performance of Algorithms Important?
There are many important things that should be taken care of, like user-friendliness, modularity, security, maintainability, etc. Why worry about performance? The answer is simple: we can have all the above things only if we have performance. So performance is like a currency with which we can buy all the above things. Another reason for studying performance is that speed is fun! To summarize, performance == scale. Imagine a text editor that can load 1000 pages but can spell-check only 1 page per minute, or an image editor that takes 1 hour to rotate your image 90 degrees left, or ... you get it. If a software feature cannot cope with the scale of tasks users need to perform, it is as good as dead.
Algorithm analysis is an important part of computational complexity theory, which
provides theoretical estimation for the required resources of an algorithm to solve a
specific computational problem. Analysis of algorithms is the determination of the
amount of time and space resources required to execute it.
Why Analysis of Algorithms is important?
To predict the behavior of an algorithm for large inputs (Scalable Software).
It is much more convenient to have simple measures for the efficiency of an
algorithm than to implement the algorithm and test the efficiency every time a
certain parameter in the underlying computer system changes.
More importantly, by analyzing different algorithms, we can compare them to
determine the best one for our purpose.
Asymptotic Analysis
Given two algorithms for a task, how do we find out which one is
better?
One naive way of doing this is to implement both the algorithms, run the two programs on your computer for different inputs, and see which one takes less time. There are many problems with this approach for the
analysis of algorithms.
It might be possible that for some inputs, the first algorithm performs
better than the second. And for some inputs second performs better.
It might also be possible that for some inputs, the first algorithm
performs better on one machine, and the second works better on
another machine for some other inputs.
Asymptotic Analysis is the big idea that handles the above issues in
analyzing algorithms. In Asymptotic Analysis, we evaluate the
performance of an algorithm in terms of input size (we don’t measure the
actual running time). We calculate the order of growth of the time taken (or space) by an algorithm in terms of input size. For example, Linear Search grows linearly and Binary Search grows logarithmically in terms of input size.
For example, let us consider the search problem (searching a given
item) in a sorted array.
The solutions to the above search problem include:
Linear Search (order of growth is linear)
Binary Search (order of growth is logarithmic); a sketch of binary search is shown below.
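For reference, a binary search on a sorted array might look like the following sketch: each comparison halves the remaining range, which is why its order of growth is logarithmic (the function name is illustrative).

#include <iostream>
#include <vector>
using namespace std;

// Binary search on a sorted array: compare x with the middle element and
// discard the half that cannot contain x. Returns the index of x, or -1.
int binarySearch(const vector<int>& arr, int x) {
    int lo = 0, hi = (int)arr.size() - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (arr[mid] == x) return mid;
        if (arr[mid] < x) lo = mid + 1;  // x can only be in the right half
        else hi = mid - 1;               // x can only be in the left half
    }
    return -1;
}

int main() {
    vector<int> arr = {1, 10, 15, 30};
    cout << binarySearch(arr, 15);  // prints 2
    return 0;
}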
To understand how Asymptotic Analysis solves the problems mentioned
above in analyzing algorithms,
let us say:
We run the Linear Search on a fast computer A, and
Binary Search on a slow computer B.
For small values of the input array size n, the fast computer may take less time.
But, after a certain value of input array size, the Binary Search will
definitely start taking less time compared to the Linear Search even
though the Binary Search is being run on a slow machine. Why? After a certain value, the machine-specific factors would not matter as the input size becomes large.
The reason is the order of growth of Binary Search with respect to input
size is logarithmic while the order of growth of Linear Search is linear.
So the machine-dependent constants can always be ignored
after a certain value of input size.
Let’s say the constant for machine A is 0.2 and the constant for B is
1000 which means that A is 5000 times more powerful than B.
Input Size | Running time on A | Running time on B
10         | 2 sec             | ~1 h
100        | 20 sec            | ~1.8 h
10^6       | ~55.5 h           | ~5.5 h
10^9       | ~6.3 years        | ~8.3 h
Running times for this example:
Linear Search running time in seconds on A: 0.2 * n
Binary Search running time in seconds on B: 1000 * log(n)
Does Asymptotic Analysis always work?
Asymptotic Analysis is not perfect, but that’s the best way available for
analyzing algorithms. For example, say there are two sorting algorithms
that take 1000nLogn and 2nLogn time respectively on a machine. Both of
these algorithms are asymptotically the same (order of growth is nLogn).
So, with Asymptotic Analysis, we can't judge which one is better, as we ignore constants in Asymptotic Analysis. For example,
asymptotically Heap Sort is better than Quick Sort, but Quick Sort takes
less time in practice.
Also, in Asymptotic analysis, we always talk about input sizes larger than
a constant value. It might be possible that those large inputs are never
given to your software and an asymptotically slower algorithm always
performs better for your particular situation. So, you may end up choosing an algorithm that is asymptotically slower but faster for your software.
Worst, Average and Best Case Analysis of
Algorithm
In the previous section, we discussed how asymptotic analysis overcomes the problems of the naive way of analyzing algorithms. Now let us learn about the worst, average, and best cases of an algorithm:
1. Worst Case Analysis (Mostly used)
In the worst-case analysis, we calculate the upper bound on the
running time of an algorithm. We must know the case that causes a
maximum number of operations to be executed.
For Linear Search, the worst case happens when the element to be
searched (x) is not present in the array. When x is not present, the
search() function compares it with all the elements of arr[] one by one.
This is the most commonly used analysis of algorithms (we will discuss why below). Most of the time we consider the case that causes the maximum number of operations.
2. Best Case Analysis (Very Rarely used)
In the best-case analysis, we calculate the lower bound on the running
time of an algorithm. We must know the case that causes a minimum
number of operations to be executed.
For linear search, the best case occurs when x is present at the first
location. The number of operations in the best case is constant (not
dependent on n). So the order of growth of time taken in terms of input
size is constant.
3. Average Case Analysis (Rarely used)
In average case analysis, we take all possible inputs and calculate the
computing time for all of the inputs. Sum all the calculated values and
divide the sum by the total number of inputs.
We must know (or predict) the distribution of cases. For the linear
search problem, let us assume that all cases are uniformly
distributed (including the case of x not being present in the array). So
we sum all the cases and divide the sum by (n+1). We take (n+1) to
consider the case when the element is not present.
Why is Worst Case Analysis Mostly Used?
Average Case: The average case analysis is not easy to do in most practical cases and it is rarely done. In the average case analysis, we need to consider every input, its frequency, and the time taken by it, which may not be possible in many scenarios.
Best Case: The best case analysis is considered bogus. Guaranteeing a lower bound on an algorithm doesn't provide much information, since in the worst case an algorithm may still take years to run.
Worst Case: This is easier than the average case and gives an upper bound, which is useful information for analyzing software products.
Interesting information about asymptotic notations:
A) For some algorithms, all the cases (worst, best, average) are
asymptotically the same. i.e., there are no worst and best cases.
Example: Merge Sort does order of n log(n) operations in all cases.
B) Whereas most of the other sorting algorithms have worst and best cases.
Example 1: In the typical implementation of Quick Sort (where pivot is
chosen as a corner element), the worst case occurs when the input array is already sorted, and the best case occurs when the pivot always divides the array into two halves.
Example 2: For insertion sort, the worst case occurs when the array is
reverse sorted and the best case occurs when the array is sorted in the
same order as output.
Examples with their complexity analysis:
1. Linear search algorithm:
#include <iostream>
#include <vector>
using namespace std;

// Linearly search for x in arr.
// If x is present, return the index;
// otherwise, return -1
int search(vector<int>& arr, int x) {
    for (int i = 0; i < (int)arr.size(); i++) {
        if (arr[i] == x)
            return i;
    }
    return -1;
}

int main() {
    vector<int> arr = {1, 10, 30, 15};
    int x = 30;
    cout << search(arr, x);
    return 0;
}
Output
2
Best Case: Constant Time irrespective of input size. This will take place if
the element to be searched is on the first index of the given list. So, the
number of comparisons, in this case, is 1.
Average Case: Linear time. This will take place if the element to be searched is around the middle index of the given list (the average position).
Worst Case: Linear time. The element to be searched is not present in the list, so x is compared with all n elements.
2. Special Array Sum: In this example, we take an array of length n and deal with the following cases:
If n is even, then the output will be 0.
If n is odd, then the output will be the sum of the elements of the array.
Below is the implementation of the given problem:
#include <iostream>
#include <vector>
using namespace std;

int getSum(const vector<int>& arr1) {
    int n = arr1.size();
    if (n % 2 == 0) // n is even
        return 0;

    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += arr1[i];
    }
    return sum; // n is odd
}

int main() {
    // Declaring two vectors, one with an even length
    // and the other with an odd length
    vector<int> arr1 = {1, 2, 3, 4};
    vector<int> arr2 = {1, 2, 3, 4, 5};
    cout << getSum(arr1) << endl;
    cout << getSum(arr2) << endl;
    return 0;
}
Output
0
15
Time Complexity Analysis:
Best Case: The order of growth will be constant because in the best case we are assuming that n is even.
Average Case: In this case, we assume that even and odd lengths are equally likely, therefore the order of growth will be linear.
Worst Case: The order of growth will be linear because in this case we are assuming that n is always odd.
Types of Asymptotic Notations in Complexity Analysis of Algorithms
We have discussed Asymptotic Analysis, and Worst, Average, and Best
Cases of Algorithms. The main idea of asymptotic analysis is to have a measure of the efficiency of algorithms that doesn't depend on machine-specific constants and doesn't require algorithms to be implemented and the time taken by programs to be compared. Asymptotic notations are
mathematical tools to represent the time complexity of algorithms for
asymptotic analysis.
Asymptotic Notations:
Asymptotic Notations are mathematical tools used to analyze the
performance of algorithms by understanding how their efficiency
changes as the input size grows.
These notations provide a concise way to express the behavior of an
algorithm’s time or space complexity as the input size approaches
infinity.
Rather than comparing algorithms directly, asymptotic analysis focuses
on understanding the relative growth rates of algorithms’ complexities.
It enables comparisons of algorithms’ efficiency by abstracting away
machine-specific constants and implementation details, focusing
instead on fundamental trends.
Asymptotic analysis allows for the comparison of algorithms’ space and
time complexities by examining their performance characteristics as
the input size varies.
By using asymptotic notations, such as Big O, Big Omega, and Big
Theta, we can categorize algorithms based on their worst-case, best-
case, or average-case time or space complexities, providing valuable
insights into their efficiency.
There are mainly three asymptotic notations:
1. Big-O Notation (O-notation)
2. Omega Notation (Ω-notation)
3. Theta Notation (Θ-notation)
1. Theta Notation (Θ-Notation):
Theta notation encloses the function from above and below. Since it
represents the upper and the lower bound of the running time of an
algorithm, it is used for analyzing the average-case complexity of an
algorithm.
Theta (Average Case): You add the running times for each possible input combination and take the average in the average case.
Let g and f be the function from the set of natural numbers to itself. The
function f is said to be Θ(g), if there are constants c1, c2 > 0 and a
natural number n0 such that c1* g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0
[Figure: Theta notation]
Mathematical Representation of Theta notation:
Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0
≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
Note: Θ(g) is a set
The above expression can be described as if f(n) is theta of g(n), then the
value f(n) is always between c1 * g(n) and c2 * g(n) for large values of n
(n ≥ n0). The definition of theta also requires that f(n) must be non-
negative for values of n greater than n0.
The execution time serves as both a lower and upper bound on
the algorithm’s time complexity.
It gives both the greatest and the least bound for a given input value.
A simple way to get the Theta notation of an expression is to drop low-
order terms and ignore leading constants. For example, consider the expression 3n^3 + 6n^2 + 6000 = Θ(n^3); dropping the lower-order terms is always fine because there will always be a value of n after which n^3 has higher values than n^2, irrespective of the constants involved. For a given function g(n), Θ(g(n)) denotes the following set of functions.
Examples :
{ 100 , log (2000) , 10^4 } belongs to Θ(1)
{ (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Θ(n)
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Θ(n^2)
Note: Θ provides exact bounds.
2. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an
algorithm. Therefore, it gives the worst-case complexity of an algorithm.
It is the most widely used notation for asymptotic analysis.
It specifies the upper bound of a function.
It gives the maximum time required by an algorithm, i.e. the worst-case time complexity.
It returns the highest possible output value (big-O) for a given input.
Big-O (Worst Case): It is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.
If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there
exist a positive constant C and n0 such that, 0 ≤ f(n) ≤ cg(n) for all n ≥
n0
The execution time serves as an upper bound on the algorithm’s
time complexity.
Mathematical Representation of Big-O Notation:
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n)
≤ cg(n) for all n ≥ n0 }
For example, Consider the case of Insertion Sort. It takes linear time in
the best case and quadratic time in the worst case. We can safely say
that the time complexity of Insertion Sort is O(n^2).
Note: O(n^2) also covers linear time.
If we use Θ notation to represent the time complexity of Insertion sort, we
have to use two statements for best and worst cases:
The worst-case time complexity of Insertion Sort is Θ(n^2).
The best case time complexity of Insertion Sort is Θ(n).
The Big-O notation is useful when we only have an upper bound on the
time complexity of an algorithm. Many times we easily find an upper
bound by simply looking at the algorithm.
Examples :
{ 100 , log (2000) , 10^4 } belongs to O(1)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to O(n)
U { (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to O(n^2)
Note: Here, U represents union; we can write it in this manner because O provides exact or upper bounds.
3. Omega Notation (Ω-Notation):
Omega notation represents the lower bound of the running time of an
algorithm. Thus, it provides the best case complexity of an algorithm.
The execution time serves as a lower bound on the algorithm’s
time complexity.
It is defined as the condition that allows an algorithm to complete
statement execution in the shortest amount of time.
Let g and f be the function from the set of natural numbers to itself. The
function f is said to be Ω(g), if there is a constant c > 0 and a natural
number n0 such that c*g(n) ≤ f(n) for all n ≥ n0
Mathematical Representation of Omega notation :
Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤
cg(n) ≤ f(n) for all n ≥ n0 }
Let us consider the same Insertion sort example here. The time
complexity of Insertion Sort can be written as Ω(n), but it is not very
useful information about insertion sort, as we are generally interested in
worst-case and sometimes in the average case.
Examples :
{ (n^2+n) , (2n^2) , (n^2+log(n)) } belongs to Ω(n^2)
U { (n/4) , (2n+3) , (n/100 + log(n)) } belongs to Ω(n)
U { 100 , log (2000) , 10^4 } belongs to Ω(1)
Note: Here, U represents union; we can write it in this manner because Ω provides exact or lower bounds.
Properties of Asymptotic Notations:
1. General Properties:
If f(n) is O(g(n)) then a*f(n) is also O(g(n)), where a is a constant.
Example:
f(n) = 2n²+5 is O(n²)
then, 7*f(n) = 7(2n²+5) = 14n²+35 is also O(n²).
Similarly, this property satisfies both Θ and Ω notation.
We can say,
If f(n) is Θ(g(n)) then a*f(n) is also Θ(g(n)), where a is a constant.
If f(n) is Ω (g(n)) then a*f(n) is also Ω (g(n)), where a is a constant.
2. Transitive Properties:
If f(n) is O(g(n)) and g(n) is O(h(n)) then f(n) = O(h(n)).
Example:
If f(n) = n, g(n) = n² and h(n)=n³
n is O(n²) and n² is O(n³) then, n is O(n³)
Similarly, this property satisfies both Θ and Ω notation.
We can say,
If f(n) is Θ(g(n)) and g(n) is Θ(h(n)) then f(n) = Θ(h(n)) .
If f(n) is Ω (g(n)) and g(n) is Ω (h(n)) then f(n) = Ω (h(n))
3. Reflexive Properties:
Reflexive properties are always easy to understand after transitive.
If f(n) is given, then f(n) is O(f(n)), since the maximum value of f(n) will be f(n) itself. Hence x = f(n) and y = O(f(n)) always tie themselves in a reflexive relation.
Example:
f(n) = n² ; O(n²) i.e O(f(n))
Similarly, this property satisfies both Θ and Ω notation.
We can say that,
If f(n) is given then f(n) is Θ(f(n)).
If f(n) is given then f(n) is Ω (f(n)).
4. Symmetric Properties:
If f(n) is Θ(g(n)) then g(n) is Θ(f(n)).
Example:
If f(n) = n² and g(n) = n²,
then f(n) = Θ(n²) and g(n) = Θ(n²).
This property holds only for Θ notation.
5. Transpose Symmetric Properties:
If f(n) is O(g(n)) then g(n) is Ω (f(n)).
Example:
If f(n) = n and g(n) = n²,
then n is O(n²) and n² is Ω(n).
This property holds only for O and Ω notations.
6. Some More Properties:
1. If f(n) = O(g(n)) and f(n) = Ω(g(n)) then f(n) = Θ(g(n))
2. If f(n) = O(g(n)) and d(n)=O(e(n)) then f(n) + d(n) =
O( max( g(n), e(n) ))
Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) + d(n) = n + n² i.e O(n²)
3. If f(n)=O(g(n)) and d(n)=O(e(n)) then f(n) * d(n) = O( g(n) * e(n))
Example:
f(n) = n i.e O(n)
d(n) = n² i.e O(n²)
then f(n) * d(n) = n * n² = n³ i.e O(n³)
Note: If f(n) = O(g(n)) then g(n) = Ω(f(n))