Chapter 9 - Complexity Analysis

Complexity Analysis is a technique to evaluate the time and space resources required by algorithms relative to input size, aiding in the comparison of different algorithms. Asymptotic Notations, including Big O, Big Omega, and Big Theta, are used to express the efficiency of algorithms in terms of their time and space complexities. The document also discusses various complexity classes such as P, NP, NP-hard, and NP-complete, outlining their characteristics and significance in algorithm analysis.


Complexity Analysis
CHAPTER 9

What is Complexity Analysis?

Complexity Analysis is defined as a technique to characterize the time taken by an algorithm with respect to input size, independent of the machine, language, and compiler. It is used to evaluate how execution time varies across different algorithms.

What is the need for Complexity Analysis?

• Complexity Analysis determines the amount of time and space resources required to execute an algorithm.
• It is used for comparing different algorithms on different input sizes.
• Complexity helps to determine the difficulty of a problem.
• That difficulty is often measured by how much time and space (memory) it takes to solve a particular problem.

Asymptotic Notations

What are Asymptotic Notations?

Asymptotic Notations are mathematical tools used to analyze the performance of algorithms by describing how their efficiency changes as the input size grows.

These notations provide a concise way to express the behavior of an algorithm's time or space complexity as the input size approaches infinity.

What are Asymptotic Notations?

Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.

Theta (Average Case): in the average case, you add the running times for each possible input combination and take their average.

What are Asymptotic Notations?

Asymptotic analysis allows for the comparison of algorithms' space and time complexities by examining their performance characteristics as the input size varies.

By using asymptotic notations, such as Big O, Big Omega, and Big Theta, we can categorize algorithms based on their worst-case, best-case, or average-case time or space complexities, providing valuable insights into their efficiency.

3 Asymptotic Notations

1. Big Theta Notation (Θ-Notation)

2. Big-O Notation (O-notation)

3. Big Omega Notation (Ω-Notation)



1. Big Theta Notation (Θ-Notation):

Theta notation encloses the function from above and below. Since it represents the upper and the lower bound of the running time of an algorithm, it is used for analyzing the average-case complexity of an algorithm.

1. Big Theta Notation (Θ-Notation):

Big Theta provides a tight bound on the growth rate, meaning it describes an algorithm's exact asymptotic behavior. It indicates that the function grows at the same rate as a given function f(n) in both the upper and lower bounds.
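
For reference, the standard formal definition of this tight bound (textbook notation, not shown on the original slide) can be written as:

\Theta(g(n)) = \{\, f(n) : \exists\, c_1 > 0,\ c_2 > 0,\ n_0 > 0 \text{ such that } 0 \le c_1\, g(n) \le f(n) \le c_2\, g(n) \text{ for all } n \ge n_0 \,\}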
Search . . .

1. Big Theta Notation (Θ-Notation):


Example 1: Linear Search
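
The linear search code itself does not survive in this text version of the slide; the following is a minimal Python sketch of the example (function and variable names are illustrative, not taken from the slides):

def linear_search(items, target):
    # Scan the list from left to right until the target is found.
    for index, value in enumerate(items):
        if value == target:   # best case: target is the first element, Theta(1)
            return index
    return -1                 # worst case: target is absent, Theta(n)

# Averaged over all positions of the target, about half the list is scanned,
# so the average case is also Theta(n).
print(linear_search([4, 2, 7, 1], 7))  # prints 2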

2. Big-O Notation (O-notation)

Big-O notation is a mathematical notation used to describe the efficiency of an algorithm, particularly its time and space complexity, as the size of the input (usually denoted as n) grows. It helps evaluate how an algorithm's runtime or memory usage scales with input size, offering insight into its performance.
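
As a reference point, the standard formal definition of this upper bound (textbook notation, not shown on the original slide) is:

O(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le f(n) \le c\, g(n) \text{ for all } n \ge n_0 \,\}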

2. Big-O Notation (O-notation):
Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm.

For example, consider the case of Insertion Sort. It takes linear time in the best case and quadratic time in the worst case. We can safely say that the time complexity of Insertion Sort is O(n^2).

Note: O(n^2) also covers linear time.
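
A minimal Python sketch of insertion sort, with the best- and worst-case behavior noted in the comments (this particular implementation is illustrative, not taken from the slides):

def insertion_sort(values):
    # Sort the list in place and return it.
    for i in range(1, len(values)):
        key = values[i]
        j = i - 1
        # Shift larger elements one position to the right.
        # Already-sorted input: this loop never runs, giving the O(n) best case.
        # Reverse-sorted input: every element shifts all the way, giving the O(n^2) worst case.
        while j >= 0 and values[j] > key:
            values[j + 1] = values[j]
            j -= 1
        values[j + 1] = key
    return values

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # prints [1, 2, 3, 4, 5, 6]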



Common Big-O Notations:


O(1): Constant time - the algorithm takes the same time
regardless of input size.
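
For instance (an illustrative Python sketch, not from the slides), reading one element of a list costs the same no matter how large the list is:

def first_element(items):
    return items[0]   # a single lookup, independent of len(items) -> O(1)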

Common Big-O Notations:


O(log n): Logarithmic time - grows slowly as input size
increases.
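
Binary search is the classic example; a minimal Python sketch, assuming the input list is already sorted (illustrative, not from the slides):

def binary_search(sorted_items, target):
    low, high = 0, len(sorted_items) - 1
    while low <= high:               # the search range is halved each pass -> O(log n)
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return -1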

Common Big-O Notations:


O(n): Linear time - directly proportional to input size.
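
For example (illustrative sketch, not from the slides), summing a list visits each element exactly once:

def total(values):
    result = 0
    for v in values:   # one pass over n elements -> O(n)
        result += v
    return result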

Common Big-O Notations:


O(n log n): Linearithmic time - grows faster than linear but
slower than quadratic.
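
Merge sort is a standard example; a compact Python sketch (illustrative, not from the slides): the list is halved about log n times, and each level of recursion does O(n) merging work.

def merge_sort(values):
    if len(values) <= 1:
        return values
    mid = len(values) // 2
    left = merge_sort(values[:mid])    # about log n levels of recursion
    right = merge_sort(values[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):   # O(n) merging work per level
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]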

Common Big-O Notations:


O(n^2): Quadratic time - the running time grows with the square of the input size.
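
Two nested loops over the same input are the typical pattern, as in this illustrative sketch (not from the slides):

def count_duplicate_pairs(values):
    count = 0
    for i in range(len(values)):              # n iterations
        for j in range(i + 1, len(values)):   # up to n iterations each -> O(n^2) overall
            if values[i] == values[j]:
                count += 1
    return count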

Common Big-O Notations:


O(2^n): Exponential time - grows very quickly, roughly doubling with each unit increase in input size.
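
The naive recursive Fibonacci is a common example (illustrative sketch, not from the slides): each call spawns two more calls, so the call tree roughly doubles with every increase in n.

def fib(n):
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)   # two recursive calls per step -> roughly O(2^n)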

3. Big Omega Notation (Ω-Notation):


Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.

It gives a guaranteed lower bound on the algorithm's execution time.

3. Big Omega Notation (Ω-Notation):

It describes the best case, the condition in which an algorithm completes its execution in the shortest possible amount of time.

Just as O-notation provides an asymptotic upper bound, Ω-notation provides an asymptotic lower bound.
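
For completeness, the standard formal definition of this lower bound (textbook notation, not shown on the original slide) is:

\Omega(g(n)) = \{\, f(n) : \exists\, c > 0,\ n_0 > 0 \text{ such that } 0 \le c\, g(n) \le f(n) \text{ for all } n \ge n_0 \,\}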

Summary

Memory Complexity

What is Memory Complexity:


Memory complexity, also known as space complexity, refers to the amount of memory an algorithm needs to execute relative to the size of its input.

Just like time complexity measures the time required by an algorithm, memory complexity focuses on the resources required in terms of storage or memory. Efficient memory usage is crucial, especially in applications with limited memory resources or for large datasets.

What is Memory Complexity:

Memory complexity measures the total memory an algorithm uses, which includes:

Input space: The space required to store input data.

Auxiliary space: Extra space or temporary storage that the algorithm needs during execution.

Units of Measurement:
Memory is typically measured in terms of the number of variables or the size of the data structures used (e.g., arrays, lists).

For large inputs, memory complexity is often expressed asymptotically, similar to time complexity, using Big O notation such as O(n), O(n^2), etc.

Examples of Memory Complexity:


Constant Space Complexity O(1): Algorithms that use a fixed amount of memory regardless of input size (e.g., simple iterative algorithms that use a few variables).

Linear Space Complexity O(n): Algorithms that require additional space proportional to the input size, such as storing an array of size n.

Quadratic Space Complexity O(n^2): Common in algorithms that use two-dimensional arrays (e.g., storing data in a matrix).
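
A short Python sketch contrasting the three cases (function names are illustrative, not from the slides):

def running_total(values):
    # O(1) auxiliary space: a single accumulator, regardless of input size.
    total = 0
    for v in values:
        total += v
    return total

def squares(values):
    # O(n) auxiliary space: the output list grows with the input.
    return [v * v for v in values]

def multiplication_table(n):
    # O(n^2) auxiliary space: an n x n matrix.
    return [[i * j for j in range(n)] for i in range(n)]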

Calculation of Memory Complexity:

➢ When analyzing an algorithm, consider the memory required by data structures, recursive calls, and any auxiliary variables.

➢ Memory complexity often excludes the input data size, focusing on the additional or auxiliary space an algorithm requires beyond its input.
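
Recursive calls are easy to overlook; each pending call occupies a stack frame, as in this illustrative sketch (not from the slides):

def recursive_sum(values, i=0):
    # Each pending recursive call holds a stack frame: O(n) auxiliary space.
    if i == len(values):
        return 0
    return values[i] + recursive_sum(values, i + 1)

def iterative_sum(values):
    # The loop reuses a single accumulator: O(1) auxiliary space.
    total = 0
    for v in values:
        total += v
    return total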

Trade-offs Between Time and Memory Complexity:

➢ Often, optimizing for memory may increase time complexity, and vice versa. For example, some dynamic programming algorithms use additional memory to store computed results (memoization), reducing time complexity at the expense of memory.

➢ When designing an algorithm, balancing time and memory usage depends on the problem constraints and the environment where the code will run.
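
A minimal memoization sketch in Python (illustrative, not from the slides): caching results spends O(n) extra memory to cut the naive exponential Fibonacci recursion down to linear time.

def fib_memo(n, cache=None):
    # The cache stores one entry per value of n: O(n) extra memory, O(n) time.
    if cache is None:
        cache = {}
    if n < 2:
        return n
    if n not in cache:
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]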

Why Memory Complexity Matters:


Efficient memory use is critical in:

➢ Embedded Systems: Where memory is limited.


➢ Big Data Applications: Where excessive memory use can lead to
slowdowns or even system failures.
➢ Mobile Applications: Where optimizing memory helps conserve
battery life and improve performance.

Worst, Average and Best Case Analysis of Algorithms

Popular Notations in Complexity Analysis of Algorithms

1. Big-O Notation

We define an algorithm's worst-case time complexity using Big-O notation, which describes the set of functions that grow slower than or at the same rate as the given expression. Furthermore, it captures the maximum amount of time an algorithm requires over all input values.

Popular Notations in Complexity Analysis of Algorithms

2. Omega Notation

Omega notation defines the best case of an algorithm's time complexity: it describes the set of functions that grow faster than or at the same rate as the given expression. Furthermore, it captures the minimum amount of time an algorithm requires over all input values.

Popular Notations in Complexity Analysis of Algorithms

3. Theta Notation

Theta notation defines the average case of an algorithm's time complexity: it is used when the set of functions lies in both O(expression) and Omega(expression). This is how we define the average-case time complexity of an algorithm.

Important points:

➢ The worst-case analysis of an algorithm provides an upper bound on the running time of the algorithm for any input size.

➢ The average-case analysis of an algorithm provides an estimate of the running time of the algorithm for a random input.

➢ The best-case analysis of an algorithm provides a lower bound on the running time of the algorithm for any input size.

Important points:

➢ The Big O notation is commonly used to express the worst-case running time of an algorithm.

➢ Different algorithms may have different best, average, and worst case running times.

Types of Complexity Classes

1. P Class
2. NP Class
3. Co-NP Class
4. NP-hard
5. NP-complete

1. P Class

➢ The P in the P class stands for Polynomial Time. It is the collection of decision problems (problems with a "yes" or "no" answer) that can be solved by a deterministic machine in polynomial time.

1. P Class
Features:

➢ The solution to P problems is easy to find.

➢ P is often a class of computational problems that are solvable and tractable. Tractable means that the problems can be solved in theory as well as in practice. But problems that can be solved in theory but not in practice are known as intractable.

1. P Class

This class contains many problems:

1. Calculating the greatest common divisor.
2. Finding a maximum matching.
3. Merge Sort.
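
For instance, the Euclidean algorithm computes the greatest common divisor with a number of steps that is polynomial in the size of its inputs (illustrative Python sketch, not from the slides):

def gcd(a, b):
    # Euclid's algorithm: the remainder shrinks quickly, so the loop runs
    # only polynomially many times in the number of digits of a and b.
    while b:
        a, b = b, a % b
    return a

print(gcd(252, 105))  # prints 21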

2. NP Class

The NP in NP class stands for Non-deterministic Polynomial Time. It is the collection of decision problems that can be solved by a non-deterministic machine in polynomial time.

2. NP Class

Features:

➢ The solutions of NP problems are hard to find, since they are solved by a non-deterministic machine, but the solutions are easy to verify.

➢ Problems in NP can be verified by a Turing machine in polynomial time.

Example:

Let us consider an example to better understand the NP class. Suppose there is a company with a total of 1000 employees, each with a unique employee ID. Assume that there are 200 rooms available for them. A selection of 200 employees must be paired together, but the CEO of the company has data on some employees who cannot work in the same room due to personal reasons.

This indicates that if someone provides us with a proposed solution to the problem, we can check which pairs are correct and which are incorrect in polynomial time. Thus, for an NP-class problem, a proposed answer can be verified in polynomial time.
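
A hedged Python sketch of that verification step (the data format here is invented purely for illustration): given a proposed room assignment and the CEO's list of conflicting employees, one pass over the conflict list suffices, so verification runs in polynomial time.

def valid_assignment(room_of, conflicts):
    # room_of maps employee ID -> room number; conflicts is a list of
    # (a, b) pairs of employees who must not share a room.
    for a, b in conflicts:
        if a in room_of and b in room_of and room_of[a] == room_of[b]:
            return False   # a forbidden pair ended up in the same room
    return True

# Employees 1 and 2 clash, so placing both in room 7 is rejected.
print(valid_assignment({1: 7, 2: 7, 3: 9}, [(1, 2)]))  # prints False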

This class contains many problems that one would like to be able to solve effectively:

1. Boolean Satisfiability Problem (SAT).
2. Hamiltonian Path Problem.
3. Graph coloring.

3. Co-NP Class

Co-NP stands for the complement of the NP class. It means that if the answer to a problem in Co-NP is "no", then there is a proof that can be checked in polynomial time.

3. Co-NP Class
Features:

➢ If a problem X is in NP, then its complement X' is in Co-NP.

➢ For a problem to be in NP or Co-NP, there is no need to verify all the answers at once in polynomial time; it is enough to verify one particular "yes" (for NP) or "no" (for Co-NP) answer in polynomial time.

3. Co-NP Class

Some example problems for Co-NP are:

1. Checking whether a number is prime.
2. Integer factorization.

4. NP-hard class

An NP-hard problem is at least as hard as the hardest problems in NP: it belongs to a class of problems such that every problem in NP can be reduced to it in polynomial time.

4. NP-hard class

Features:

➢ Not all NP-hard problems are in NP.

➢ They can take a long time to check. This means that if a solution for an NP-hard problem is given, it may take a long time to check whether it is right or not.

➢ A problem A is NP-hard if, for every problem L in NP, there exists a polynomial-time reduction from L to A.

4. NP-hard class

Some examples of problems in NP-hard are:

1. The Halting problem.
2. Quantified Boolean formulas (QBF).
3. The "no Hamiltonian cycle" problem.

5. NP-complete class

A problem is NP-complete if it is both in NP and NP-hard. NP-complete problems are the hardest problems in NP.

5. NP-complete class

Features:

➢ NP-complete problems are special, as any problem in the NP class can be transformed or reduced into an NP-complete problem in polynomial time.

➢ If one could solve an NP-complete problem in polynomial time, then one could also solve any NP problem in polynomial time.

5. NP-complete class

Some example problems include:

➢ Hamiltonian Cycle.
➢ Satisfiability.
➢ Vertex cover.

Summary:

Complexity Class: Characteristic feature

P: Easily solvable in polynomial time.
NP: "Yes" answers can be checked in polynomial time.
Co-NP: "No" answers can be checked in polynomial time.
NP-hard: Not all NP-hard problems are in NP, and their solutions can take a long time to check.
NP-complete: A problem that is both in NP and NP-hard is NP-complete.

Reference:
https://www.geeksforgeeks.org/complete-guide-on-complexity-analysis/
