
UNIT-1

Syllabus
Introduction
• Introduction to data structures; introduction to algorithm complexity
Data Structure
• A data structure is a form of storage that is used to store and organize data.
• It is a way of arranging data on a computer so that it can be accessed and updated efficiently.
• A data structure is not only used for organizing the data; it is also used for processing, retrieving, and storing data.
• Together with a set of algorithms, it can be used in any programming language to structure the data in memory.
Types of Data Structures

There are two types of data structures:


• Primitive data structure
• Non-primitive data structure

Primitive Data structure

• The primitive data structures are the primitive data types. int, char, float, double, and pointer are primitive data structures that can each hold a single value.
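
A minimal C illustration of these single-value primitives (the variable names are only for illustration):

    int    count = 10;        /* integer                */
    char   grade = 'A';       /* single character       */
    float  ratio = 0.5f;      /* single-precision real  */
    double pi    = 3.14159;   /* double-precision real  */
    int   *ptr   = &count;    /* pointer to an int      */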
Non-Primitive Data structure
• The non-primitive data structure is divided into
two types:
• Linear data structure
• Non-linear data structure
Linear data structure
• Linear data structure: A data structure in which data elements are arranged sequentially or linearly, where each element is attached to its previous and next adjacent elements, is called a linear data structure.

Examples of linear data structures are arrays, stacks, queues, linked lists, etc.
– Static data structure: A static data structure has a fixed memory size. It is easier to access the elements in a static data structure. An example of this data structure is an array.
– Dynamic data structure: In a dynamic data structure, the size is not fixed; it can be updated at runtime, which may be more efficient with respect to the memory (space) complexity of the code. Examples of this data structure are queues, stacks, etc. A small sketch contrasting the two follows this list.
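
A minimal C sketch contrasting the two (the names and sizes are illustrative assumptions, not from the original):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        /* Static: size fixed at compile time */
        int fixed[5] = {1, 2, 3, 4, 5};

        /* Dynamic: size chosen at run time and changeable later */
        int n = 5;
        int *flexible = malloc(n * sizeof *flexible);
        if (flexible == NULL) return 1;
        for (int i = 0; i < n; i++)
            flexible[i] = fixed[i];

        /* Grow the dynamic structure to hold 10 elements */
        int *tmp = realloc(flexible, 10 * sizeof *flexible);
        if (tmp != NULL)
            flexible = tmp;

        printf("first element: %d\n", flexible[0]);
        free(flexible);
        return 0;
    }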


Non-linear data structure
• Non-linear data structure: Data structures where data elements are not placed sequentially or linearly are called non-linear data structures. In a non-linear data structure, we cannot traverse all the elements in a single run.

Examples of non-linear data structures are trees and graphs.
Data structures can also be classified as:

• Static data structure: It is a type of data structure where the size is allocated at compile time. Therefore, the maximum size is fixed.
• Dynamic data structure: It is a type of data structure where the size is allocated at run time. Therefore, the maximum size is flexible.
Major Operations

The major or common operations that can be performed on data structures are:
• Searching: We can search for any element in a data structure.
• Sorting: We can sort the elements of a data structure in either ascending or descending order.
• Insertion: We can also insert a new element into a data structure.
• Updation: We can also update an element, i.e., we can replace an element with another element.
• Deletion: We can also perform a delete operation to remove an element from the data structure.
A short sketch of these operations on an array follows below.
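
A minimal C sketch of these operations on a simple array (the helper name search() is an illustrative assumption):

    #include <stdio.h>

    /* Searching: linear scan; returns the index of key, or -1 */
    int search(const int a[], int n, int key) {
        for (int i = 0; i < n; i++)
            if (a[i] == key) return i;
        return -1;
    }

    int main(void) {
        int a[10] = {3, 1, 4, 1, 5};
        int n = 5;

        a[n++] = 9;                        /* Insertion: append 9      */
        a[0] = 7;                          /* Updation: replace a[0]   */
        for (int i = 1; i < n - 1; i++)    /* Deletion: remove a[1]    */
            a[i] = a[i + 1];
        n--;

        printf("index of 5: %d\n", search(a, n, 5));  /* prints 3 */
        return 0;
    }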
• To structure data in memory, 'n' number of models have been proposed, and all of these models are known as abstract data types (ADTs). An abstract data type is a set of rules describing the data and the operations allowed on it.
Which Data Structure?

• A data structure is a way of organizing data so that it can be used efficiently.
• Here, we have used the word "efficiently", which refers to both space and time.
• For example, a stack is an ADT (abstract data type) which uses either an array or a linked list data structure for its implementation.
• Therefore, we conclude that we require some data structure to implement a particular ADT.
Note
• An ADT tells what is to be done and a data structure tells how it is to be done. In other words, we can say that the ADT gives us the blueprint while the data structure provides the implementation part.
• How can one know which data structure to use for a particular ADT?
Answer
• Different data structures can implement a particular ADT, and the different implementations are compared for time and space.
• For example, the stack ADT can be implemented by both arrays and linked lists. Suppose the array provides time efficiency while the linked list provides space efficiency; the one best suited to the current user's requirements will be selected. A sketch of such a stack interface appears below.
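
A minimal C sketch of the stack ADT interface backed by an array, one of the two implementations named above (the identifiers and the capacity are illustrative assumptions):

    #include <stdbool.h>

    #define CAPACITY 100

    /* The ADT says WHAT: push and pop. The array says HOW. */
    typedef struct {
        int items[CAPACITY];
        int top;
    } Stack;

    void stack_init(Stack *s) { s->top = -1; }

    bool stack_push(Stack *s, int value) {
        if (s->top == CAPACITY - 1) return false;   /* overflow  */
        s->items[++s->top] = value;
        return true;
    }

    bool stack_pop(Stack *s, int *out) {
        if (s->top == -1) return false;             /* underflow */
        *out = s->items[s->top--];
        return true;
    }

A linked-list backing would keep the same interface but allocate nodes on demand, trading some time (pointer chasing) for space flexibility.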
Advantages of Data structures

• The following are the advantages of a data structure:
• Efficiency: If the choice of a data structure for implementing a particular ADT is proper, it makes the program very efficient in terms of time and space.
• Reusability: Data structures provide reusability, meaning that multiple client programs can use the same data structure.
• Abstraction: A data structure specified by an ADT also provides a level of abstraction. The client cannot see the internal working of the data structure, so it does not have to worry about the implementation part; the client sees only the interface.
Algorithms

• The word "algorithm" means "a set of finite rules or instructions to be followed in calculations or other problem-solving operations", or "a procedure for solving a mathematical problem in a finite number of steps that frequently involves recursive operations".

• An algorithm refers to a sequence of finite steps to solve a particular problem.

Algorithmic complexity
• Algorithmic complexity is a measure of how long an
algorithm would take to complete given an input of
size n.
• If an algorithm has to scale, it should compute the
result within a finite and practical time bound even for
large values of n. For this reason, complexity is
calculated asymptotically as n approaches infinity.
• Complexity is usually computed in terms of time and
space requirements.
• Analysis of an algorithm's complexity is helpful when
comparing algorithms or seeking improvements.
How to analyze an Algorithm?

• For a standard algorithm to be good, it must be efficient. Hence the efficiency of an algorithm must be checked and maintained. This can be done in two stages:
• A priori analysis: "Priori" means "before". Hence a priori analysis means checking the algorithm before its implementation. Here the algorithm is checked while it is written in the form of theoretical steps. The efficiency of the algorithm is measured by assuming that all other factors, for example processor speed, are constant and have no effect on the implementation. This analysis is usually done by the algorithm designer. It is independent of the type of hardware and of the language and compiler, and it gives approximate answers for the complexity of the program.
• A posteriori analysis: "Posteriori" means "after". Hence a posteriori analysis means checking the algorithm after its implementation. Here the algorithm is checked by implementing it in a programming language and executing it. This analysis yields the actual, real report about correctness (whether it returns the correct output for every possible input), space required, time consumed, etc. It is therefore dependent on the language, the compiler, and the type of hardware used.
Complexity of an Algorithm
Time Complexity of an Algorithm
• The time complexity is defined as the process of determining a formula for the total time required for the execution of that algorithm. This calculation is totally independent of implementation and programming language.
• Time is measured by counting the number of key operations, such as comparisons in a sorting algorithm.
Space Complexity of an Algorithm
• Space complexity is defined as the process of determining a formula that predicts how much memory space is required for the successful execution of the algorithm. The memory space is generally taken to be primary memory.
• Space is measured by counting the maximum memory space required by the algorithm to run/execute.
How to calculate Space Complexity?

The space complexity of an algorithm is calculated by determining the following 2 components:

• Fixed part: This refers to the space that is definitely required by the algorithm. For example, input variables, output variables, program size, etc.
• Variable part: This refers to the space that can differ based on the implementation of the algorithm. For example, temporary variables, dynamic memory allocation, recursion stack space, etc.
Therefore the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.
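
A small worked instance of this formula, as a C sketch (the breakdown in the comments is the point):

    /* Space complexity of sum(): S(P) = C + SP(I)
       Fixed part C: the variables total, i and n -> constant space.
       Variable part SP(I): the array a[] of n elements -> grows with n.
       Hence S(P) = C + c*n, i.e., O(n) overall. */
    int sum(const int a[], int n) {
        int total = 0;
        for (int i = 0; i < n; i++)
            total += a[i];
        return total;
    }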
How to calculate Time Complexity?

The time complexity of an algorithm is also calculated by determining the following 2 components:
• Constant time part: Any instruction that is executed just once comes in this part. For example, input, output, if-else, switch, arithmetic operations, etc.
• Variable time part: Any instruction that is executed more than once, say n times, comes in this part. For example, loops, recursion, etc.
Therefore the time complexity of any algorithm P is T(P) = C + TP(I), where C is the constant time part and TP(I) is the variable part of the algorithm, which depends on the instance characteristic I.
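
Applying this formula to the sum() sketch shown under space complexity above:

    /* T(P) = C + TP(I) for sum():
       total = 0 and i = 0 execute once each      -> constant part C
       i < n, i++ and total += a[i] run ~n times  -> TP(I) = c*n
       Hence T(P) = C + c*n, which grows linearly: O(n). */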
Real-world examples of various
algorithmic complexities?
• Suppose you're looking for a specific item in a long unsorted list, you'll
probably compare with each item. Search time is proportional to the list
size. Here complexity is said to be linear.
• On the other hand, if you search for a word in a dictionary, the search will
be faster because the words are in sorted order, you know the order and
can quickly decide if you need to turn to earlier pages or later pages. This
is an example of logarithmic complexity.
• If you're asked to pick out the first word in a dictionary, this operation is
of constant time complexity, regardless of number of words in the
dictionary. Likewise, joining the end of a queue in a bank is of constant
complexity regardless of how long the queue is.
• Suppose you are given an unsorted list and asked to find all duplicates; then the complexity becomes quadratic. Checking for duplicates for one item is of linear complexity; if we do this for all items, the complexity becomes quadratic. Similarly, if all people in a room are asked to shake hands with every other person, the complexity is quadratic. Sketches of the linear, logarithmic and quadratic cases follow below.
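
C sketches of three of the complexities just described (all function names are illustrative):

    /* Linear, O(n): compare with each item of an unsorted list */
    int linear_search(const int a[], int n, int key) {
        for (int i = 0; i < n; i++)
            if (a[i] == key) return i;   /* best case: first item      */
        return -1;                       /* worst case: scanned all n  */
    }

    /* Logarithmic, O(log n): halve a sorted list each step,
       like turning to earlier or later dictionary pages */
    int binary_search(const int a[], int n, int key) {
        int lo = 0, hi = n - 1;
        while (lo <= hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid] == key) return mid;
            if (a[mid] < key) lo = mid + 1;
            else              hi = mid - 1;
        }
        return -1;
    }

    /* Quadratic, O(n^2): every item checked against every other,
       like everyone shaking hands with everyone else */
    int count_duplicate_pairs(const int a[], int n) {
        int count = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (a[i] == a[j]) count++;
        return count;
    }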
What does it mean to state best-case, worst-case and
average time complexity of algorithms?

• Let's take the example of searching for an item sequentially within a list of unsorted items. If we're lucky, the item may occur at the start of the list. If we're unlucky, it may be the last item in the list.
• The former is called best-case complexity and the latter is called worst-case complexity. If the searched item is always the first one, then the complexity is O(1); if it's always the last one, then the complexity is O(n). We can also calculate the average complexity, which also turns out to be O(n).
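
A quick worked average for this sequential search, assuming the item is equally likely to be at any of the n positions:

    average comparisons = (1 + 2 + ... + n) / n = (n(n+1)/2) / n = (n+1)/2

which still grows linearly with n, hence the average case is O(n).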
Asymptotic Notations:

• Asymptotic notations are mathematical notations that allow you to analyze an algorithm's running time by identifying its behavior as its input size grows.
• This is also referred to as an algorithm's growth rate.
• You can't compare two algorithms head to head by raw running times alone.
• Instead, you compare their space and time complexity using asymptotic analysis.
• Asymptotic analysis compares two algorithms based on changes in their performance as the input size is increased or decreased.
Asymptotic Notations:

• Following are the commonly used asymptotic notations to calculate the running time complexity of an algorithm.
• Ο − Big Oh notation
• Ω − Big Omega notation
• Θ − Theta notation
• o − Little Oh notation
• ω − Little Omega notation
Big-O Notation (O-notation):

• Big-O notation is the prevalent notation used to represent algorithmic complexity. It gives an upper bound on complexity and hence signifies the worst-case performance of the algorithm.
• With such a notation, it's easy to compare different algorithms because the notation tells clearly how the algorithm scales when the input size increases. This is often called the order of growth.
• The notation Ο(n) is the formal way to express the upper bound of an algorithm's running time. It measures the worst-case time complexity, i.e., the longest amount of time an algorithm can possibly take to complete.
• Big-O notation represents the upper bound of the running time of an algorithm. Therefore, it gives the worst-case complexity of an algorithm.
• It is the most widely used notation for asymptotic analysis.
• It specifies the upper bound of a function.
• It gives the maximum time required by an algorithm, i.e., the worst-case time complexity.
• It returns the highest possible output value (big-O) for a given input.
• Big-Oh (worst case) is defined as the condition that allows an algorithm to complete statement execution in the longest amount of time possible.
• If f(n) describes the running time of an algorithm, f(n) is O(g(n)) if there exist a positive constant c and an n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0.
• It returns the highest possible output value (big-O) for a given input.
• The execution time serves as an upper bound on the algorithm's time complexity.
Mathematical Representation of Big-O Notation:

O(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0 }

(The same definition is sometimes written with the function names swapped: Ο(f(n)) = { g(n) : there exist c > 0 and n0 such that g(n) ≤ c·f(n) for all n > n0 }.)
Big-Omega Notation (Ω-Notation):

• Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm.
• The execution time serves as a lower bound on the algorithm's time complexity.
• It is defined as the condition that allows an algorithm to complete statement execution in the shortest amount of time.
• The notation Ω(n) is the formal way to express the lower bound of an algorithm's running time. It measures the best-case time complexity, i.e., the least amount of time an algorithm can possibly take to complete.
Let g and f be functions from the set of natural numbers to itself. The function f is said to be Ω(g) if there is a constant c > 0 and a natural number n0 such that c·g(n) ≤ f(n) for all n ≥ n0.

Mathematical Representation of Omega notation:
• Ω(g(n)) = { f(n) : there exist positive constants c and n0 such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n0 }
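
A quick worked instance of this definition, using the polynomial from the Example slide later in this unit: for f(n) = 4n^3 + 10n^2 + 5n + 1 and g(n) = n^3, every term of f(n) beyond 4n^3 is non-negative, so f(n) ≥ 4·g(n) for all n ≥ 1, and hence f(n) = Ω(n^3) with c = 4 and n0 = 1.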
Theta Notation (Θ-Notation):

• Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it describes the exact order of growth, and it is commonly used when analyzing the average-case complexity of an algorithm.
• Theta (average case): you add the running times for each possible input combination and take the average.
Mathematical Representation of Theta notation:

Θ(g(n)) = { f(n) : there exist positive constants c1, c2 and n0 such that 0 ≤ c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0 }
Note: Θ(g) is a set.
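
Combining the lower and upper bounds on the same polynomial used in the Example below: since 4·n^3 ≤ 4n^3 + 10n^2 + 5n + 1 ≤ 20·n^3 for all n ≥ 1, we have f(n) = Θ(n^3) with c1 = 4, c2 = 20 and n0 = 1.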
Little Oh (o) and Little Omega (ω)
Notations
• The Little Oh and Little Omega notations also represent upper and lower bounds on complexity, but they are not asymptotically tight, in contrast to the Big Oh and Big Omega notations.
• Thus, little o(g(n)) denotes a loose (strict) upper bound on f(n). Little o is a rough estimate of the maximum order of growth, whereas Big-Ο may be the actual order of growth.
• For little omega, f(n) has a strictly higher growth rate than g(n). The main difference between Big Omega (Ω) and little omega (ω) lies in their definitions: in the case of Big Omega, f(n) = Ω(g(n)) and the bound is 0 ≤ c·g(n) ≤ f(n) for some constant c, but in the case of little omega, 0 ≤ c·g(n) < f(n) must hold for every positive constant c.
• The relationship between Big Omega (Ω) and Little Omega (ω) is similar to that of Big-Ο and Little o, except that now we are looking at lower bounds. Little omega (ω) is a rough estimate of the order of growth, whereas Big Omega (Ω) may represent the exact order of growth. We use ω notation to denote a lower bound that is not asymptotically tight. And f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).

• https://www.geeksforgeeks.org/analysis-of-algorithems-little-o-and-little-omega-
notations/
Common Asymptotic Notations
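
A commonly used summary of these notations, listed from best to worst growth:

    constant      − Ο(1)
    logarithmic   − Ο(log n)
    linear        − Ο(n)
    n log n       − Ο(n log n)
    quadratic     − Ο(n^2)
    cubic         − Ο(n^3)
    polynomial    − n^Ο(1)
    exponential   − 2^Ο(n)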
Example

Let us consider a given function, f(n) = 4·n^3 + 10·n^2 + 5·n + 1.
Considering g(n) = n^3:
f(n) ≤ 20·g(n) for all values of n ≥ 1, since 4n^3 + 10n^2 + 5n + 1 ≤ 4n^3 + 10n^3 + 5n^3 + n^3 = 20n^3 when n ≥ 1.
Hence, the complexity of f(n) can be represented as O(g(n)), i.e., O(n^3), with c = 20 and n0 = 1.
Examples:

• { 100, log(2000), 10^4 } belongs to O(1)
• { n/4, 2n+3, n/100 + log(n) } belongs to O(n)
• { n^2+n, 2n^2, n^2+log(n) } belongs to O(n^2)
• O provides an upper bound, which may or may not be asymptotically tight.
