How Data Structure Differs From Data Type
Data structures are used for the arrangement of data in memory. They are
responsible for organizing, processing, accessing, and storing data
efficiently.
A data type can hold values but not data; therefore, it is dataless. A data
structure can hold multiple types of data within a single object.
Data type examples are int, float, double, etc. Data structure examples are
stack, queue, tree, etc.
3.Explain Classification of Data Structure:
Integers
Float
Strings
Boolean
In the next sections, you'll learn more about them!
Integers
You can use an integer to represent numeric data and, more specifically, whole numbers from negative
infinity to infinity, like 4, 5, or -1.
Float
"Float" stands for 'floating point number'. You can use it for rational numbers, usually ending with a
decimal figure, such as 1.11 or 3.14.
String
Strings are collections of alphabets, words, or other characters. In Python, you can create strings by
enclosing a sequence of characters within a pair of single or double quotes. For
example: 'cake', "cookie", etc.
You can also apply the + operation on two or more strings to concatenate them, just like in the example
below:
x = 'Cake'
y = 'Cookie'
x + '&' + y
'Cake&Cookie'
x * 2
'CakeCake'
You can also slice strings, which means that you select parts of strings:
# Range slicing
x = 'Cake'
z1 = x[2:]
print(z1)
ke
# Slicing
y = 'Cookie'
z2 = y[0] + y[1]
print(z2)
Co
Note that strings can also contain alphanumeric characters, and that the + operation is still used to
concatenate strings:
x = '4'
y = '2'
x+y
'42'
Python has many built-in methods or helper functions to manipulate strings. Replacing a substring,
capitalising certain words in a paragraph, finding the position of a string within another string are some
common string manipulations. Check out some of these:
Capitalize strings with capitalize():
'cookie'.capitalize()
'Cookie'
Retrieve the length of a string in characters. Note that the spaces also count towards
the final result:
str1 = "Cake 4 U"
str2 = "404"
len(str1)
8
Check whether a string consists of digits only with isdigit():
str1.isdigit()
False
str2.isdigit()
True
Replace a substring with replace():
str1.replace('4 U', str2)
'Cake 404'
Find substrings in other strings with find(); it returns the lowest index or position within the string
at which the substring is found:
str1 = 'cookie'
str2 = 'cook'
str1.find(str2)
0
The substring 'cook' is found at the start of 'cookie', so the index 0 is returned.
str1 = 'I got you a cookie'
str2 = 'cook'
str1.find(str2)
12
Remember that you start counting from 0 and that spaces count towards the
positions!
You can find an exhaustive list of string methods in the official Python documentation.
Boolean
This built-in data type can take up the values: True and False, which often makes them interchangeable
with the integers 1 and 0. Booleans are useful in conditional and comparison expressions, just like in the
following examples:
x=4
y=2
x == y
False
x>y
True
x = 4
y = 2
z = (x == y) # Comparison expression (evaluates to False)
print(z)
False
i = 4.0
type(i)
float
When you change the type of an entity from one data type to another, this is called "typecasting". There
are two kinds of data conversion possible: implicit, termed coercion, and explicit, often referred to
as casting.
Implicit Data Type Conversion
This is an automatic data conversion, and the interpreter handles it for you. Take a look at the following
example:
# A float
x = 4.0
# An integer
y = 2
# Divide x by y
x / y
2.0
In the example above, you did not have to explicitly change the data type of y to perform float
division. The interpreter did this for you implicitly.
Explicit Data Type Conversion
This type of data type conversion is user-defined, which means you have to explicitly tell the interpreter
to change the data type of certain entities. Consider the code chunk below to fully understand this:
x = 2
y = "The Godfather: Part "
fav_movie = y + x
TypeError: can only concatenate str (not "int") to str
Note that it might not always be possible to convert a data type to another. Some built-in data conversion
functions that you can use here are: int(), float(), and str().
x = 2
y = "The Godfather: Part "
fav_movie = y + str(x)
print(fav_movie)
The Godfather: Part 2
Arrays
Lists
Files
Array
First off, arrays in Python are a compact way of collecting basic data types; all the entries in an array must
be of the same data type. However, arrays are not all that popular in Python, unlike in other
programming languages such as C++ or Java.
In general, when people talk of arrays in Python, they are actually referring to lists. However, there is a
fundamental difference between them, and you will see this in a bit. For Python, arrays can be seen as a
more efficient way of storing a certain kind of list. This type of list has elements of the same data type,
though.
In Python, arrays are supported by the array module and need to be imported before you start initializing
and using them. The elements stored in an array are constrained in their data type. The data type is
specified during array creation using a type code, which is a single character like
the 'I' you see in the example below:
import array as arr
a = arr.array("I", [3, 6, 9])
type(a)
array.array
Lists
Lists in Python hold an ordered collection of items, which can be of different types, and are mutable:
x = [] # Empty list
type(x)
list
x1 = [1,2,3]
type(x1)
list
x2 = list([1,'apple',3])
type(x2)
list
print(x2[1])
apple
x2[1] = 'orange'
print(x2)
[1, 'orange', 3]
Note: as you have seen in the above example with x1, lists can also hold homogeneous items and hence
satisfy the storage functionality of an array. This is fine unless you want to apply some specific operations
to this collection.
Python provides many methods to manipulate and work with lists. Adding new items to a list, removing
some items from a list, and sorting or reversing a list are common list manipulations. Let's see some of
them in action:
Add 11 to the list_num list with append(). By default, this number will be added to the
end of the list.
list_num = [1,2,45,6,7,2,90,23,435]
list_char = ['c','o','o','k','i','e']
list_num.append(11) # Adds 11 to the end of the list
print(list_num)
[1, 2, 45, 6, 7, 2, 90, 23, 435, 11]
Use insert() to add 11 at a specific index, here the start of the list:
list_num.insert(0, 11)
print(list_num)
[11, 1, 2, 45, 6, 7, 2, 90, 23, 435, 11]
Remove the first occurrence of 'o' from list_char with the help of remove():
list_char.remove('o')
print(list_char)
['c', 'o', 'k', 'i', 'e']
Remove an item at a given index with pop(); here the second-to-last element, 'i':
list_char.pop(-2)
print(list_char)
['c', 'o', 'k', 'e']
list_num.sort() # In-place sorting
print(list_num)
[1, 2, 2, 6, 7, 11, 11, 23, 45, 90, 435]
Stack
A stack is a container of objects that follows the Last-In-First-Out (LIFO) principle: the last element
added is the first one removed. A list makes a simple stack, with append() as push and pop() as pop:
stack = [1, 2, 3, 4, 5]
stack.append(6) # Bottom -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 (Top)
print(stack)
[1, 2, 3, 4, 5, 6]
stack.pop() # Bottom -> 1 -> 2 -> 3 -> 4 -> 5 (Top)
stack.pop() # Bottom -> 1 -> 2 -> 3 -> 4 (Top)
print(stack)
[1, 2, 3, 4]
Queue
A queue is a container of objects that are inserted and removed according to the First-In-First-Out (FIFO)
principle. An excellent example of a queue in the real world is the line at a ticket counter where people
are catered to according to their arrival sequence, and hence the person who arrives first is also the first to
leave. Queues can be of many different kinds.
Lists are not efficient for implementing a queue: although append() at the end of a list is fast,
inserting at or deleting from the beginning is slow, since it requires shifting all the remaining
elements by one position. The collections.deque class is designed for fast appends and pops from
both ends and is a better fit.
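As a sketch of the more efficient alternative just mentioned, Python's standard-library collections.deque supports fast appends and pops from both ends; the ticket-counter names below are only illustrative:

```python
from collections import deque

# FIFO queue built on deque: append at the rear, popleft from the front.
queue = deque()
queue.append('person1')   # enqueue: first to arrive
queue.append('person2')
queue.append('person3')

served_first = queue.popleft()  # dequeue: the first arrival leaves first (FIFO)
print(served_first)             # person1
print(list(queue))              # ['person2', 'person3']
```

Both append() and popleft() on a deque run in constant time, unlike pop(0) on a list.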
Graphs
A graph, in mathematics and computer science, is a network consisting of nodes (also called vertices)
which may or may not be connected to each other. The line or path that connects two nodes is called
an edge. If the edge has a particular direction of flow, the graph is a directed graph, and the directed
edge is called an arc. If no directions are specified, the graph is called an undirected graph.
This may sound all very theoretical and can get rather complex when you dig deeper. However, graphs
are an important concept, especially in data science, and are often used to model real-life problems. Social
networks, molecular studies in chemistry and biology, maps, and recommender systems all rely on graph
and graph theory principles.
Here, you will find a simple graph implementation using a Python Dictionary to help you get started:
graph = { "a" : ["c", "d"],
          "b" : ["d", "e"],
          "c" : ["a", "e"],
          "e" : ["b", "c"],
          "d" : ["a", "b"]
        }
def define_edges(graph):
    edges = []
    for vertex in graph:
        for neighbour in graph[vertex]:
            edges.append((vertex, neighbour))
    return edges
print(define_edges(graph))
[('a', 'c'), ('a', 'd'), ('b', 'd'), ('b', 'e'), ('c', 'a'), ('c', 'e'), ('e', 'b'), ('e', 'c'), ('d', 'a'), ('d', 'b')]
You can do some cool stuff with graphs, such as trying to find whether a path exists between two nodes,
finding the shortest path between two nodes, or determining the cycles in the graph.
The famous "traveling salesman problem" is, in fact, about finding the shortest possible route that visits
every node exactly once and returns to the starting point. Sometimes the nodes or arcs of a graph are
assigned weights or costs; you can think of this as assigning a difficulty level to a walk when you are
interested in finding the cheapest or the easiest path.
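As a sketch of finding the cheapest path on such a weighted graph, here is a minimal version of Dijkstra's algorithm using the standard heapq module; the example graph and its weights are made up for illustration:

```python
import heapq

def dijkstra(graph, start):
    # graph: dict mapping node -> list of (neighbour, weight) pairs
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue  # stale heap entry: a shorter path was already found
        for neighbour, weight in graph.get(node, []):
            new_d = d + weight
            if new_d < dist.get(neighbour, float('inf')):
                dist[neighbour] = new_d
                heapq.heappush(heap, (new_d, neighbour))
    return dist

# Hypothetical weighted, directed graph for illustration
weighted = {"a": [("b", 2), ("c", 5)],
            "b": [("c", 1), ("d", 4)],
            "c": [("d", 1)],
            "d": []}
print(dijkstra(weighted, "a"))  # {'a': 0, 'b': 2, 'c': 3, 'd': 4}
```

Note how the direct edge a -> c of weight 5 loses to the cheaper detour a -> b -> c of total weight 3.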
Trees
A tree in the real world is a living being with its roots in the ground and branches that hold the leaves
and fruit out in the open. The branches of the tree spread out in a somewhat organized way. In computer
science, trees are used to describe how data is sometimes organized, except that the root is at the top
and the branches and leaves follow, spreading towards the bottom, so the tree is drawn inverted
compared to the real one.
To introduce a little more notation, the root is always at the top of the tree. Keeping the tree metaphor, the
other nodes that follow are called the branches with the final node in each branch being called leaves.
You can imagine each branch as being a smaller tree in itself. The root is often called the parent, and the
nodes it refers to below it are called its children. Nodes with the same parent are called siblings. Do
you see why this is also called a family tree?
Trees help in modeling real-world scenarios and are used everywhere, from the gaming world to designing
XML parsers; the PDF design principle is also based on trees. In data science, decision-tree-based
learning forms a large area of research. Numerous famous methods, like bagging and boosting,
use the tree model to generate a predictive model. Games like chess build a huge tree of all possible
moves to analyse and apply heuristics to decide on an optimal move.
You can implement a tree structure using and combining the various data structures you have seen so far
in this tutorial. However, for the sake of simplicity, this topic will be tackled in another post.
class Tree:
    def __init__(self, info, left=None, right=None):
        self.info = info
        self.left = left
        self.right = right

    def __str__(self):
        return (str(self.info) + ', Left child: ' + str(self.left) +
                ', Right child: ' + str(self.right))

tree = Tree(1, Tree(2, 2.1, 2.2), Tree(3, 3.1))
print(tree)
1, Left child: 2, Left child: 2.1, Right child: 2.2, Right child: 3, Left child: 3.1, Right child: None
You have learned about arrays and also seen the list data structure. However, Python provides many
other data collection mechanisms, and although they might not be included in traditional data
structure topics in computer science, they are worth knowing, especially with regard to the Python
programming language:
Tuples
Dictionary
Sets
Tuples
Tuples are another standard sequence data type. The difference between tuples and lists is that tuples are
immutable, which means that once defined, you cannot delete, add, or edit any values inside them. This
might be useful in situations where you want to pass control to someone else but do not want them to
manipulate the data in your collection; rather, they can just see it or perform operations on a separate
copy of the data.
Let's see how tuples are implemented:
x_tuple = 1,2,3,4,5
y_tuple = ('c','a','k','e')
x_tuple[0]
1
y_tuple[3]
'e'
x_tuple[0] = 0 # Cannot change values inside a tuple
TypeError: 'tuple' object does not support item assignment
Dictionary
Dictionaries store data as key-value pairs; a key is used to look up its associated value. Deleting and
accessing entries is straightforward:
x_dict = {'Prem':3, 'Edward':1, 'Jorge':2, 'Joe':4}
del x_dict['Joe']
x_dict
{'Prem': 3, 'Edward': 1, 'Jorge': 2}
x_dict['Edward'] # Look up the value stored under the key 'Edward'
1
You can apply many other built-in functionalities on dictionaries:
len(x_dict)
3
x_dict.keys()
dict_keys(['Prem', 'Edward', 'Jorge'])
x_dict.values()
dict_values([3, 1, 2])
This code shows an example of using a Python dictionary to store and access key-value pairs.
First, the code calls the len() function with x_dict as an argument. This returns the number of key-
value pairs in the dictionary, which is 3.
Next, the code calls the keys() method on x_dict. This returns a view object containing the keys in the
dictionary. In this case, the keys are 'Prem', 'Edward', and 'Jorge', as shown by the output.
Then, the code calls the values() method on x_dict. This returns a view object containing the values
in the dictionary. In this case, the values are 3, 1, and 2, respectively, as shown by the output.
Sets
Sets are a collection of distinct (unique) objects. They are useful for creating lists that hold only unique
values in a dataset. A set is an unordered but mutable collection, which is very helpful when going
through a huge dataset.
x_set = set('CAKE&COKE')
y_set = set('COOKIE')
print(x_set)
{'A', '&', 'O', 'E', 'C', 'K'}
print(y_set)
{'I', 'O', 'E', 'C', 'K'}
print(x - y) # All the elements in x_set but not in y_set
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
----> 1 print(x - y) # All the elements in x_set but not in y_set
NameError: name 'x' is not defined
print(x_set | y_set) # Unique elements in x_set or y_set, or both
{'A', '&', 'O', 'E', 'C', 'K', 'I'}
print(x_set & y_set) # Elements present in both x_set and y_set
{'O', 'E', 'K', 'C'}
The code creates two sets: x_set and y_set. Each set is created by passing a string of characters as an
argument to the set() function. In this case, x_set is created from the string 'CAKE&COKE',
while y_set is created from the string 'COOKIE'.
Next, the code prints each set using the print() function. The first print() statement prints x_set,
which contains the unique characters in the string 'CAKE&COKE'. The output shows
that x_set contains the characters 'A', '&', 'O', 'E', 'C', and 'K'. Similarly, the second print() statement
prints y_set, which contains the unique characters in the string 'COOKIE'. The output shows
that y_set contains the characters 'I', 'O', 'E', 'C', and 'K'.
The third print() statement attempts to print the set of all elements in x_set that are not in y_set.
However, this line of code results in a NameError because the variable x has not been defined.
Presumably, this line should read print(x_set - y_set) instead.
The fourth print() statement prints the set of unique elements in x_set or y_set, or both. The output
shows that the resulting set contains all of the unique characters from both x_set and y_set.
Finally, the fifth print() statement prints the set of elements that are in both x_set and y_set. The
output shows that this set contains the characters 'O', 'E', 'K', and 'C'.
Files
Files are traditionally a part of data structures. And although big data is commonplace in the data science
industry, a programming language without the capability to store and retrieve previously stored
information would hardly be useful. You still have to make use of all the data sitting in files across
databases, and you will learn how to do this.
The syntax to read and write files in Python is similar to other programming languages but a lot easier to
handle. Here are some of the basic functions that will help you to work with files using Python:
open() to open files in your system; the filename is the name of the file to be opened;
read() to read entire files;
readline() to read one line at a time;
write() to write a string to a file, returning the number of characters written; and
close() to close the file.
# File modes (2nd argument): 'r'(read), 'w'(write), 'a'(appending), 'r+'(both reading and writing)
f = open('file_name', 'w')
# Writes the string to the file, returning the number of char written
f.write('Add this line.')
f.close()
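To round off the sketch above, reading the contents back uses the same open() call in read mode; the filename 'file_name' simply follows the example above:

```python
# Write a line to the file, then read it back
f = open('file_name', 'w')
f.write('Add this line.')   # Returns the number of characters written
f.close()

f = open('file_name', 'r')
content = f.read()          # Reads the entire file as one string
f.close()
print(content)              # Add this line.
```

In modern Python, the `with open(...) as f:` form is preferred because it closes the file automatically.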
The structure of the data and the synthesis of the algorithm are relative to each
other. Data presentation must be easy to understand so the developer, as well
as the user, can make an efficient implementation of the operation.
Data structures provide an easy way of organizing, retrieving, managing, and
storing data.
Here is a list of the needs for data structures:
They make data structure modification easy.
They require less time.
They save storage memory space.
They make data representation easy.
They allow easy access to large databases.
Arrays:
Characteristics of an Array:
An array has various characteristics which are as follows:
Arrays use an index-based data structure which helps to identify each of the
elements in an array easily using the index.
If a user wants to store multiple values of the same data type, then the array
can be utilized efficiently.
An array can also handle complex data structures by storing data in a two-dimensional array.
An array is also used to implement other data structures like Stacks, Queues,
Heaps, Hash tables, etc.
The search process in an array can be done very easily.
Operations performed on an array:
Common operations performed on an array include traversal, insertion, deletion, searching, and sorting.
Linked List
A linked list is a linear data structure in which elements are not stored at
contiguous memory locations. The elements in a linked list are linked using
pointers, as shown in the image below:
Types of linked lists:
Singly linked list
Doubly linked list
Circular linked list
Doubly circular linked list
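A minimal sketch of the singly linked list described above; the Node class and the helper names push_front and to_list are illustrative, not standard:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None  # pointer to the next node

class SinglyLinkedList:
    def __init__(self):
        self.head = None

    def push_front(self, data):
        # Insert a new node at the beginning of the list
        node = Node(data)
        node.next = self.head
        self.head = node

    def to_list(self):
        # Traverse the nodes by following the next pointers
        out, cur = [], self.head
        while cur:
            out.append(cur.data)
            cur = cur.next
        return out

lst = SinglyLinkedList()
for value in (3, 2, 1):
    lst.push_front(value)
print(lst.to_list())  # [1, 2, 3]
```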
Stack
A stack is a linear data structure that follows a particular order in which
operations are performed. The order is LIFO (Last In, First Out). Entering and
retrieving data is possible from only one end, and these operations are also
called push and pop. Different operations are possible on a stack, like
reversing a stack using recursion, sorting, or deleting the middle element of
a stack.
Characteristics of a Stack:
Stack has various different characteristics which are as follows:
Stack is used in many different algorithms like Tower of Hanoi, tree traversal,
recursion, etc.
Stack is implemented through an array or linked list.
It follows the Last In First Out operation i.e., an element that is inserted first will
pop in last and vice versa.
The insertion and deletion are performed at one end i.e. from the top of the
stack.
In stack, if the allocated space for the stack is full, and still anyone attempts to
add more elements, it will lead to stack overflow.
Applications of Stack:
Different applications of Stack are as follows:
The stack data structure is used in the evaluation and conversion of arithmetic
expressions.
It is used for parenthesis checking.
While reversing a string, the stack is used as well.
Stack is used in memory management.
It is also used for processing function calls.
The stack is used to convert expressions from infix to postfix.
The stack is used to perform undo as well as redo operations in word
processors.
The stack is used in virtual machines like JVM.
The stack is used in the media players. Useful to play the next and previous
song.
The stack is used in recursion operations.
Operations performed on a stack:
push (insert an element at the top), pop (remove the element at the top),
peek (view the top element without removing it), and isEmpty (check whether
the stack is empty).
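The push and pop operations above can be sketched with a small list-backed stack class; the class and method names are illustrative:

```python
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)      # insert at the top

    def pop(self):
        if not self._items:
            raise IndexError("pop from an empty stack")
        return self._items.pop()      # remove from the top (LIFO)

    def peek(self):
        return self._items[-1]        # view the top without removing it

    def is_empty(self):
        return len(self._items) == 0

s = Stack()
s.push('a'); s.push('b'); s.push('c')
print(s.pop())       # c  (last in, first out)
print(s.peek())      # b
print(s.is_empty())  # False
```

Raising an error on an empty pop mirrors the stack underflow condition, the counterpart of the stack overflow mentioned above.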
Queue
Characteristics of a Queue:
The queue has various different characteristics which are as follows:
The queue is a FIFO (First In, First Out) structure.
To remove the last-inserted element, all the elements inserted before it must
be removed first.
A queue is an ordered list of elements of similar data types.
Applications of Queue:
Different applications of Queue are as follows:
Queue is used for handling website traffic.
It helps to maintain the playlist in media players.
Queue is used in operating systems for handling interrupts.
It helps in serving requests on a single shared resource, like a printer, CPU task
scheduling, etc.
It is used in the asynchronous transfer of data e.g. pipes, file IO, and sockets.
Queues are used for job scheduling in the operating system.
On social media, queues are used to upload multiple photos or videos.
Queues are used to send e-mails.
In the Windows operating system, queues are used to switch between multiple
applications.
Operations performed on a queue:
enqueue (insert an element at the rear), dequeue (remove the element at the
front), front (view the first element), and isEmpty (check whether the queue
is empty).
A tree is a non-linear and hierarchical data structure where the elements are
arranged in a tree-like structure. In a tree, the topmost node is called the root
node. Each node contains some data, and data can be of any type. It consists
of a central node, structural nodes, and sub-nodes which are connected via
edges. Different tree data structures allow quicker and easier access to the data
as it is a non-linear data structure. A tree has various terminologies like Node,
Root, Edge, Height of a tree, Degree of a tree, etc.
There are different types of trees, like:
Binary Tree
Binary Search Tree
AVL Tree
B-Tree, etc.
Characteristics of a Tree:
The tree has various different characteristics which are as follows:
A tree is also known as a Recursive data structure.
In a tree, the Height of the root can be defined as the longest path from the root
node to the leaf node.
In a tree, one can also calculate the depth from the top to any node. The root
node has a depth of 0.
Applications of Tree:
Different applications of Tree are as follows:
Heap is a tree data structure that is implemented using arrays and used to
implement priority queues.
B-Tree and B+ Tree are used to implement indexing in databases.
Syntax Tree helps in scanning, parsing, generation of code, and evaluation of
arithmetic expressions in Compiler design.
K-D Tree is a space partitioning tree used to organize points in K-dimensional
space.
Spanning trees are used in routers in computer networks.
Operation performed on tree:
A tree is a non-linear data structure that consists of nodes connected by edges.
Here are some common operations performed on trees:
Insertion: New nodes can be added to the tree to create a new branch or to
increase the height of the tree.
Deletion: Nodes can be removed from the tree by updating the references of
the parent node to remove the reference to the current node.
Search: Elements can be searched for in a tree by starting from the root node
and traversing the tree based on the value of the current node until the desired
node is found.
Traversal: The elements in a tree can be traversed in several different ways,
including in-order, pre-order, and post-order traversal.
Height: The height of the tree can be determined by counting the number of
edges from the root node to the furthest leaf node.
Depth: The depth of a node can be determined by counting the number of
edges from the root node to the current node.
Balancing: The tree can be balanced to ensure that the height of the tree is
minimized and the distribution of nodes is as even as possible.
These are some of the most common operations performed on trees. The
specific operations and algorithms used may vary based on the requirements of
the problem and the programming language used. Trees are commonly used in
applications such as searching, sorting, and storing hierarchical data.
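The in-order, pre-order, and post-order traversals mentioned above can be sketched on a small binary tree; the node values below are made up for illustration:

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def in_order(node):
    # left subtree, then root, then right subtree
    if node is None:
        return []
    return in_order(node.left) + [node.value] + in_order(node.right)

def pre_order(node):
    # root first, then left and right subtrees
    if node is None:
        return []
    return [node.value] + pre_order(node.left) + pre_order(node.right)

def post_order(node):
    # left and right subtrees first, root last
    if node is None:
        return []
    return post_order(node.left) + post_order(node.right) + [node.value]

root = Node(1, Node(2, Node(4), Node(5)), Node(3))
print(in_order(root))    # [4, 2, 5, 1, 3]
print(pre_order(root))   # [1, 2, 4, 5, 3]
print(post_order(root))  # [4, 5, 2, 3, 1]
```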
Real-Life Applications of Tree:
In real life, tree data structure helps in Game Development.
It also helps in indexing in databases.
A Decision Tree is an efficient machine-learning tool, commonly used in
decision analysis. It has a flowchart-like structure that helps to understand data.
Domain Name Server also uses a tree data structure.
The most common use case of a tree is any social networking site.
Explain Graph:
A graph is a non-linear data structure that consists of vertices (or nodes) and
edges. It consists of a finite set of vertices and set of edges that connect a pair
of nodes. The graph is used to solve the most challenging and complex
programming problems. It has different terminologies which are Path, Degree,
Adjacent vertices, Connected components, etc.
Characteristics of Graph:
The graph has various different characteristics which are as follows:
The maximum distance from a vertex to all the other vertices is considered the
Eccentricity of that vertex.
The vertex having minimum Eccentricity is considered the central point of the
graph.
The minimum value of Eccentricity from all vertices is considered the radius of a
connected graph.
Applications of Graph:
Different applications of Graphs are as follows:
The graph is used to represent the flow of computation.
It is used in modeling networks.
The operating system uses Resource Allocation Graph.
Also used in the World Wide Web where the web pages represent the nodes.
Operation performed on Graph:
A graph is a non-linear data structure consisting of nodes and edges. Here are
some common operations performed on graphs:
Add Vertex: New vertices can be added to the graph to represent a new node.
Add Edge: Edges can be added between vertices to represent a relationship
between nodes.
Remove Vertex: Vertices can be removed from the graph by updating the
references of adjacent vertices to remove the reference to the current vertex.
Remove Edge: Edges can be removed by updating the references of the
adjacent vertices to remove the reference to the current edge.
Shortest Path: The shortest path between two vertices can be determined
using algorithms such as Dijkstra’s algorithm or A* algorithm.
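The add and remove operations above can be sketched on an adjacency-list representation, here a dict of sets; the vertex names and helper functions are illustrative:

```python
# Undirected graph stored as an adjacency list (dict of sets)
graph = {}

def add_vertex(g, v):
    g.setdefault(v, set())

def add_edge(g, u, v):
    add_vertex(g, u); add_vertex(g, v)
    g[u].add(v); g[v].add(u)     # undirected: store both directions

def remove_edge(g, u, v):
    g[u].discard(v); g[v].discard(u)

def remove_vertex(g, v):
    # Drop the vertex and all references to it from adjacent vertices
    for neighbour in g.pop(v, set()):
        g[neighbour].discard(v)

add_edge(graph, 'a', 'b')
add_edge(graph, 'a', 'c')
remove_vertex(graph, 'a')
print(graph)  # {'b': set(), 'c': set()}
```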
Abstract Data type (ADT) is a type (or class) for objects whose behavior is
defined by a set of values and a set of operations. The definition of ADT only
mentions what operations are to be performed but not how these operations will
be implemented. It does not specify how data will be organized in memory and
what algorithms will be used for implementing the operations. It is called
“abstract” because it gives an implementation-independent view.
Features of ADT:
Abstract data types (ADTs) are a way of encapsulating data and
operations on that data into a single unit. Some of the key features of
ADTs include:
Abstraction: The user does not need to know the implementation of the
data structure only essentials are provided.
Better Conceptualization: ADT gives us a better conceptualization of the
real world.
Robust: The program is robust and has the ability to catch errors.
Encapsulation: ADTs hide the internal details of the data and provide a
public interface for users to interact with the data. This allows for easier
maintenance and modification of the data structure.
Data Abstraction: ADTs provide a level of abstraction from the
implementation details of the data. Users only need to know the operations
that can be performed on the data, not how those operations are
implemented.
Data Structure Independence: ADTs can be implemented using different
data structures, such as arrays or linked lists, without affecting the
functionality of the ADT.
Information Hiding: ADTs can protect the integrity of the data by allowing
access only to authorized users and operations. This helps prevent errors
and misuse of the data.
Modularity: ADTs can be combined with other ADTs to form larger, more
complex data structures. This allows for greater flexibility and modularity in
programming.
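To make the idea concrete, here is a sketch of a stack ADT in Python using the standard abc module: the abstract class names only the operations, and the list-backed subclass is just one possible implementation (all names here are illustrative):

```python
from abc import ABC, abstractmethod

class StackADT(ABC):
    """Defines WHAT a stack can do, not HOW it is stored."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...
    @abstractmethod
    def is_empty(self): ...

class ListStack(StackADT):
    # One possible implementation; a linked list would satisfy
    # the same interface without changing any calling code.
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self):
        return not self._items

s = ListStack()
s.push(10)
s.push(20)
print(s.pop())       # 20
print(s.is_empty())  # False
```

Swapping ListStack for another subclass illustrates data structure independence: users of StackADT never see the storage details.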
Advantages:
Encapsulation: ADTs provide a way to encapsulate data and operations
into a single unit, making it easier to manage and modify the data structure.
Abstraction: ADTs allow users to work with data structures without having
to know the implementation details, which can simplify programming and
reduce errors.
Data Structure Independence: ADTs can be implemented using different
data structures, which can make it easier to adapt to changing needs and
requirements.
Information Hiding: ADTs can protect the integrity of data by controlling
access and preventing unauthorized modifications.
Modularity: ADTs can be combined with other ADTs to form more complex
data structures, which can increase flexibility and modularity in
programming.
Disadvantages:
Overhead: Implementing ADTs can add overhead in terms of memory and
processing, which can affect performance.
Complexity: ADTs can be complex to implement, especially for large and
complex data structures.
Learning Curve: Using ADTs requires knowledge of their implementation
and usage, which can take time and effort to learn.
Limited Flexibility: Some ADTs may be limited in their functionality or may
not be suitable for all types of data structures.
Cost: Implementing ADTs may require additional resources and investment,
which can increase the cost of development.
Algorithms are a set of instructions that helps us to get the expected output.
To judge the efficiency of an algorithm, we need an analyzing tool.
The theoretical analysis takes all the possible input into consideration and
assumes that the time taken for executing the basic operation is constant.
The primitive or basic operations that take constant time are given as
follows.
Declarations
Assignments
Arithmetic Operations
Comparison statements
Calling functions
Return statement
i = 0              # declaration statement
count = 0          # declaration statement
while count < N:   # comparison statement (N is assumed to be defined earlier)
    count += 1     # arithmetic operation
Operations Frequency
Declaration statement 2
Comparison statement N
Arithmetic operations N
Since the arithmetic operation is within the while loop, it will be executed
for N number of times. Now the total operations performed is 2 + N + N = 2
+ 2N. When the value of N is large, then the constant values don’t make any
difference and are insignificant. Hence, we ignore the constant values.
An algorithm can perform differently based on the given input. There are
three cases to analyze an algorithm. They are worst case, average case, and the
best case.
def linear_search(lst, key):
    # Compare each element with the key until it is found
    for i in range(len(lst)):
        if lst[i] == key:
            print(lst[i], "is at index", i)
            return i
    return -1

list_1 = [4, 6, 7, 1, 5, 2]
linear_search(list_1, 6)
#Output: 6 is at index 1
Best case: The best case is the minimum time required by an algorithm for
the execution of a program. If we are searching for an element in a list, and it is
present at the 0th index, then the number of comparisons to be performed is 1 time.
The time complexity will be O(1) i.e. constant which is the best case time.
Worst case: The worst case is the maximum time required by an algorithm
for the execution of a program. If we are searching for an element in the list of
length “n”, and it’s not present in the list or is present at the ending index, then the
number of comparisons to be performed will be “n” times.
Therefore, the time complexity will be O(n), which is the worst case time.
Average case: In average case analysis, we take the average time of all the
possible inputs. The average case time is given by,
Average case time = All possible cases time / Number of cases
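The three cases above can be illustrated by counting comparisons in a linear search; count_comparisons is an illustrative helper, not part of the original text:

```python
def count_comparisons(lst, key):
    # Returns how many comparisons a linear search makes before stopping
    for i, value in enumerate(lst):
        if value == key:
            return i + 1
    return len(lst)

list_1 = [4, 6, 7, 1, 5, 2]
print(count_comparisons(list_1, 4))   # 1  -> best case: key at index 0, O(1)
print(count_comparisons(list_1, 2))   # 6  -> worst case: key at the end, O(n)
print(count_comparisons(list_1, 99))  # 6  -> worst case: key absent, O(n)
```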
The big O notation measures the upper bound on the running time of an
algorithm. Big O time complexity describes the worst case. Consider a
function f(n). We choose another function g(n) such that f(n) <= c.g(n), for n >
n0 and c > 0. Here, c and n0 represent constant values. If the equation is
satisfied, f(n) = O(g(n)).
Big Omega
The big Omega measures the lower bound on the running time of an
algorithm. Big Omega time complexity describes the best case. Consider a
function f(n). We choose another function g(n) such that c.g(n) <= f(n), for n >
n0 and c > 0. Here, c and n0 represent constant. If the equation is satisfied, f(n) =
Omega(g(n)).
Big Theta
The big Theta measures the time between the upper and the lower bound of
an algorithm. Big Theta describes the time complexity within the bounds of the
best and worst case.
Consider a function f(n). We choose another function g(n) such that c1.g(n)
<= f(n) <= c2.g(n), for n > n0 and c1, c2 > 0. Here, c1, c2, and n0 represent
constant values. If the equation is satisfied, f(n) = Theta(g(n)).
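A quick numeric check of the Theta definition, using the assumed example f(n) = 3n + 2 with g(n) = n: the choices c1 = 3, c2 = 4, and n0 = 2 satisfy c1.g(n) <= f(n) <= c2.g(n) for n > n0, so f(n) = Theta(n):

```python
# Verify c1*g(n) <= f(n) <= c2*g(n) for all n > n0 (checked over a sample range)
f = lambda n: 3*n + 2
g = lambda n: n
c1, c2, n0 = 3, 4, 2

ok = all(c1*g(n) <= f(n) <= c2*g(n) for n in range(n0 + 1, 1000))
print(ok)  # True
```

The lower bound holds because 3n <= 3n + 2 always, and the upper bound 3n + 2 <= 4n holds exactly when n >= 2.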
Constant: O(1)
Logarithmic: O(log N)
Linear: O(N)
Quadratic: O(N^2)
Cubic: O(N^3)
Exponential: O(2^N)
We can analyze the efficiency of an algorithm based on its performance as
the size of input grows. The time complexity of an algorithm is commonly
expressed in Big O notation
We use the worst-case time complexity because it ensures that the running
time of an algorithm will not exceed the worst-case time. The performance
classification is given as follows.
O(1) – Constant Time Complexity
For the constant time complexity, the running time of an algorithm doesn't
change and remains constant irrespective of the size of the input data. Consider a
list list_1 = [4, 6, 7, 1, 5, 2]. Accessing a specific element using indexing
takes a constant amount of time.
list_1 = [4, 6, 7, 1, 5, 2]
print(list_1[4]) #accessing element at the 4th index.
#Output: 5
O(log N) – Logarithm Time Complexity
Consider a sorted list list_1 = [1, 2, 3, 6, 7]. We use binary search to find the
key element 6. The binary search algorithm divides the input size in half in each
iteration. Therefore, the time complexity of the algorithm reduces to O(log N).
def binary_search(list_1, low, high, key):
    while low <= high:
        mid = low + (high - low) // 2
        if list_1[mid] == key:
            return mid
        elif list_1[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return "Not Found"
list_1 = [1, 2, 3, 6, 7]
low = 0
high = len(list_1) - 1
key = 6
ind = binary_search(list_1, low, high, key)
print("Element 6 is present at index: ", ind)
#Output: Element 6 is present at index: 3
O(N) – Linear Time Complexity
Consider a matrix mat_1 = [[1, 2, 3], [1, 1, 1], [5, 7, 8]]. We initialize a
variable add with 0 and use a nested for loop to traverse each element of the
matrix. In each iteration, we add the element to the variable add. Although the
loops are nested, each element of the matrix is visited exactly once, so the
running time is linear in the total number of elements N.
mat_1 = [[1, 2, 3], [1, 1, 1], [5, 7, 8]]
add = 0
for i in range(len(mat_1)):
    for j in range(len(mat_1[0])):
        add += mat_1[i][j]
print(add)
#Output: 29
O(2^N) – Exponential Time Complexity
An algorithm whose number of operations doubles with each additional input
element, such as the naive recursive Fibonacci shown later, has exponential
time complexity.
Space Complexity
For example, the space complexity for a list of length N will be O(N) and
the space complexity for a matrix of size N x N will be O(N^2).
In recursive algorithms, the stack space is also taken into account. For
example, consider the following program. In each iteration, we call the
function my_func() recursively, and each recursive call adds a layer to the stack
memory. Hence, the space complexity is O(N).
def my_func(num):
    if num > 0:
        my_func(num - 1)
        print(num, end=" ")
my_func(5)
However, N calls don’t mean that the space complexity will be O(N). Let us
consider two functions, func_1 and func_2. In the function func_1, we call the
function func_2 for N number of times. But, all these calls will not be added to the
stack memory simultaneously. Thus, we need only O(1) space.
def func_1(n):
    for i in range(n):
        add = func_2(i)
        print(add)

def func_2(x):
    return x + 1
Adding vs Multiplying Time Complexities
Let us look at the following example. There are two for loops (loop A and
loop B); loop B is nested inside loop A, and they run for ranges of M and N
respectively. Because every iteration of loop A runs all N iterations of loop B,
the time complexities multiply, giving O(M*N).
#Example 2
for i in range(M):      # loop A
    for j in range(N):  # loop B
        print(i, j)
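The difference between adding and multiplying complexities can be sketched as follows (a small illustrative example of our own, counting iterations directly): loops that run one after another add their costs, O(M + N), while a loop nested inside another multiplies them, O(M * N).

```python
def sequential_loops(M, N):
    # loop A, then loop B: iteration counts add up -> O(M + N)
    count = 0
    for i in range(M):
        count += 1
    for j in range(N):
        count += 1
    return count

def nested_loops(M, N):
    # loop B inside loop A: iteration counts multiply -> O(M * N)
    count = 0
    for i in range(M):
        for j in range(N):
            count += 1
    return count

print(sequential_loops(3, 4))  # 7  (3 + 4)
print(nested_loops(3, 4))      # 12 (3 * 4)
```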
Time Complexity of Recursive Calls
my_func(5)
Let us assume that the total time taken by my_func() is T(n). Hence, we can
say that the recursive statement my_func(n-1) will take T(n-1) time. The other
basic operations and statements take 1 unit of time.
Now, we can write
T(n) = O(1) + O(1) + T(n-1)
T(n) = T(n-1) + O(1)
#Taking constant time as 1
T(n) = T(n-1) + 1
The statements inside the if block will be executed, if n>0, and the time will
be T(n-1) +1. If n=0, then only the conditional statement will be tested and the time
will be 1 unit. Thus, the recurrence relation for T(n) is given as
T(n) = T(n-1) + 1, if n > 0
= 1 , if n = 0
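The recurrence can be sanity-checked by counting calls directly. Here my_func_counted is our own illustrative variant of the my_func shown earlier, rewritten to return how many calls it makes; the count grows linearly, matching T(n) = T(n-1) + 1.

```python
def my_func_counted(num):
    # returns the total number of calls made, mirroring T(n) = T(n-1) + 1
    if num > 0:
        return 1 + my_func_counted(num - 1)
    return 1  # T(0) = 1: only the conditional test runs

print(my_func_counted(5))   # 6 calls for n = 5
print(my_func_counted(10))  # 11 calls for n = 10
```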
Consider the example given below. We define a function f(n) that takes a
number n as input and calls the function recursively 2 times.
def f(n):
    if n <= 1:
        return 1
    return f(n-1) + f(n-2)
f(4)
When multiple recursive calls are made, we can represent the time complexity
as O(branches^depth). Here, branches represents the number of children of each
node, i.e., the number of recursive calls made in each invocation, and depth
represents the depth of the recursion tree, which is determined by the parameter
of the recursive function.
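For the f(n) above (branches = 2, depth roughly n), we can count the actual number of calls and check that it stays below the 2^n bound. This counting helper is our own verification sketch, not part of the original text:

```python
def count_calls(n):
    # total number of recursive calls made by f(n) = f(n-1) + f(n-2)
    if n <= 1:
        return 1
    return 1 + count_calls(n - 1) + count_calls(n - 2)

for n in range(1, 8):
    # the call count never exceeds branches^depth = 2^n
    print(n, count_calls(n), 2 ** n)
```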
1. Implementation Method
2. Design Method
3. Design Approaches
4. Other Classifications
1. if (E = 0) then
2. exit
3. else
4. set l ← l - 1.
For each iteration, the bubble sort will compare up to the last unsorted element.
Once all the elements get sorted in the ascending order, the algorithm will get
terminated.
Consider the following example of an unsorted array that we will sort with the help of
the Bubble Sort algorithm.
Initially,
Example2
o Compare a2 and a3
o Compare a3 and a4
Here a3 > a4, so we will again swap both of them.
Pass 2:
o Compare a0 and a1
o Compare a1 and a2
o Compare a2 and a3
In this case, a2 > a3, so both of them will get swapped.
Pass 3:
o Compare a0 and a1
o Compare a1 and a2
o Compare a0 and a1
Logic: If we are given n elements, then in the first pass, it will do n-1 comparisons; in the
second pass, it will do n-2; in the third pass, n-3; and so on. Thus, the total
number of comparisons can be found by:
(n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2, i.e., O(n^2)
Therefore, the bubble sort algorithm has a time complexity of O(n^2) and a
space complexity of O(1), because it needs only a single extra temp variable for
swapping.
Time Complexities:
o Best Case Complexity: The bubble sort algorithm has a best-case time complexity
of O(n) for an already sorted array.
o Average Case Complexity: The average-case time complexity for the bubble sort
algorithm is O(n2), which happens when the elements are in jumbled order, i.e., neither
in the ascending order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which occurs
when we sort an array that is in descending order into ascending order.
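The passes described above can be sketched in Python as follows. This is a minimal implementation of the standard algorithm; the sample array is our own illustration. The swapped flag is what gives the O(n) best case on already sorted input:

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):            # at most n-1 passes
        swapped = False
        for j in range(n - 1 - i):    # compare up to the last unsorted element
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap adjacent pair
                swapped = True
        if not swapped:               # no swaps: already sorted, stop early
            break
    return arr

print(bubble_sort([33, 55, 11, 22, 44]))  # [11, 22, 33, 44, 55]
```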
Selection Sort
The selection sort improves on the bubble sort by making only a single swap for each pass
through the list. In order to do this, a selection sort searches for the biggest value
as it makes a pass and, after completing the pass, places it in its proper location.
As with a bubble sort, after the first pass, the biggest item is in the right place.
After the second pass, the next biggest is in place. This procedure continues and
requires n-1 passes to sort n items, since the final item must be in place after the (n-1)th
pass.
A[]=(7,4,3,6,5).
A [] =
1st Iteration:
Set minimum = 7
o Compare a0 and a1
o Compare a1 and a2
o Compare a2 and a4
2nd Iteration:
Set minimum = 4
o Compare a1 and a2
o Compare a1 and a4
Since the minimum is already placed in the correct position, so there will be no
swapping.
3rd Iteration:
Set minimum = 7
o Compare a2 and a3
As, a2 > a3, set minimum = 6.
o Compare a3 and a4
Since 5 is the smallest element among the leftover unsorted elements, so we will swap 7
and 5.
4th Iteration:
Set minimum = 6
o Compare a3 and a4
Since the minimum is already placed in the correct position, so there will be no
swapping.
Complexity Analysis of Selection Sort
Input: Given n input elements.
Logic: If we are given n elements, then in the first pass, it will do n-1 comparisons; in the
second pass, it will do n-2; in the third pass, n-3; and so on. Thus, the total
number of comparisons can be found by:
(n-1) + (n-2) + (n-3) + ... + 1 = n(n-1)/2, i.e., O(n^2)
Therefore, the selection sort algorithm has a time complexity of O(n^2) and a
space complexity of O(1), because it needs only a single extra temp variable for
swapping.
Time Complexities:
o Best Case Complexity: The selection sort algorithm has a best-case time complexity
of O(n2), even for an already sorted array.
o Average Case Complexity: The average-case time complexity for the selection sort
algorithm is O(n2), in which the existing elements are in jumbled order, i.e., neither in
the ascending order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which occurs
when we sort an array that is in descending order into ascending order.
In the selection sort algorithm, the time complexity is O(n2) in all three cases. This is
because, in each step, we are required to find the minimum element so that it can be
placed in its correct position, and only after tracing the complete unsorted portion do
we find that minimum element.
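The iterations above can be sketched as a minimal Python implementation. This version selects the minimum element in each pass, matching the trace; A = [7, 4, 3, 6, 5] is the example array used earlier:

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # find the index of the minimum element in the unsorted part
        minimum = i
        for j in range(i + 1, n):
            if arr[j] < arr[minimum]:
                minimum = j
        # a single swap per pass places the minimum in its correct position
        if minimum != i:
            arr[i], arr[minimum] = arr[minimum], arr[i]
    return arr

print(selection_sort([7, 4, 3, 6, 5]))  # [3, 4, 5, 6, 7]
```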
Linear Search
1. Start from the first element, and compare the key=6 with
each element x.
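The step above can be sketched as a minimal linear search. The list and key = 6 follow the earlier binary-search example; returning "Not Found" on failure mirrors that example's convention:

```python
def linear_search(list_1, key):
    # compare the key with each element, starting from the first
    for index, x in enumerate(list_1):
        if x == key:
            return index
    return "Not Found"

list_1 = [1, 2, 3, 6, 7]
print(linear_search(list_1, 6))   # 3
print(linear_search(list_1, 10))  # Not Found
```

Unlike binary search, linear search does not require the list to be sorted, but it inspects up to all N elements, giving O(N) time.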
Insertion sort
Insertion sort is one of the simplest sorting algorithms for the reason that it sorts a
single element at a particular instance. It is not the best sorting algorithm in terms of
performance, but it's slightly more efficient than selection sort and bubble sort in
practical scenarios. It is an intuitive sorting technique.
Let's consider the example of cards to have a better understanding of the logic behind
the insertion sort.
Suppose we have a set of cards in our hand, such that we want to arrange these cards in
ascending order. To sort these cards, we have a number of intuitive ways.
One such thing we can do is initially we can hold all of the cards in our left hand, and we
can start taking cards one after other from the left hand, followed by building a sorted
arrangement in the right hand.
Assuming the first card to be already sorted, we will select the next unsorted card. If the
unsorted card is found to be greater than the selected card, we will simply place it on
the right side, else to the left side. At any stage during this whole process, the left hand
will be unsorted, and the right hand will be sorted.
In the same way, we will sort the rest of the unsorted cards by placing them in the
correct position. At each iteration, the insertion algorithm places an unsorted element at
its right place.
2. Now, we will move on to the third element and compare it with the left-hand side
elements. If it is the smallest element, then we will place the third element at the first
index.
Else, if it is greater than the first element and smaller than the second element, we
will place it between the first and second elements.
After doing this, we will have our first three elements in a sorted manner.
3. Similarly, we will sort the rest of the elements and place them in their correct position.
Consider the following example of an unsorted array that we will sort with the help of
the Insertion Sort algorithm.
Initially,
1st Iteration:
Set key = 22
Compare a1 with a0
Set key = 63
3rd Iteration:
Set key = 14
4th Iteration:
Set key = 55
Set key = 36
Since a5 < a2, so we will place the elements in their correct positions.
Logic: If we are given n elements, then in the first pass, it will make n-1 comparisons; in
the second pass, it will do n-2; in the third pass, it will do n-3 and so on. Thus, the total
number of comparisons can be found by:
(n-1) + (n-2) + (n-3) + (n-4) + ...... + 1
Sum = n(n-1)/2
i.e., O(n2)
Therefore, the insertion sort algorithm encompasses a time complexity of O(n2) and a
space complexity of O(1) because it necessitates some extra memory space for
a key variable to perform swaps.
Time Complexities:
o Best Case Complexity: The insertion sort algorithm has a best-case time complexity
of O(n) for an already sorted array because, in that case, only the outer loop runs n times
and the inner loop never executes.
o Average Case Complexity: The average-case time complexity for the insertion sort
algorithm is O(n2), which is incurred when the existing elements are in jumbled order,
i.e., neither in the ascending order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which occurs
when the array is sorted in the reverse of the desired order.
In this algorithm, every individual element is compared with the sorted elements before
it, due to which up to n-1 comparisons are made for the nth element.
The insertion sort algorithm is highly recommended, especially when a few elements are
left for sorting or in case the array encompasses few elements.
Space Complexity
The insertion sort encompasses a space complexity of O(1) due to the usage of an extra
variable key.
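The procedure above can be sketched as a minimal Python implementation. The key variable holds the element currently being inserted, as in the iterations described earlier; the sample array [22, 63, 14, 55, 36] is our own illustration built from those key values:

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]              # the next unsorted element
        j = i - 1
        # shift larger sorted elements one position to the right
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key          # place key at its correct position
    return arr

print(insertion_sort([22, 63, 14, 55, 36]))  # [14, 22, 36, 55, 63]
```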
A Divide and Conquer algorithm solves a problem using the following three steps: divide, conquer, and combine. Several well-known algorithms follow this paradigm.
1. Binary Search: The binary search algorithm is a searching algorithm, which is also called
a half-interval search or logarithmic search. It works by comparing the target value with
the middle element existing in a sorted array. After making the comparison, if the value
differs, then the half that cannot contain the target will eventually eliminate, followed by
continuing the search on the other half. We will again consider the middle element and
compare it with the target value. The process keeps on repeating until the target value is
met. If we found the other half to be empty after ending the search, then it can be
concluded that the target is not present in the array.
2. Quicksort: It is one of the most efficient sorting algorithms; it is also known as partition-
exchange sort. It starts by selecting a pivot value from an array, followed by dividing the
rest of the array elements into two sub-arrays. The partition is made by comparing each
of the elements with the pivot value: each element is placed according to whether it
holds a greater or lesser value than the pivot, and then the sub-arrays are sorted recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts
by dividing an array into sub-array and then recursively sorts each of them. After the
sorting is done, it merges them back.
4. Closest Pair of Points: It is a problem of computational geometry. This algorithm
emphasizes finding out the closest pair of points in a metric space, given n points, such
that the distance between the pair of points should be minimal.
5. Strassen's Algorithm: It is an algorithm for matrix multiplication, which is named after
Volker Strassen. It has proven to be much faster than the traditional algorithm when
works on large matrices.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform
algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and Conquer
approach and has a complexity of O(n log n).
7. Karatsuba algorithm for fast multiplication: It is one of the fastest multiplication
algorithms, discovered by Anatoly Karatsuba in 1960 and published in 1962. It multiplies
two n-digit numbers by reducing the multiplication to at most n^(log2 3) ≈ n^1.585
single-digit multiplications.
Suppose we have an array A, such that our main concern will be to sort the subsection,
which starts at index p and ends at index r, represented by A[p..r].
Divide
Conquer
After splitting the arrays into two halves, the next step is to conquer. In this step, we
individually sort both of the subarrays A[p..q] and A[q+1, r]. In case if we did not reach
the base situation, then we again follow the same procedure, i.e., we further segment
these subarrays followed by sorting them separately.
Combine
As when the base step is acquired by the conquer step, we successfully get our sorted
subarrays A[p..q] and A[q+1, r], after which we merge them back to form a new sorted
array [p..r].
ALGORITHM MERGE-SORT (A, p, r)
1. If p < r
2. Then q ← (p + r)/2
3. MERGE-SORT (A, p, q)
4. MERGE-SORT (A, q+1, r)
5. MERGE (A, p, q, r)
As you can see in the image given below, the merge sort algorithm recursively divides
the array into halves until the base condition is met, where we are left with only 1
element in the array. And then, the merge function picks up the sorted sub-arrays and
merge them back to sort the entire array.
The merge sort algorithm upholds three pointers, i.e., one for both of the two arrays and
the other one to preserve the final sorted array's current index.
A= (36,25,40,2,7,80,15)
Step1: The merge sort algorithm iteratively divides an array into equal halves until we
achieve an atomic value. In case if there are an odd number of elements in an array,
then one of the halves will have more elements than the other half.
Step2: After dividing an array into two subarrays, we will notice that it does not hamper
the order of elements as they were in the original array. Next, we will further divide
these two arrays into halves.
Step3: Again, we will divide these arrays until we achieve an atomic value, i.e., a value
that cannot be further divided.
Step4: Next, we will merge them back in the same way as they were broken down.
Step5: For each list, we will first compare the element and then combine them to form a
new sorted list.
Step6: In the next iteration, we will compare the lists of two data values and merge
them back into lists of four data values, all placed in a sorted manner.
Hence the array is sorted.
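The divide-and-merge steps above can be sketched in Python as a minimal recursive implementation; A = [36, 25, 40, 2, 7, 80, 15] is the example array from the trace. The merge step uses three positions: one per subarray and one (implicitly, the end of the merged list) for the result:

```python
def merge_sort(arr):
    if len(arr) <= 1:          # atomic value: base condition
        return arr
    mid = len(arr) // 2        # divide into two halves
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])
    # merge the two sorted halves back together
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])    # copy any remaining elements
    merged.extend(right[j:])
    return merged

print(merge_sort([36, 25, 40, 2, 7, 80, 15]))  # [2, 7, 15, 25, 36, 40, 80]
```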
Merging two sorted halves of total size n takes at most n-1 comparisons; we ignore
the '-1' because each element also takes some time to be copied into the merged list.
So T (n) = 2T(n/2) + n ...equation 1
Note: Stopping Condition T (1) = 0 because at last, there will be only 1 element left
that needs to be copied, and there will be no comparison.
Expanding the recurrence i times gives T(n) = 2^i · T(n/2^i) + i·n. Setting n/2^i = 1
gives n = 2^i, so
log n = log 2^i
log n = i log 2
log2 n = i
Substituting back, T(n) = n·T(1) + n·log2 n = O(n log n).
Best Case Complexity: The merge sort algorithm has a best-case time complexity
of O(n*log n) for the already sorted array.
Average Case Complexity: The average-case time complexity for the merge sort
algorithm is O(n*log n), which happens when the elements are in jumbled order, i.e.,
neither in the ascending order nor in the descending order.
Worst Case Complexity: The worst-case time complexity is also O(n*log n), which
occurs when we sort the descending order of an array into the ascending order.
Quick sort
It is an algorithm of Divide & Conquer type.
Divide: Rearrange the elements and split the array into two sub-arrays with a pivot
element in between, such that each element in the left sub-array is less than or equal
to the pivot element and each element in the right sub-array is larger than the pivot
element.
Algorithm:
Partition Algorithm:
Partition algorithm rearranges the sub arrays in a place.
Let 44 be the pivot element, with scanning done from right to left.
22 33 11 55 77 90 40 60 99 44 88
Now compare 44 with the elements on its left side; if an element is greater than 44,
swap them. As 55 is greater than 44, swap them.
22 33 11 44 77 90 40 60 99 55 88
Recursively repeat steps 1 & 2 until we get two lists, one to the left of the pivot
element 44 and one to its right.
22 33 11 40 77 90 44 60 99 55 88
22 33 11 40 44 90 77 60 99 55 88
Now, the elements on the right side and left side are greater than and smaller
than 44, respectively.
These sub-lists are then sorted by the same process as above.
SORTED LISTS
Worst Case Analysis: This is the case when the items are already in sorted form and we
try to sort them again. This takes a lot of time.
Equation:
1. T (n) = T(1) + T(n-1) + n
where n is the number of comparisons required for the pivot element to identify its
exact position.
If we compare the first element (the pivot) with the others, then there will be n-1
comparisons (for example, 5 comparisons for 6 elements).
Worst case: expanding T(n) = T(n-1) + n repeatedly gives
T(n) = T(n-2) + (n-1) + n
= T(n-3) + (n-2) + (n-1) + n
= ...
= T(1) + 2 + 3 + ... + n
Note: to make T(n-k) reach T(1) we put k = n-1; at that point only one element is left
and no comparison is required. Hence
T(n) = T(1) + 2 + 3 + ... + n = n(n+1)/2 - 1 = O(n^2)
Average case: the pivot does n comparisons, and each of the n possible pivot positions
is equally likely, so
T(n) = n + 1 + (2/n) (T(0) + T(1) + T(2) + ... + T(n-2) + T(n-1))
Solving this recurrence (by substituting n-1 for n, subtracting the two equations, and
telescoping the resulting terms) gives T(n) = O(n log n).
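The partitioning scheme above can be sketched with a simple recursive quicksort. This is a minimal illustrative version that picks the first element as the pivot, as in the worked trace; production implementations usually partition in place and choose the pivot more carefully:

```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[0]  # first element as pivot, as in the worked example
    left = [x for x in arr[1:] if x <= pivot]   # elements <= pivot
    right = [x for x in arr[1:] if x > pivot]   # elements > pivot
    # sort both partitions recursively and place the pivot in between
    return quick_sort(left) + [pivot] + quick_sort(right)

print(quick_sort([44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88]))
# [11, 22, 33, 40, 44, 55, 60, 77, 88, 90, 99]
```

On already sorted input, the first-element pivot makes one partition empty every time, which is exactly the O(n^2) worst case analyzed above.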
Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target =
23.
First Step: Calculate the mid and compare the mid element with the key. If the
key is less than the mid element, move the search space to the left, and if it is
greater than the mid element, move the search space to the right.
The key (i.e., 23) is greater than the current mid element (i.e., 16), so the search
space moves to the right.
Compare the key with the new mid element, 56. The key is less than 56, so the
search space moves to the left.
Second Step: If the key matches the value of the mid element, the element is
found and we stop the search.
Fibonacci(n)
= 0 for n = 0
= 1 for n = 1
= Fibonacci(n-1) + Fibonacci(n-2) otherwise
def fib_recur(x):
    if x == 0:
        return 0
    elif x == 1:
        return 1
    else:
        return fib_recur(x - 1) + fib_recur(x - 2)

if __name__ == "__main__":
    result = fib_recur(10)
    print(result)
#Output: 55
Now as you can see in the picture above while you are calculating Fibonacci(4)
you need Fibonacci(3) and Fibonacci(2), Now for Fibonacci(3), you need Fibonacci
(2) and Fibonacci (1) but you notice you have calculated Fibonacci(2) while
calculating Fibonacci(4) and again calculating it. So we are solving many sub-
problems again and again.
Time Complexity:
T(n) = T(n-1) + T(n-2) + 1, which grows as roughly 2^n, i.e., O(2^n)
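The repeated sub-problems can be avoided by storing each result the first time it is computed, a technique called memoization. This memoized version is our own sketch; its recursive structure is the same as fib_recur above:

```python
def fib_memo(n, memo=None):
    # store each Fibonacci(n) the first time it is computed
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n == 0:
        return 0
    if n == 1:
        return 1
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(10))  # 55, computed in O(n) time instead of O(2^n)
```

Each value of n is now computed only once, so the number of calls grows linearly rather than exponentially.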
The algorithm never reverses the earlier decision even if the choice is wrong.
It works in a top-down approach.
This algorithm may not produce the best result for all the problems. It's
because it always goes for the local best choice to produce the global best
result.
However, we can determine if the algorithm can be used with any problem if
the problem has the following properties:
2. Optimal Substructure
If the optimal overall solution to the problem corresponds to the optimal
solution to its subproblems, then the problem can be solved using a greedy
approach. This property is called optimal substructure.
Greedy Approach
1. Let's start with the root node 20. The weight of the right child is 3 and the
weight of the left child is 2.
2. Our problem is to find the largest path. And, the optimal solution at the
moment is 3. So, the greedy algorithm will choose 3.
3. Finally the weight of an only child of 3 is 1. This gives us our final result 20 +
3 + 1 = 24 .
However, it is not the optimal solution. There is another path that carries more
weight ( 20 + 2 + 10 = 32 ) as shown in the image below.
Longest path
Greedy Algorithm
1. To begin with, the solution set (containing answers) is empty.
2. At each step, an item is added to the solution set until a solution is reached.
Amount: $18
$5 coin
$2 coin
$1 coin
Solution:
1. Create an empty solution-set = { } . Available coins are {5, 2, 1} .
3. Always select the coin with the largest value (i.e., 5) as long as adding it does not
make the sum exceed 18. (When we select the largest value at each step, we hope to
reach the destination faster. This concept is called the greedy choice property.)
4. In the first iteration, solution-set = {5} and sum = 5 .
5. In the following iterations, solution-set = {5, 5} with sum = 10, then {5, 5, 5} with
sum = 15. Since adding another 5 would exceed 18, we select 2 (sum = 17) and then 1
(sum = 18), giving solution-set = {5, 5, 5, 2, 1}.
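The steps above can be sketched as a small greedy coin-change function. This is our own illustrative implementation of the procedure for this example; note that the greedy approach is not guaranteed to use the fewest coins for every coin system:

```python
def greedy_coin_change(amount, coins):
    coins = sorted(coins, reverse=True)  # always try the largest coin first
    solution_set = []
    total = 0
    for coin in coins:
        # keep taking this coin while it does not make the sum exceed the amount
        while total + coin <= amount:
            solution_set.append(coin)
            total += coin
    return solution_set, total

solution, total = greedy_coin_change(18, [5, 2, 1])
print(solution, total)  # [5, 5, 5, 2, 1] 18
```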
Different Types of Greedy Algorithm
Selection Sort
Knapsack Problem
Minimum Spanning Tree
Single-Source Shortest Path Problem
Job Scheduling Problem
Linked List
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    # insert a new node at the beginning of the list
    def insertAtBegin(self, data):
        new_node = Node(data)
        if self.head is None:
            self.head = new_node
            return
        else:
            new_node.next = self.head
            self.head = new_node

    # insert a new node at the given index
    def insertAtIndex(self, data, index):
        new_node = Node(data)
        current_node = self.head
        position = 0
        if position == index:
            self.insertAtBegin(data)
        else:
            while current_node is not None and position + 1 != index:
                position = position + 1
                current_node = current_node.next
            if current_node is not None:
                new_node.next = current_node.next
                current_node.next = new_node
            else:
                print("Index not present")

    # insert a new node at the end of the list
    def insertAtEnd(self, data):
        new_node = Node(data)
        if self.head is None:
            self.head = new_node
            return
        current_node = self.head
        while current_node.next:
            current_node = current_node.next
        current_node.next = new_node

    # update the data of the node at the given position
    def updateNode(self, val, index):
        current_node = self.head
        position = 0
        if position == index:
            current_node.data = val
        else:
            while current_node is not None and position != index:
                position = position + 1
                current_node = current_node.next
            if current_node is not None:
                current_node.data = val
            else:
                print("Index not present")

    def remove_first_node(self):
        if self.head is None:
            return
        self.head = self.head.next

    def remove_last_node(self):
        if self.head is None:
            return
        current_node = self.head
        while current_node.next.next:
            current_node = current_node.next
        current_node.next = None

    # remove the node at the given index
    def remove_at_index(self, index):
        if self.head is None:
            return
        current_node = self.head
        position = 0
        if position == index:
            self.remove_first_node()
        else:
            while current_node is not None and position + 1 != index:
                position = position + 1
                current_node = current_node.next
            if current_node is not None:
                current_node.next = current_node.next.next
            else:
                print("Index not present")

    # remove the first node holding the given data
    def remove_node(self, data):
        current_node = self.head
        if current_node.data == data:
            self.remove_first_node()
            return
        while current_node is not None and current_node.next.data != data:
            current_node = current_node.next
        if current_node is None:
            return
        else:
            current_node.next = current_node.next.next

    # print every element of the list
    def printLL(self):
        current_node = self.head
        while current_node:
            print(current_node.data)
            current_node = current_node.next

    # return the number of nodes in the list
    def sizeOfLL(self):
        size = 0
        if self.head:
            current_node = self.head
            while current_node:
                size = size + 1
                current_node = current_node.next
            return size
        else:
            return 0
The node contains a pointer to the next node means that the node stores the
address of the next node in the sequence. A single linked list allows the
traversal of data only in one way. Below is the image for the same:
# Node of a singly linked list
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
# structure of Node
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.last_node = None

    def append(self, data):
        # if linked list is empty then last_node will be None, so head will get the new node
        if self.last_node is None:
            self.head = Node(data)
            self.last_node = self.head
        else:
            self.last_node.next = Node(data)
            self.last_node = self.last_node.next

    # function to print the content of linked list
    def display(self):
        current = self.head
        while current is not None:
            print(current.data, end=' ')
            current = current.next
        print()

# Driver code
if __name__ == '__main__':
    L = LinkedList()
    L.append(1)
    L.append(2)
    L.append(3)
    # displaying elements of linked list
    L.display()
Output
1 2 3
Time Complexity: O(N)
Auxiliary Space: O(N)
2. Doubly Linked List
A doubly linked list or a two-way linked list is a more complex type of linked list
that contains a pointer to the next as well as the previous node in sequence.
Therefore, it contains three parts of data, a pointer to the next node, and a
pointer to the previous node. This would enable us to traverse the list in the
backward direction as well. Below is the image for the same:
# structure of Node
class Node:
    def __init__(self, data):
        self.previous = None
        self.data = data
        self.next = None
# structure of Node
class Node:
    def __init__(self, data):
        self.previous = None
        self.data = data
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.last_node = None

    def append(self, data):
        if self.last_node is None:
            self.head = Node(data)
            self.last_node = self.head
        else:
            new_node = Node(data)
            self.last_node.next = new_node
            new_node.previous = self.last_node
            new_node.next = None
            self.last_node = new_node

    # function for printing and traversing the content of the doubly linked
    # list from left to right and from right to left
    def display(self, Type):
        if Type == 'Left_To_Right':
            current = self.head
            while current is not None:
                print(current.data, end=' ')
                current = current.next
            print()
        else:
            current = self.last_node
            while current is not None:
                print(current.data, end=' ')
                current = current.previous
            print()

# Driver code
if __name__ == '__main__':
    L = DoublyLinkedList()
    L.append(1)
    L.append(2)
    L.append(3)
    L.append(4)
    L.display('Left_To_Right')
    L.display('Right_To_Left')
Output
Traversal in forward direction
1 2 3 4
Traversal in reverse direction
4 3 2 1
Time Complexity:
The time complexity of the append() function is O(1) as it performs constant-time
operations to insert a new node at the end of the doubly linked list. The
time complexity of the display() function is O(n), where n is the number of
nodes in the doubly linked list, because it traverses the entire list, once in the
forward direction and once in the backward direction. Therefore, the
overall time complexity of the program is O(n).
Space Complexity:
The space complexity of the program is O(n) as it uses a doubly linked list to
store the data, which requires n nodes. Additionally, it uses a constant amount
of auxiliary space to create a new node in the append() function. Therefore, the
overall space complexity of the program is O(n).
3. Circular Linked List
A circular linked list is that in which the last node contains the pointer to the first
node of the list.
While traversing a circular linked list, we can begin at any node and traverse the
list in any direction forward and backward until we reach the same node we
started. Thus, a circular linked list has no beginning and no end. Below is the
image for the same:
Below is the structure of the Circular Linked List:
# structure of Node
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None
# Circular LL
# structure of Node
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class CircularLinkedList:
    def __init__(self):
        self.head = None
        self.last_node = None

    def append(self, data):
        if self.last_node is None:
            self.head = Node(data)
            self.last_node = self.head
        else:
            self.last_node.next = Node(data)
            self.last_node = self.last_node.next
        # the last node always points back to the head
        self.last_node.next = self.head

    def display(self):
        current = self.head
        while current is not None:
            print(current.data, end=' ')
            current = current.next
            if current == self.head:
                break
        print()

# Driver code
if __name__ == '__main__':
    L = CircularLinkedList()
    L.append(12)
    L.append(56)
    L.append(2)
    L.append(11)
    # Function call
    L.display()
Output
Contents of Circular Linked List
12 56 2 11
Time Complexity:
Insertion at the end of the circular linked list takes O(1) time complexity, since
we keep a pointer to the last node.
Traversing and printing all nodes in the circular linked list takes O(n) time
complexity where n is the number of nodes in the linked list.
Therefore, the overall time complexity of the program is O(n).
Auxiliary Space:
The space required by the program depends on the number of nodes in the
circular linked list.
In the worst-case scenario, when there are n nodes, the space complexity of
the program will be O(n) as n new nodes will be created to store the data.
Additionally, some extra space is required for the temporary variables and the
function calls.
Therefore, the auxiliary space complexity of the program is O(n).
4. Doubly Circular linked list
A Doubly Circular linked list or a circular two-way linked list is a more complex
type of linked list that contains a pointer to the next as well as the previous node
in the sequence. The difference between the doubly linked and circular doubly
list is the same as that between a singly linked list and a circular linked list. The
circular doubly linked list does not contain null in the previous field of the first
node. Below is the image for the same:
Below is the structure of the Doubly Circular Linked List:
# structure of Node
class Node:
    def __init__(self, data):
        self.previous = None
        self.data = data
        self.next = None
class Node:
    def __init__(self, data):
        self.previous = None
        self.data = data
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.last_node = None

    def append(self, data):
        if self.last_node is None:
            self.head = Node(data)
            self.last_node = self.head
            # a single node points to itself in both directions
            self.head.next = self.head
            self.head.previous = self.head
        else:
            new_node = Node(data)
            self.last_node.next = new_node
            new_node.previous = self.last_node
            new_node.next = self.head
            self.head.previous = new_node
            self.last_node = new_node

    def display(self, Type):
        if Type == 'Left_To_Right':
            current = self.head
            while current is not None:
                print(current.data, end=' ')
                current = current.next
                if current == self.head:
                    break
            print()
        else:
            current = self.last_node
            while current is not None:
                print(current.data, end=' ')
                current = current.previous
                if current == self.last_node:
                    break
            print()

if __name__ == '__main__':
    L = DoublyLinkedList()
    L.append(1)
    L.append(2)
    L.append(3)
    L.append(4)
    L.display('Left_To_Right')
    L.display('Right_To_Left')
Output
Traversal in forward direction
1 2 3 4
Traversal in reverse direction
4 3 2 1
Time Complexity:
Insertion at the end of a doubly circular linked list takes O(1) time
complexity, since we keep a pointer to the last node.
Traversing the entire doubly circular linked list takes O(n) time complexity,
where n is the number of nodes in the linked list.
Therefore, the overall time complexity of the program is O(n).
Auxiliary space:
The program uses a constant amount of auxiliary space, i.e., O(1), to create
and traverse the doubly circular linked list. The space required to store the
linked list grows linearly with the number of nodes in the linked list.
Therefore, the overall auxiliary space complexity of the program is O(1).
Singly linked list (SLL) vs Doubly linked list (DLL):
o SLL nodes contain 2 fields (a data field and a next link field), whereas DLL
nodes contain 3 fields (a data field, a previous link field, and a next link field).
o The SLL occupies less memory than the DLL, as it has only 2 fields; the DLL
occupies more memory than the SLL, as it has 3 fields.
o We mostly prefer a singly linked list for implementing stacks; a doubly linked
list can be used to implement stacks as well as heaps and binary trees.
Input : [{}{}(]
Output : Unbalanced
Approach #1: Using a stack. One approach to checking balanced parentheses is to
use a stack. Each time an opening parenthesis is encountered, push it onto the
stack; when a closing parenthesis is encountered, match it with the top of the
stack and pop it. If the stack is empty at the end, return Balanced; otherwise,
Unbalanced.
# Python3 code to Check for
# balanced parentheses in an expression
open_list = ["[", "{", "("]
close_list = ["]", "}", ")"]

# function to check whether the brackets in myStr are balanced
def check(myStr):
    stack = []
    for i in myStr:
        if i in open_list:
            stack.append(i)
        elif i in close_list:
            pos = close_list.index(i)
            if len(stack) > 0 and open_list[pos] == stack[-1]:
                stack.pop()
            else:
                return "Unbalanced"
    if len(stack) == 0:
        return "Balanced"
    else:
        return "Unbalanced"

# Driver code
string = "{[]{()}}"
print(string, "-", check(string))
string = "[{}{})(]"
print(string, "-", check(string))
string = "((()"
print(string, "-", check(string))
Output:
{[]{()}} - Balanced
[{}{})(] - Unbalanced
((() - Unbalanced
Time Complexity: O(n), where n is the length of the string. We iterate through the string once, performing constant-time stack operations for each character.
Auxiliary Space: O(n), since in the worst case the stack can grow to hold every character of the string.
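The same check is often written with a dictionary mapping each closing bracket to its opening bracket. This is an equivalent variant of the stack approach, sketched here for comparison (the function name is made up for illustration):

```python
def check_balanced(s):
    pairs = {")": "(", "]": "[", "}": "{"}   # closing -> opening
    stack = []
    for ch in s:
        if ch in pairs.values():             # opening bracket: push it
            stack.append(ch)
        elif ch in pairs:                    # closing bracket: must match top
            if not stack or stack.pop() != pairs[ch]:
                return "Unbalanced"
    return "Balanced" if not stack else "Unbalanced"

print(check_balanced("{[]{()}}"))  # Balanced
print(check_balanced("[{}{})(]"))  # Unbalanced
```

The dictionary lookup replaces the index arithmetic between the two bracket lists, but the time and space complexity are unchanged: O(n) and O(n).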
Evaluation of postfix expression using stack in
Python
Unlike an infix expression, a postfix expression contains no parentheses; it consists of only two kinds of tokens, operators and operands. Using a stack, we can easily evaluate a postfix expression, and there are only two scenarios to handle. We scan the string from left to right. If we encounter an operand, we simply push it onto the stack. If we encounter an operator, we pop two elements from the stack, apply the current operator to them, and push the result back onto the stack.
1. Scan the postfix expression from left to right.
2. For each token i in the expression:
   If i is an operand, push i onto the stack.
   Else (i is an operator), pop two operands from the stack, apply i to them, and push the result back onto the stack.
3. Finally, pop out the result from the stack.
Evaluate the postfix expression 234+*6- using a stack in Python:
class evaluate_postfix:
    def __init__(self):
        self.items = []
        self.size = -1

    def isEmpty(self):
        return self.items == []

    def push(self, item):
        self.items.append(item)
        self.size += 1

    def pop(self):
        if self.isEmpty():
            return 0
        else:
            self.size -= 1
            return self.items.pop()

    def seek(self):
        if self.isEmpty():
            return False
        else:
            return self.items[self.size]

    def evalute(self, expr):
        for i in expr:
            if i in '0123456789':
                self.push(i)
            else:
                op1 = self.pop()
                op2 = self.pop()
                result = self.cal(op2, op1, i)
                self.push(result)
        return self.pop()

    def cal(self, op2, op1, i):
        if i == '*':
            return int(op2) * int(op1)
        elif i == '/':
            return int(op2) / int(op1)
        elif i == '+':
            return int(op2) + int(op1)
        elif i == '-':
            return int(op2) - int(op1)

expr = "234+*6-"
s = evaluate_postfix()
value = s.evalute(expr)
print(value)
OUTPUT
8
Recursion
Recursion is a process in which a function calls itself, directly or indirectly.
Advantages of using recursion
A complicated problem can be broken down into smaller sub-problems using recursion.
Sequence creation is simpler through recursion than through nested iteration.
Recursive functions make the code look simple and effective.
Disadvantages of using recursion
Recursive calls consume a lot of memory and time, which makes recursion expensive to use.
Recursive functions are challenging to debug.
The reasoning behind recursion can sometimes be tough to think through.
Syntax:
def func(): <--
|
| (recursive call)
|
func() ----
# Recursive function
def recursive_fibonacci(n):
    if n <= 1:
        return n
    else:
        return recursive_fibonacci(n-1) + recursive_fibonacci(n-2)

n_terms = 10
# check if the number of terms is valid
if n_terms <= 0:
    print("Invalid input! Please enter a positive integer.")
else:
    print("Fibonacci series:")
    for i in range(n_terms):
        print(recursive_fibonacci(i))
Program to print the factorial of a number recursively

# Recursive function
def recursive_factorial(n):
    if n == 1:
        return n
    else:
        return n * recursive_factorial(n-1)

# user input
num = 6

# check if the input is valid or not
if num < 0:
    print("Invalid input! Please enter a positive number.")
elif num == 0:
    print("Factorial of number 0 is 1")
else:
    print("Factorial of number", num, "=", recursive_factorial(num))
What is Tail-Recursion?
Tail recursion is a special form of recursion in which the last action of a function is the recursive call. Because nothing remains to be done after the call returns, the recursion can be optimized away: the call is performed in the current stack frame and its output is returned directly, instead of a new stack frame being created. A compiler can optimize tail recursion, which makes tail-recursive functions more efficient than non-tail-recursive ones.
Is it possible to optimize a program by making use of a tail-recursive
function instead of non-tail recursive function?
Consider the function given below, which calculates the factorial of n. It may look tail-recursive at first, but it is actually a non-tail-recursive function. If we observe closely, we can see that the value returned by Recur_facto(n-1) is used in Recur_facto(n) (it is multiplied by n), so the call to Recur_facto(n-1) is not the last thing done by Recur_facto(n).

def Recur_facto(n):
    if (n == 0):
        return 1
    return n * Recur_facto(n-1)
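A tail-recursive rewrite carries the running product in an accumulator parameter, so the recursive call becomes the final action. This is a sketch (note that CPython itself does not perform tail-call optimization, so the benefit applies in languages whose compilers do):

```python
def fact_tail(n, accumulator=1):
    # Nothing is left to do after the recursive call returns --
    # its result is returned as-is, which is what makes this tail recursion.
    if n == 0:
        return accumulator
    return fact_tail(n - 1, accumulator * n)

print(fact_tail(5))  # 120
```

Compare with Recur_facto above, where the result of the recursive call still has to be multiplied by n after it returns.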
Tower of Hanoi is a mathematical puzzle where we have three rods and n disks.
The objective of the puzzle is to move the entire stack to another rod, obeying
the following simple rules:
1) Only one disk can be moved at a time.
2) Each move consists of taking the upper disk from one of the stacks and
placing it on top of another stack i.e. a disk can only be moved if it is the
uppermost disk on a stack.
3) No disk may be placed on top of a smaller disk.
Note: Transferring the top n-1 disks from source rod to Auxiliary rod can again
be thought of as a fresh problem and can be solved in the same manner.
Python3
# Recursive Python function to solve the Tower of Hanoi
def TowerOfHanoi(n, source, destination, auxiliary):
    if n == 1:
        print("Move disk 1 from source", source, "to destination", destination)
        return
    TowerOfHanoi(n-1, source, auxiliary, destination)
    print("Move disk", n, "from source", source, "to destination", destination)
    TowerOfHanoi(n-1, auxiliary, destination, source)

# Driver code
n = 4
TowerOfHanoi(n, 'A', 'B', 'C')  # A = source, B = destination, C = auxiliary
Output
Move disk 1 from source A to destination C
Move disk 2 from source A to destination B
Move disk 1 from source C to destination B
Move disk 3 from source A to destination C
Move disk 1 from source B to destination A
Move disk 2 from source B to destination C
Move disk 1 from source A to destination C
Move disk 4 from source A to destination B
Move disk 1 from source C to destination B
Move disk 2 from source C to destination A
Move disk 1 from source B to destination A
Move disk 3 from source C to destination B
Move disk 1 from source A to destination C
Move disk 2 from source A to destination B
Move disk 1 from source C to destination B
Time Complexity: O(2^n)
Auxiliary Space: O(n)
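The exponential O(2^n) time bound follows from the recurrence T(n) = 2T(n-1) + 1 with T(1) = 1, which solves to T(n) = 2^n - 1 moves. A quick counting function (written here just to verify the bound) confirms this:

```python
def hanoi_moves(n):
    """Count the moves made by the recursive Tower of Hanoi solution."""
    if n == 1:
        return 1
    # Move n-1 disks aside, move the largest disk, move n-1 disks back on top.
    return 2 * hanoi_moves(n - 1) + 1

for n in range(1, 6):
    print(n, hanoi_moves(n), 2**n - 1)  # the two counts agree: 2^n - 1
```

For n = 4 this gives 15 moves, matching the 15 lines of output shown above.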
A queue is a linear data structure that stores items in First In First Out (FIFO) order. With a queue, the least recently added item is removed first. A good example of a queue is any line of consumers waiting for a resource, where the consumer that came first is served first.
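The FIFO behavior is easy to demonstrate with Python's collections.deque, which provides O(1) appends and pops at both ends:

```python
from collections import deque

queue = deque()
queue.append("a")   # enqueue: "a" arrived first
queue.append("b")
queue.append("c")

print(queue.popleft())  # "a" -- the least recently added item leaves first
print(queue.popleft())  # "b"
print(list(queue))      # ["c"] remains
```

Using a plain Python list with pop(0) would also work, but each such pop is O(n) because all remaining elements shift, which is why deque is the idiomatic choice for queues.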
Unlike a normal queue, a priority queue retrieves the highest-priority element instead of the next element in arrival order. The priority of individual elements is decided by an ordering applied to their keys.
Priority queues are most useful for handling scheduling problems where some tasks must happen before others based on priority.
For example, an operating system's task scheduler is a good illustration of a priority queue: it gives high-priority tasks precedence over lower-priority ones (such as downloading updates in the background), allowing the highest-priority tasks to run first.
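Python's heapq module maintains a min-heap, so a priority queue can be sketched by pushing (priority, task) pairs; the entry with the smallest priority value is retrieved first. The task names below are made up purely for illustration:

```python
import heapq

tasks = []
heapq.heappush(tasks, (3, "download updates"))       # low priority
heapq.heappush(tasks, (1, "handle keyboard input"))  # high priority
heapq.heappush(tasks, (2, "refresh display"))

# Tasks come out in priority order, not insertion order.
while tasks:
    priority, task = heapq.heappop(tasks)
    print(priority, task)
```

Because tuples compare element by element, the integer priority decides the order; ties would fall through to comparing the task strings, which is something to keep in mind in real schedulers.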