
Define Data structure?

Data structures are used for the arrangement of data in memory. They are responsible for organizing, processing, accessing, and storing data efficiently.

How Data Structure differs/varies from Data Type:

Data Type: A data type is the form of a variable to which a value can be assigned; it defines that the variable may hold values of that type only.
Data Structure: A data structure is a collection of different kinds of data; that entire data can be represented using one object and used throughout the program.

Data Type: It can hold a value but not data; therefore, it is dataless.
Data Structure: It can hold multiple types of data within a single object.

Data Type: The implementation of a data type is known as abstract implementation.
Data Structure: Data structure implementation is known as concrete implementation.

Data Type: There is no time complexity in the case of data types.
Data Structure: In data structure objects, time complexity plays an important role.

Data Type: The value of the data is not stored, because a data type only represents the type of data that can be stored.
Data Structure: The data and its value acquire space in the computer's main memory; a data structure can also hold different kinds of data within one object.

Data Type: Examples are int, float, double, etc.
Data Structure: Examples are stack, queue, tree, etc.
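The contrast above can be seen directly in a couple of lines of Python (a minimal sketch; the variable names are illustrative):

```python
# A data type holds a single value of one kind.
count = 42            # int: just the value 42, nothing else
print(type(count))    # <class 'int'>

# A data structure groups many values, possibly of different types,
# in one object that can be passed around the program.
record = [42, 3.14, "cake", True]   # a list mixing int, float, str, bool
print(type(record))   # <class 'list'>
print(len(record))    # 4 values inside a single object
```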
3. Explain Classification of Data Structure:

Primitive Data Structures


These are the most primitive or basic data structures. They are the building blocks for data manipulation
and contain pure, simple values of data. Python has four primitive variable types:

Integers
Float
Strings
Boolean
In the next sections, you'll learn more about them!
Integers
You can use an integer to represent numeric data and, more specifically, whole numbers from negative
infinity to infinity, like 4, 5, or -1.
Float
"Float" stands for 'floating-point number'. You can use it for real numbers, usually written with a
decimal figure, such as 1.11 or 3.14.

String
Strings are collections of alphabets, words, or other characters. In Python, you can create strings by
enclosing a sequence of characters within a pair of single or double quotes. For
example: 'cake', "cookie", etc.

You can also apply the + operation on two or more strings to concatenate them, just like in the example
below:
x = 'Cake'

y = 'Cookie'

x + '&' + y

'Cake&Cookie'


Here are some other basic operations that you can perform with strings. For example, you can use * to
repeat a string a certain number of times:
# Repeat
x = 'Cake'

x*2

'CakeCake'
You can also slice strings, which means that you select parts of strings:
# Range slicing
x = 'Cake'

z1 = x[2:]

print(z1)
ke
# Slicing
y = 'Cookie'

z2 = y[0] + y[1]

print(z2)
Co

Note that strings can also be alpha-numeric characters, but that the + operation still is used to concatenate
strings.
x = '4'

y = '2'

x+y
'42'

Python has many built-in methods or helper functions to manipulate strings. Replacing a substring,
capitalising certain words in a paragraph, finding the position of a string within another string are some
common string manipulations. Check out some of these:

Capitalize strings
'cookie'.capitalize()
'Cookie'

Retrieve the length of a string in characters. Note that the spaces also count towards
the final result:
str1 = "Cake 4 U"

str2 = "404"

len(str1)
8

Check whether a string consists of only digits


str1.isdigit()

False

str2.isdigit()

True

Replace parts of strings with other strings


str1.replace('4 U', str2)

'Cake 404'

Find substrings in other strings; find() returns the lowest index or position within the string
at which the substring is found:
str1 = 'cookie'

str2 = 'cook'

str1.find(str2)

0

The substring 'cook' is found at the start of 'cookie'. As a result, you get back the position within
'cookie' at which that substring starts. In this case, 0 is returned because you start counting positions from 0!


str1 = 'I got you a cookie'

str2 = 'cook'

str1.find(str2)

12

Similarly, the substring 'cook' is found at position 12 within 'I got you a cookie'. Remember that you
start counting from 0 and that spaces count towards the positions!
You can find an exhaustive list of string methods in the Python documentation.
Boolean
This built-in data type can take up the values: True and False, which often makes them interchangeable
with the integers 1 and 0. Booleans are useful in conditional and comparison expressions, just like in the
following examples:
x=4

y=2

x == y

False

x>y

True

x = 4

y = 2

z = (x == y)  # Comparison expression (evaluates to False)

if z:  # Conditional on the truth value of 'z'
    print("Cookie")
else:
    print("No Cookie")
No Cookie

Data Type Conversion


Sometimes, you will find yourself working on someone else's code, and you'll need to convert an integer
to a float or vice versa, for example. Or maybe you find out that you have been using an integer when
what you really need is a float. In such cases, you can convert the data type of variables!
To check the type of an object in Python, use the built-in type() function, just like in the lines of code
below:
i = 4.0

type(i)
float

When you change the type of an entity from one data type to another, this is called "typecasting". There
can be two kinds of data conversions possible: implicit termed as coercion and explicit, often referred to
as casting.
Implicit Data Type Conversion
This is an automatic data conversion, and the Python interpreter handles it for you. Take a look at the following
examples:
# A float
x = 4.0

# An integer
y=2

# Divide `x` by `y`


z = x/y

# Check the type of `z`


type(z)
float

In the example above, you did not have to explicitly change the data type of y to perform the float
division. The interpreter did this for you implicitly.
Explicit Data Type Conversion
This type of data type conversion is user-defined, which means you have to explicitly tell the interpreter
to change the data type of certain entities. Consider the code chunk below to fully understand this:
x=2

y = "The Godfather: Part "

fav_movie = y + x

TypeError                                 Traceback (most recent call last)

<ipython-input-51-b8fe90df9e0e> in <module>()
      1 x = 2
      2 y = "The Godfather: Part "
----> 3 fav_movie = y + x

TypeError: Can't convert 'int' object to str implicitly


The above example gave you an error because the interpreter cannot tell whether you are trying to
perform concatenation or addition when the data types are mixed: you have an integer and a string that
you're trying to add together. There's an obvious mismatch.
To solve this, you'll first need to convert the int to a string to then be able to perform concatenation.

Note that it might not always be possible to convert a data type to another. Some built-in data conversion
functions that you can use here are: int(), float(), and str().

x=2

y = "The Godfather: Part "

fav_movie = y + str(x)

print(fav_movie)
The Godfather: Part 2

Non-Primitive Data Structures


Non-primitive types are the sophisticated members of the data structure family. They don't just store a
value, but rather a collection of values in various formats.
In the traditional computer science world, the non-primitive data structures are divided into:

Arrays
Lists
Files
Array
First off, arrays in Python are a compact way of collecting basic data types: all the entries in an array must
be of the same data type. However, arrays are not all that popular in Python, unlike in other
programming languages such as C++ or Java.
In general, when people talk of arrays in Python, they are actually referring to lists. However, there is a
fundamental difference between them, and you will see this in a bit. For Python, arrays can be seen as a
more efficient way of storing a certain kind of list: a list whose elements all share the same data type.
In Python, arrays are supported by the array module and need to be imported before you start initializing
and using them. The elements stored in an array are constrained in their data type. The data type is
specified during array creation using a type code, which is a single character like
the "I" you see in the example below:

import array as arr


a = arr.array("I",[3,6,9])
type(a)
array.array
List
Lists in Python are used to store collections of heterogeneous items. These are mutable, which means that
you can change their content without changing their identity. You can recognize lists by their square
brackets [ and ] that hold elements separated by a comma ,. Lists are built into Python: you do not need
to invoke them separately.
x = [] # Empty list

type(x)
list

x1 = [1,2,3]

type(x1)
list

x2 = list([1,'apple',3])

type(x2)
list

print(x2[1])
apple
x2[1] = 'orange'

print(x2)
[1, 'orange', 3]
Note: as you have seen in the above example with x1, lists can also hold homogeneous items and hence
satisfy the storage functionality of an array. This is fine unless you want to apply some specific operations
to this collection.
Python provides many methods to manipulate and work with lists. Adding new items to a list, removing
some items from a list, and sorting or reversing a list are common list manipulations. Let's see some of
them in action:

Add 11 to the list_num list with append(). By default, this number will be added to the
end of the list.
list_num = [1,2,45,6,7,2,90,23,435]

list_char = ['c','o','o','k','i','e']

list_num.append(11) # Add 11 to the list, by default adds to the last position

print(list_num)
[1, 2, 45, 6, 7, 2, 90, 23, 435, 11]

Use insert() to insert 11 at index or position 0 in the list_num list


list_num.insert(0, 11)

print(list_num)
[11, 1, 2, 45, 6, 7, 2, 90, 23, 435, 11]

Remove the first occurrence of 'o' from list_char with the help of remove()
list_char.remove('o')

print(list_char)
['c', 'o', 'k', 'i', 'e']

Remove the item at index -2 from list_char


list_char.pop(-2) # Removes the item at the specified position

print(list_char)
['c', 'o', 'k', 'e']
list_num.sort() # In-place sorting
print(list_num)

[1, 2, 2, 6, 7, 11, 11, 23, 45, 90, 435]


list_num.reverse() # In-place reversal
print(list_num)
[435, 90, 45, 23, 11, 11, 7, 6, 2, 2, 1]
Stacks
A stack is a container of objects that are inserted and removed according to the Last-In-First-Out (LIFO)
concept. Think of a dinner party where there is a stack of plates: plates are always
added to or removed from the top of the pile. In computer science, this concept is used for evaluating
expressions and parsing syntax, scheduling algorithms/routines, etc.
Stacks can be implemented using lists in Python. When you add an element to a stack, it is known as a push
operation, whereas removing or deleting an element is called a pop operation. Note that you
actually have a pop() method at your disposal when you're working with stacks in Python:

# Bottom -> 1 -> 2 -> 3 -> 4 -> 5 (Top)


stack = [1,2,3,4,5]

stack.append(6) # Bottom -> 1 -> 2 -> 3 -> 4 -> 5 -> 6 (Top)

print(stack)
[1, 2, 3, 4, 5, 6]
stack.pop() # Bottom -> 1 -> 2 -> 3 -> 4 -> 5 (Top)

stack.pop() # Bottom -> 1 -> 2 -> 3 -> 4 (Top)

print(stack)
[1, 2, 3, 4]
Queue
A queue is a container of objects that are inserted and removed according to the First-In-First-Out (FIFO)
principle. An excellent example of a queue in the real world is the line at a ticket counter where people
are catered to according to their arrival sequence, and hence the person who arrives first is also the first to
leave. Queues can be of many different kinds.
Lists are not efficient for implementing a queue: although append() and pop() at the end of a list are
fast, inserting at or deleting from the beginning of a list is slow, because it requires shifting all of the
other elements by one position.
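For an efficient queue, the standard library's collections.deque provides fast appends and pops at both ends. A minimal sketch (the enqueued names are just sample data):

```python
from collections import deque

queue = deque()          # empty queue
queue.append("Amar")     # enqueue at the rear
queue.append("Akbar")
queue.append("Anthony")

first = queue.popleft()  # dequeue from the front: FIFO order
print(first)             # Amar
print(list(queue))       # ['Akbar', 'Anthony']
```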
Graphs
A graph in mathematics and computer science is a network consisting of nodes, also called vertices,
which may or may not be connected to each other. The line or path that connects two nodes is called
an edge. If the edge has a particular direction of flow, the graph is a directed graph, and the directed
edge is called an arc. If no directions are specified, the graph is called an undirected graph.
This may sound all very theoretical and can get rather complex when you dig deeper. However, graphs
are an important concept, especially in data science, and are often used to model real-life problems. Social
networks, molecular studies in chemistry and biology, maps, and recommender systems all rely on graph
and graph theory principles.
Here, you will find a simple graph implementation using a Python Dictionary to help you get started:
graph = { "a" : ["c", "d"],
          "b" : ["d", "e"],
          "c" : ["a", "e"],
          "d" : ["a", "b"],
          "e" : ["b", "c"] }

def define_edges(graph):
    edges = []
    for vertex in graph:
        for neighbour in graph[vertex]:
            edges.append((vertex, neighbour))
    return edges

print(define_edges(graph))
[('a', 'c'), ('a', 'd'), ('b', 'd'), ('b', 'e'), ('c', 'a'), ('c', 'e'), ('d', 'a'), ('d', 'b'), ('e', 'b'), ('e', 'c')]
You can do some cool stuff with graphs, such as trying to find whether there exists a path between two nodes,
finding the shortest path between two nodes, or determining cycles in the graph.
The famous "traveling salesman problem" is, in fact, about finding the shortest possible route that visits
every node exactly once and returns to the starting point. Sometimes the nodes or arcs of a graph are
assigned weights or costs; you can think of this as assigning a difficulty level to a walk, where you are
interested in finding the cheapest or the easiest path.
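Checking whether a path exists between two nodes can be sketched with a simple breadth-first search over the dictionary representation used above (the function name path_exists is illustrative, not a standard API):

```python
from collections import deque

graph = {"a": ["c", "d"],
         "b": ["d", "e"],
         "c": ["a", "e"],
         "d": ["a", "b"],
         "e": ["b", "c"]}

def path_exists(graph, start, goal):
    """Breadth-first search: True if goal is reachable from start."""
    visited = set()
    frontier = deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        if node not in visited:
            visited.add(node)
            frontier.extend(graph.get(node, []))
    return False

print(path_exists(graph, "a", "e"))  # True
```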
Trees
A tree in the real world is a living being with its roots in the ground and the branches that hold the leaves,
and fruit out in the open. The branches of the tree spread out in a somewhat organized way. In computer
science, trees are used to describe how data is sometimes organized, except that the root is on the top and
the branches, leaves follow, spreading towards the bottom and the tree is drawn inverted compared to the
real tree.
To introduce a little more notation, the root is always at the top of the tree. Keeping the tree metaphor, the
other nodes that follow are called the branches with the final node in each branch being called leaves.
You can imagine each branch as being a smaller tree in itself. The root is often called the parent and the
nodes that it refers to below it called its children. The nodes with the same parent are called siblings. Do
you see why this is also called a family tree?
Trees help in modeling real-world scenarios and are used everywhere, from the gaming world to designing
XML parsers; the PDF design principle is also based on trees. In data science, 'Decision Tree based
Learning' actually forms a large area of research. Numerous famous methods, like bagging and boosting,
use the tree model to generate a predictive model. Games like chess build a huge tree with all possible
moves to analyse and apply heuristics to decide on an optimal move.
You can implement a tree structure using and combining the various data structures you have seen so far
in this tutorial. However, for the sake of simplicity, this topic will be tackled in another post.

class Tree:
    def __init__(self, info, left=None, right=None):
        self.info = info
        self.left = left
        self.right = right

    def __str__(self):
        return (str(self.info) + ', Left child: ' + str(self.left) +
                ', Right child: ' + str(self.right))

tree = Tree(1, Tree(2, 2.1, 2.2), Tree(3, 3.1))

print(tree)
1, Left child: 2, Left child: 2.1, Right child: 2.2, Right child: 3, Left child: 3.1, Right child: None
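To make the parent/child picture concrete, here is a hedged sketch of an in-order traversal over the Tree class above (the helper name in_order is illustrative; note that the leaves in this example were given as plain numbers rather than Tree nodes):

```python
class Tree:
    def __init__(self, info, left=None, right=None):
        self.info = info
        self.left = left
        self.right = right

def in_order(node, out):
    """Visit the left subtree, then the node itself, then the right subtree."""
    if node is None:
        return out
    if isinstance(node, Tree):
        in_order(node.left, out)
        out.append(node.info)
        in_order(node.right, out)
    else:
        out.append(node)   # a leaf stored as a plain number
    return out

tree = Tree(1, Tree(2, 2.1, 2.2), Tree(3, 3.1))
print(in_order(tree, []))  # [2.1, 2, 2.2, 1, 3.1, 3]
```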
You have learned about arrays and also seen the list data structure. However, Python provides many
other data collection mechanisms, and although they might not be included in traditional data
structure topics in computer science, they are worth knowing, especially with regard to the Python
programming language:

Tuples
Dictionary
Sets
Tuples
Tuples are another standard sequence data type. The difference between tuples and lists is that tuples are
immutable, which means that once defined you cannot delete, add, or edit any values inside them. This might be
useful in situations where you want to pass control to someone else but do not want them to
manipulate the data in your collection; rather, they may just see it or perform operations on a separate
copy of the data.
Let's see how tuples are implemented:
x_tuple = 1,2,3,4,5

y_tuple = ('c','a','k','e')

x_tuple[0]

1
y_tuple[3]

x_tuple[0] = 0 # Cannot change values inside a tuple

---------------------------------------------------------------------------

TypeError Traceback (most recent call last)

<ipython-input-74-b5d6da8c1297> in <module>()
1 y_tuple[3]
----> 2 x_tuple[0] = 0 # Cannot change values inside a tuple

TypeError: 'tuple' object does not support item assignment


Dictionary
Dictionaries are exactly what you need if you want to implement something similar to a telephone book.
None of the data structures that you have seen before are suitable for a telephone book.
This is when a dictionary can come in handy. Dictionaries are made up of key-value pairs: the key is used to
identify the item, and the value holds, as the name suggests, the value of the item.

x_dict = {'Edward':1, 'Jorge':2, 'Prem':3, 'Joe':4}

del x_dict['Joe']
x_dict

{'Edward': 1, 'Jorge': 2, 'Prem': 3}


x_dict['Edward'] # Prints the value stored with the key 'Edward'.

1
You can apply many other built-in functionalities on dictionaries:
len(x_dict)
3
x_dict.keys()

dict_keys(['Edward', 'Jorge', 'Prem'])


x_dict.values()
dict_values([1, 2, 3])

This code shows an example of using a Python dictionary to store and access key-value pairs.
First, the code calls the len() function with x_dict as an argument. This returns the number of key-
value pairs in the dictionary, which is 3.
Next, the code calls the keys() method on x_dict. This returns a view object containing the keys in the
dictionary; in this case 'Edward', 'Jorge', and 'Prem', in insertion order, as shown by the output.
Then, the code calls the values() method on x_dict. This returns a view object containing the values
in the dictionary; in this case 1, 2, and 3, respectively, as shown by the output.
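Beyond keys() and values(), the items() method walks the key-value pairs together, and get() looks a key up safely. A small sketch, reusing the same x_dict contents (after 'Joe' has been deleted):

```python
x_dict = {'Edward': 1, 'Jorge': 2, 'Prem': 3}

# items() yields (key, value) pairs, in insertion order on Python 3.7+
for name, number in x_dict.items():
    print(name, '->', number)

# get() looks up a key without raising KeyError when it is missing
print(x_dict.get('Joe', 'not found'))   # not found
```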
Sets
Sets are collections of distinct (unique) objects. They are useful for creating lists that only hold unique
values in a dataset. A set is an unordered but mutable collection, which is very helpful when going
through a huge dataset.
x_set = set('CAKE&COKE')

y_set = set('COOKIE')

print(x_set)
{'A', '&', 'O', 'E', 'C', 'K'}
print(y_set) # Single unique 'O'


{'I', 'O', 'E', 'C', 'K'}
print(x_set - y_set) # All the elements in x_set but not in y_set
{'A', '&'}

print(x_set | y_set) # Unique elements in x_set or y_set or both


{'C', '&', 'E', 'A', 'O', 'K', 'I'}
print(x_set & y_set) # Elements in both x_set and y_set
{'O', 'E', 'K', 'C'}

The code creates two sets: x_set and y_set. Each set is created by passing a string of characters as an
argument to the set() function. In this case, x_set is created from the string 'CAKE&COKE',
while y_set is created from the string 'COOKIE'. Note that sets are unordered, so the exact order in
which their elements are printed may vary from run to run.

Next, the code prints each set using the print() function. The first print() statement prints x_set,
which contains the unique characters in the string 'CAKE&COKE': 'A', '&', 'O', 'E', 'C', and 'K'.
Similarly, the second print() statement prints y_set, which contains the unique characters in the
string 'COOKIE': 'I', 'O', 'E', 'C', and 'K'.

The third print() statement prints the difference x_set - y_set, the set of all elements in x_set that
are not in y_set. Only 'A' and '&' appear in x_set but not in y_set, so the result is {'A', '&'}.

The fourth print() statement prints the union x_set | y_set, the set of unique elements in x_set
or y_set, or both. The output shows that the resulting set contains all of the unique characters from both sets.

Finally, the fifth print() statement prints the intersection x_set & y_set, the set of elements that are in
both x_set and y_set: 'O', 'E', 'K', and 'C'.
Files
Files are traditionally a part of data structures. And although big data is commonplace in the data science
industry, a programming language without the capability to store and retrieve previously stored
information would hardly be useful. You still have to make use of all the data sitting in files across
databases, and you will learn how to do this.
The syntax to read and write files in Python is similar to other programming languages but a lot easier to
handle. Here are some of the basic functions that will help you to work with files using Python:
open() to open files in your system, where the filename is the name of the file to be opened;
read() to read entire files;
readline() to read one line at a time;
write() to write a string to a file, and return the number of characters written; and
close() to close the file.
# File modes (2nd argument): 'r' (read), 'w' (write), 'a' (append), 'r+' (read and write)
f = open('file_name', 'w')

# Writes the string to the file, returning the number of characters written
f.write('Add this line.')

f.close()

# Reopen the file in read mode before reading from it
f = open('file_name', 'r')

# Reads the entire file
f.read()

# Reads one line at a time
f.readline()

f.close()
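In practice, the idiomatic way to manage the close() call is the with statement, which closes the file automatically even if an error occurs. A short sketch ('file_name' is a placeholder path):

```python
# Write, then read back, letting the context manager handle close()
with open('file_name', 'w') as f:
    f.write('Add this line.')

with open('file_name', 'r') as f:
    contents = f.read()

print(contents)   # Add this line.
```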

Explain classification of linear and non linear Data Structure.


Linear data structure: Data structure in which data elements are arranged
sequentially or linearly, where each element is attached to its previous and next
adjacent elements, is called a linear data structure.
Examples of linear data structures are array, stack, queue, linked list, etc.
Static data structure: Static data structure has a fixed memory size. It is easier
to access the elements in a static data structure.
An example of this data structure is an array.
Dynamic data structure: In the dynamic data structure, the size is not fixed. It
can be randomly updated during the runtime which may be considered efficient
concerning the memory (space) complexity of the code.
Examples of this data structure are queue, stack, etc.
Non-linear data structure: Data structures where data elements are not
placed sequentially or linearly are called non-linear data structures. In a non-
linear data structure, we can't traverse all the elements in a single run.
Examples of non-linear data structures are trees and graphs.
Need Of Data structure :

The structure of the data and the synthesis of the algorithm are relative to each
other. Data presentation must be easy to understand so the developer, as well
as the user, can make an efficient implementation of the operation.
Data structures provide an easy way of organizing, retrieving, managing, and
storing data.
Here is a list of the needs for data structures:
Data structure modification is easy.
It requires less time.
It saves storage memory space.
Data representation is easy.
It allows easy access to large databases.

Arrays:

An array is a linear data structure: a collection of items stored at
contiguous memory locations. The idea is to store multiple items of the same
type together in one place. It allows the processing of a large amount of data in
a relatively short period. The first element of the array is indexed by a subscript
of 0. There are different operations possible on an array, like searching, sorting,
inserting, traversing, reversing, and deleting.

Characteristics of an Array:
An array has various characteristics which are as follows:
Arrays use an index-based data structure which helps to identify each of the
elements in an array easily using the index.
If a user wants to store multiple values of the same data type, then the array
can be utilized efficiently.
An array can also handle complex data structures by storing data in a two-
dimensional array.
An array is also used to implement other data structures like Stacks, Queues,
Heaps, Hash tables, etc.
The search process in an array can be done very easily.
Operations performed on array:

Initialization: An array can be initialized with values at the time of declaration
or later using an assignment statement.
Accessing elements: Elements in an array can be accessed by their index,
which starts from 0 and goes up to the size of the array minus one.
Searching for elements: Arrays can be searched for a specific element using
linear search or binary search algorithms.
Sorting elements: Elements in an array can be sorted in ascending or
descending order using algorithms like bubble sort, insertion sort, or quick sort.
Inserting elements: Elements can be inserted into an array at a specific
location, but this operation can be time-consuming because it requires shifting
existing elements in the array.
Deleting elements: Elements can be deleted from an array by shifting the
elements that come after it to fill the gap.
Updating elements: Elements in an array can be updated or modified by
assigning a new value to a specific index.
Traversing elements: The elements in an array can be traversed in order,
visiting each element once.
These are some of the most common operations performed on arrays. The
specific operations and algorithms used may vary based on the requirements of
the problem and the programming language used.
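In Python, most of the operations listed above can be sketched on a plain list (a minimal sketch; the variable name nums is illustrative):

```python
nums = [7, 2, 9, 4]        # initialization

print(nums[0])             # accessing: the first element is at index 0
print(nums.index(9))       # searching: linear search for 9 returns its index

nums.sort()                # sorting in ascending order -> [2, 4, 7, 9]
nums.insert(1, 3)          # inserting 3 at index 1 (shifts later elements)
nums.remove(9)             # deleting the first occurrence of 9
nums[0] = 1                # updating the element at index 0

for n in nums:             # traversing: visit each element once
    print(n)
print(nums)                # [1, 3, 4, 7]
```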
Applications of Array:
Different applications of an array are as follows:
An array is used in solving matrix problems.
Database records are also implemented by an array.
It helps in implementing a sorting algorithm.
It is also used to implement other data structures like Stacks, Queues, Heaps,
Hash tables, etc.
An array can be used for CPU scheduling.
Can be applied as a lookup table in computers.
Arrays can be used in speech processing where every speech signal is an
array.
The screen of the computer is also displayed by an array. Here we use a
multidimensional array.
The array is used in many management systems like a library, students,
parliament, etc.
The array is used in the online ticket booking system.
Contacts on a cell phone are displayed by an array.
In games like online chess, the player can store past as well as current moves
in an array; it indicates a hint of position.
To save images of a specific dimension in Android, like 360x1200.
Real-Life Applications of Array:
An array is frequently used to store data for mathematical computations.
It is used in image processing.
It is also used in record management.
Book pages are also real-life examples of an array.
It is used in ordering boxes as well.
Explain Linked list in detail.:

A linked list is a linear data structure in which elements are not stored at
contiguous memory locations. The elements in a linked list are linked using
pointers.
Types of linked lists:
Singly-linked list
Doubly linked list
Circular linked list
Doubly circular linked list

Linked List

Characteristics of a Linked list:


A linked list uses extra memory to store links.
During the initialization of the linked list, there is no need to know the size of the
elements.
Linked lists are used to implement stacks, queues, graphs, etc.
The first node of the linked list is called the Head.
The next pointer of the last node always points to NULL.
In a linked list, insertion and deletion are possible easily.
Each node of the linked list consists of a pointer/link which is the address of the
next node.
Linked lists can shrink or grow at any point in time easily.
Operations performed on Linked list:
A linked list is a linear data structure where each node contains a value and a
reference to the next node. Here are some common operations performed on
linked lists:
Initialization: A linked list can be initialized by creating a head node with a
reference to the first node. Each subsequent node contains a value and a
reference to the next node.
Inserting elements: Elements can be inserted at the head, tail, or at a specific
position in the linked list.
Deleting elements: Elements can be deleted from the linked list by updating
the reference of the previous node to point to the next node, effectively
removing the current node from the list.
Searching for elements: Linked lists can be searched for a specific element by
starting from the head node and following the references to the next nodes until
the desired element is found.
Updating elements: Elements in a linked list can be updated by modifying the
value of a specific node.
Traversing elements: The elements in a linked list can be traversed by starting
from the head node and following the references to the next nodes until the end
of the list is reached.
Reversing a linked list: The linked list can be reversed by updating the
references of each node so that they point to the previous node instead of the
next node.
These are some of the most common operations performed on linked lists. The
specific operations and algorithms used may vary based on the requirements of
the problem and the programming language used.
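The operations above can be sketched with a minimal singly linked list in Python (the class and method names here are illustrative, not a standard library API):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None          # the last node's next stays None

class LinkedList:
    def __init__(self):
        self.head = None          # empty list: head references nothing

    def insert_at_head(self, value):
        """Insert a new node at the front of the list."""
        node = Node(value)
        node.next = self.head
        self.head = node

    def delete(self, value):
        """Unlink the first node holding value, if any."""
        prev, cur = None, self.head
        while cur:
            if cur.value == value:
                if prev:
                    prev.next = cur.next   # bypass the current node
                else:
                    self.head = cur.next   # deleting the head itself
                return True
            prev, cur = cur, cur.next
        return False

    def to_list(self):
        """Traverse from the head to the end, collecting values."""
        out, cur = [], self.head
        while cur:
            out.append(cur.value)
            cur = cur.next
        return out

ll = LinkedList()
for v in [3, 2, 1]:
    ll.insert_at_head(v)
print(ll.to_list())   # [1, 2, 3]
ll.delete(2)
print(ll.to_list())   # [1, 3]
```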
Applications of the Linked list:
Different applications of linked lists are as follows:
Linked lists are used to implement stacks, queues, graphs, etc.
Linked lists are used to perform arithmetic operations on long integers.
It is used for the representation of sparse matrices.
It is used in the linked allocation of files.
It helps in memory management.
It is used in the representation of Polynomial Manipulation where each
polynomial term represents a node in the linked list.
Linked lists are used to display image containers. Users can visit past, current,
and next images.
They are used to store the history of the visited page.
They are used to perform undo operations.
Linked lists are used in software development, where they can indicate the correct syntax
of a tag.
Linked lists are used to display social media feeds.
Real-Life Applications of a Linked list:
A linked list is used in Round-Robin scheduling to keep track of the turn in
multiplayer games.
It is used in image viewer. The previous and next images are linked, and hence
can be accessed by the previous and next buttons.
In a music playlist, songs are linked to the previous and next songs.
Stack:
Stack is a linear data structure that follows a particular order in which the
operations are performed. The order is LIFO (Last In, First Out). Data is
inserted and removed from only one end, called the top; these operations are
known as push and pop. Several classic problems are solved with a stack, such
as reversing a stack using recursion, sorting a stack, and deleting the middle
element of a stack.
Characteristics of a Stack:
A stack has the following characteristics:
Stack is used in many different algorithms like Tower of Hanoi, tree traversal,
recursion, etc.
Stack is implemented through an array or linked list.
It follows the Last In First Out principle, i.e., the element that is inserted
last is popped first, and the element inserted first is popped last.
The insertion and deletion are performed at one end i.e. from the top of the
stack.
In stack, if the allocated space for the stack is full, and still anyone attempts to
add more elements, it will lead to stack overflow.
Applications of Stack:
Different applications of Stack are as follows:
The stack data structure is used in the evaluation and conversion of arithmetic
expressions.
It is used for parenthesis checking.
While reversing a string, the stack is used as well.
Stack is used in memory management.
It is also used for processing function calls.
The stack is used to convert expressions from infix to postfix.
The stack is used to perform undo as well as redo operations in word
processors.
The stack is used in virtual machines like JVM.
The stack is used in the media players. Useful to play the next and previous
song.
The stack is used in recursion operations.
Operations performed on a stack:
A stack is a linear data structure that implements the Last-In-First-Out (LIFO)
principle. Here are some common operations performed on stacks:
Push: Elements can be pushed onto the top of the stack, adding a new element
to the top of the stack.
Pop: The top element can be removed from the stack by performing a pop
operation, effectively removing the last element that was pushed onto the stack.
Peek: The top element can be inspected without removing it from the stack
using a peek operation.
IsEmpty: A check can be made to determine if the stack is empty.
Size: The number of elements in the stack can be determined using a size
operation.
These are some of the most common operations performed on stacks. The
specific operations and algorithms used may vary based on the requirements of
the problem and the programming language used. Stacks are commonly used
in applications such as evaluating expressions, implementing function call
stacks in computer programs, and many others.
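The stack operations above map directly onto a Python list, since append and pop both work at the same end in O(1) time. The wrapper class below is an illustrative sketch, not a library API.

```python
class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):      # add an element to the top
        self._items.append(item)

    def pop(self):             # remove and return the top element
        if self.is_empty():
            raise IndexError("pop from empty stack")
        return self._items.pop()

    def peek(self):            # inspect the top without removing it
        if self.is_empty():
            raise IndexError("peek from empty stack")
        return self._items[-1]

    def is_empty(self):        # check whether the stack holds no elements
        return not self._items

    def size(self):            # number of elements on the stack
        return len(self._items)

s = Stack()
s.push(10); s.push(20); s.push(30)
print(s.pop())    # 30  (last in, first out)
print(s.peek())   # 20
print(s.size())   # 2
```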
Real-Life Applications of Stack:
A real-life example of a stack is a pile of eating plates arranged one above the
other. When you remove a plate from the pile, you take the plate on top, which
is exactly the plate that was added most recently. If you want the plate at the
bottom of the pile, you must first remove all the plates on top of it.
Browsers use stack data structures to keep track of previously visited sites.
Call log in mobile also uses stack data structure.
Explain Queue:
Queue is a linear data structure that follows a particular order in which the
operations are performed. The order is First In First Out (FIFO), i.e. the data item
stored first will be accessed first. Unlike a stack, insertion and removal happen at
different ends: elements are added at the rear and removed from the front. An
example of a queue is any line of consumers waiting for a resource, where the
consumer that came first is served first. Different operations are performed on a
queue, such as reversing a queue (with or without using recursion) and reversing
the first K elements of a queue. A few basic operations performed on a queue are
enqueue, dequeue, front, rear, etc.
Characteristics of a Queue:
A queue has the following characteristics:
The queue is a FIFO (First In First Out) structure.
To remove the last-inserted element of the queue, all the elements inserted
before it must be removed first.
A queue is an ordered list of elements of similar data types.
Applications of Queue:
Different applications of Queue are as follows:
Queue is used for handling website traffic.
It helps to maintain the playlist in media players.
Queue is used in operating systems for handling interrupts.
It helps in serving requests on a single shared resource, like a printer, CPU task
scheduling, etc.
It is used in the asynchronous transfer of data e.g. pipes, file IO, and sockets.
Queues are used for job scheduling in the operating system.
In social media to upload multiple photos or videos queue is used.
To send an e-mail queue data structure is used.
To handle website traffic at a time queues are used.
In Windows operating system, to switch multiple applications.
Operations performed on a queue:
A queue is a linear data structure that implements the First-In-First-Out (FIFO)
principle. Here are some common operations performed on queues:
Enqueue: Elements can be added to the back of the queue, adding a new
element to the end of the queue.
Dequeue: The front element can be removed from the queue by performing a
dequeue operation, effectively removing the first element that was added to the
queue.
Peek: The front element can be inspected without removing it from the queue
using a peek operation.
IsEmpty: A check can be made to determine if the queue is empty.
Size: The number of elements in the queue can be determined using a size
operation.
These are some of the most common operations performed on queues. The
specific operations and algorithms used may vary based on the requirements of
the problem and the programming language used. Queues are commonly used
in applications such as scheduling tasks, managing communication between
processes, and many others.
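The queue operations above can be sketched with Python's collections.deque, which gives O(1) appends and pops at both ends (a plain list would make dequeue O(n), because every remaining element shifts left).

```python
from collections import deque

q = deque()
q.append("a")        # enqueue at the back
q.append("b")
q.append("c")

print(q[0])          # peek at the front: 'a'
print(q.popleft())   # dequeue: 'a' (first in, first out)
print(len(q))        # size: 2
print(not q)         # is_empty check: False
```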
Real-Life Applications of Queue:
A real-world example of a queue is a single-lane one-way road, where the
vehicle that enters first will exit first.
A more real-world example can be seen in the queue at the ticket windows.
A cashier line in a store is also an example of a queue.
People on an escalator
Tree:
A tree is a non-linear and hierarchical data structure where the elements are
arranged in a tree-like structure. In a tree, the topmost node is called the root
node. Each node contains some data, and data can be of any type. It consists
of a central node, structural nodes, and sub-nodes which are connected via
edges. Different tree data structures allow quicker and easier access to the data
as it is a non-linear data structure. A tree has various terminologies like Node,
Root, Edge, Height of a tree, Degree of a tree, etc.
There are different types of Tree-like
Binary Tree,
Binary Search Tree,
AVL Tree,
B-Tree, etc.
Characteristics of a Tree:
A tree has the following characteristics:
A tree is also known as a Recursive data structure.
In a tree, the height is defined as the number of edges on the longest path from
the root node to a leaf node.
In a tree, one can also calculate the depth from the top to any node. The root
node has a depth of 0.
Applications of Tree:
Different applications of Tree are as follows:
Heap is a tree data structure that is implemented using arrays and used to
implement priority queues.
B-Tree and B+ Tree are used to implement indexing in databases.
Syntax Tree helps in scanning, parsing, generation of code, and evaluation of
arithmetic expressions in Compiler design.
K-D Tree is a space partitioning tree used to organize points in K-dimensional
space.
Spanning trees are used in routers in computer networks.
Operations performed on a tree:
A tree is a non-linear data structure that consists of nodes connected by edges.
Here are some common operations performed on trees:
Insertion: New nodes can be added to the tree to create a new branch or to
increase the height of the tree.
Deletion: Nodes can be removed from the tree by updating the references of
the parent node to remove the reference to the current node.
Search: Elements can be searched for in a tree by starting from the root node
and traversing the tree based on the value of the current node until the desired
node is found.
Traversal: The elements in a tree can be traversed in several different ways,
including in-order, pre-order, and post-order traversal.
Height: The height of the tree can be determined by counting the number of
edges from the root node to the furthest leaf node.
Depth: The depth of a node can be determined by counting the number of
edges from the root node to the current node.
Balancing: The tree can be balanced to ensure that the height of the tree is
minimized and the distribution of nodes is as even as possible.
These are some of the most common operations performed on trees. The
specific operations and algorithms used may vary based on the requirements of
the problem and the programming language used. Trees are commonly used in
applications such as searching, sorting, and storing hierarchical data.
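As a sketch of these operations, here is a minimal binary search tree in Python. The function names (insert, search, inorder, height) are illustrative choices; smaller keys go left and larger keys go right, so an in-order traversal yields the keys in sorted order.

```python
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def insert(root, key):
    """Insert key, descending left for smaller keys and right for larger."""
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Walk down from the root, choosing a side by comparison."""
    while root and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

def inorder(root):
    """Left subtree, node, right subtree: yields keys in sorted order."""
    return inorder(root.left) + [root.key] + inorder(root.right) if root else []

def height(root):
    """Number of edges on the longest root-to-leaf path (-1 for empty tree)."""
    if root is None:
        return -1
    return 1 + max(height(root.left), height(root.right))

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(inorder(root))     # [1, 3, 6, 8, 10]
print(search(root, 6))   # True
print(height(root))      # 2
```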
Real-Life Applications of Tree:
In real life, tree data structure helps in Game Development.
It also helps in indexing in databases.
A Decision Tree is an efficient machine-learning tool, commonly used in
decision analysis. It has a flowchart-like structure that helps to understand data.
Domain Name Server also uses a tree data structure.
The most common use case of a tree is any social networking site.
Explain Graph:
A graph is a non-linear data structure that consists of vertices (or nodes) and
edges. It consists of a finite set of vertices and set of edges that connect a pair
of nodes. The graph is used to solve the most challenging and complex
programming problems. It has different terminologies which are Path, Degree,
Adjacent vertices, Connected components, etc.
Characteristics of Graph:
A graph has the following characteristics:
The maximum distance from a vertex to all the other vertices is considered the
Eccentricity of that vertex.
The vertex having minimum Eccentricity is considered the central point of the
graph.
The minimum value of Eccentricity from all vertices is considered the radius of a
connected graph.
Applications of Graph:
Different applications of Graphs are as follows:
The graph is used to represent the flow of computation.
It is used in modeling real-world networks.
The operating system uses Resource Allocation Graph.
Also used in the World Wide Web where the web pages represent the nodes.
Operations performed on a graph:
A graph is a non-linear data structure consisting of nodes and edges. Here are
some common operations performed on graphs:
Add Vertex: New vertices can be added to the graph to represent a new node.
Add Edge: Edges can be added between vertices to represent a relationship
between nodes.
Remove Vertex: Vertices can be removed from the graph by updating the
references of adjacent vertices to remove the reference to the current vertex.
Remove Edge: Edges can be removed by updating the references of the
adjacent vertices to remove the reference to the current edge.
Depth-First Search (DFS): A graph can be traversed by visiting the vertices in a
depth-first manner, following one path as far as possible before backtracking.
Breadth-First Search (BFS): A graph can be traversed by visiting the vertices in
a breadth-first manner, level by level outward from the starting vertex.
Shortest Path: The shortest path between two vertices can be determined
using algorithms such as Dijkstra's algorithm or the A* algorithm.
Connected Components: The connected components of a graph can be
determined by finding sets of vertices that are connected to each other but not
to any other vertices in the graph.
Cycle Detection: Cycles in a graph can be detected by checking for back
edges during a depth-first search.
These are some of the most common operations performed on graphs. The
specific operations and algorithms used may vary based on the requirements of
the problem and the programming language used. Graphs are commonly used
in applications such as computer networks, social networks, and routing
problems.
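A minimal sketch of these operations, using a dictionary as an adjacency list with BFS and DFS traversals. The helper names (add_edge, bfs, dfs) are illustrative choices.

```python
from collections import deque

def add_edge(graph, u, v):
    """Undirected edge: record each vertex in the other's adjacency list."""
    graph.setdefault(u, []).append(v)
    graph.setdefault(v, []).append(u)

def bfs(graph, start):
    """Visit vertices level by level using a queue."""
    order, seen, q = [], {start}, deque([start])
    while q:
        u = q.popleft()
        order.append(u)
        for v in graph.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return order

def dfs(graph, start, seen=None):
    """Visit vertices depth-first by recursing before moving to siblings."""
    if seen is None:
        seen = set()
    seen.add(start)
    order = [start]
    for v in graph.get(start, []):
        if v not in seen:
            order += dfs(graph, v, seen)
    return order

g = {}
for u, v in [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]:
    add_edge(g, u, v)
print(bfs(g, "A"))   # ['A', 'B', 'C', 'D']
print(dfs(g, "A"))   # ['A', 'B', 'D', 'C']
```

Removing a vertex or edge would follow the reverse pattern: delete the key, then strip it from each neighbour's adjacency list.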
Real-Life Applications of Graph:
One of the most common real-world examples of a graph is Google Maps
where cities are located as vertices and paths connecting those vertices are
located as edges of the graph.
A social network is also one real-world example of a graph where every person
on the network is a node, and all of their friendships on the network are the
edges of the graph.
A graph is also used to study molecules in physics and chemistry.
Advantages of data structures:

Improved data organization and storage efficiency.
Faster data retrieval and manipulation.
Facilitates the design of algorithms for solving complex problems.
Eases the task of updating and maintaining the data.
Provides a better understanding of the relationships between data elements.

Disadvantages of data structures:

Increased computational and memory overhead.
Difficulty in designing and implementing complex data structures.
Limited scalability and flexibility.
Complexity in debugging and testing.
Difficulty in modifying existing data structures.
Introduction, Abstractions, Abstract Data Types
Abstract Data type (ADT) is a type (or class) for objects whose behavior is
defined by a set of values and a set of operations. The definition of ADT only
mentions what operations are to be performed but not how these operations will
be implemented. It does not specify how data will be organized in memory and
what algorithms will be used for implementing the operations. It is called
“abstract” because it gives an implementation-independent view.
Features of ADT:
Abstract data types (ADTs) are a way of encapsulating data and
operations on that data into a single unit. Some of the key features of
ADTs include:
 Abstraction: The user does not need to know the implementation of the
data structure only essentials are provided.
 Better Conceptualization: ADT gives us a better conceptualization of the
real world.
 Robust: The program is robust and has the ability to catch errors.
 Encapsulation: ADTs hide the internal details of the data and provide a
public interface for users to interact with the data. This allows for easier
maintenance and modification of the data structure.
 Data Abstraction: ADTs provide a level of abstraction from the
implementation details of the data. Users only need to know the operations
that can be performed on the data, not how those operations are
implemented.
 Data Structure Independence: ADTs can be implemented using different
data structures, such as arrays or linked lists, without affecting the
functionality of the ADT.
 Information Hiding: ADTs can protect the integrity of the data by allowing
access only to authorized users and operations. This helps prevent errors
and misuse of the data.
 Modularity: ADTs can be combined with other ADTs to form larger, more
complex data structures. This allows for greater flexibility and modularity in
programming.
Advantages:
 Encapsulation: ADTs provide a way to encapsulate data and operations
into a single unit, making it easier to manage and modify the data structure.
 Abstraction: ADTs allow users to work with data structures without having
to know the implementation details, which can simplify programming and
reduce errors.
 Data Structure Independence: ADTs can be implemented using different
data structures, which can make it easier to adapt to changing needs and
requirements.
 Information Hiding: ADTs can protect the integrity of data by controlling
access and preventing unauthorized modifications.
 Modularity: ADTs can be combined with other ADTs to form more complex
data structures, which can increase flexibility and modularity in
programming.
Disadvantages:
 Overhead: Implementing ADTs can add overhead in terms of memory and
processing, which can affect performance.
 Complexity: ADTs can be complex to implement, especially for large and
complex data structures.
 Learning Curve: Using ADTs requires knowledge of their implementation
and usage, which can take time and effort to learn.
 Limited Flexibility: Some ADTs may be limited in their functionality or may
not be suitable for all types of data structures.
 Cost: Implementing ADTs may require additional resources and investment,
which can increase the cost of development.
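The "data structure independence" point above can be made concrete with a small sketch. Both classes below satisfy the same stack ADT interface (push, pop, is_empty) even though one hides a Python list and the other a linked chain; the class and function names are illustrative, not from any library.

```python
class ArrayStack:
    def __init__(self):
        self._data = []                 # hidden implementation detail

    def push(self, x):
        self._data.append(x)

    def pop(self):
        return self._data.pop()

    def is_empty(self):
        return not self._data

class LinkedStack:
    def __init__(self):
        self._top = None                # hidden implementation detail

    def push(self, x):
        self._top = (x, self._top)      # chain of (value, rest) pairs

    def pop(self):
        x, self._top = self._top
        return x

    def is_empty(self):
        return self._top is None

def drain(stack):
    """Works with either implementation: it uses only the ADT's interface."""
    out = []
    while not stack.is_empty():
        out.append(stack.pop())
    return out

for cls in (ArrayStack, LinkedStack):
    s = cls()
    for x in [1, 2, 3]:
        s.push(x)
    print(drain(s))   # [3, 2, 1] for both implementations
```

Client code such as drain never touches the private fields, so either implementation can be swapped in without changing it: that is the encapsulation and information hiding the ADT provides.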
Algorithm Analysis – Space Complexity, Time Complexity, Run-Time Analysis

Algorithms are a set of instructions that help us to get the expected output.
To judge the efficiency of an algorithm, we need an analyzing tool.

There are two techniques to measure the efficiency of an algorithm:

 Time Complexity: Time complexity is the time taken by an algorithm to execute.
 Space Complexity: Space complexity is the amount of memory used by an algorithm
while executing.
The analysis of algorithms is to find the complexity of algorithms in terms
of time and storage. In theoretical analysis, the analysis is performed based on the
description of the algorithm, or the operation of the program or function.

The theoretical analysis takes all the possible input into consideration and
assumes that the time taken for executing the basic operation is constant.

The primitive or basic operations that take constant time are given as
follows.

 Declarations
 Assignments
 Arithmetic Operations
 Comparison statements
 Calling functions
 Return statement

Let us consider the following code.

i = 0                 #declaration statement
count = 0             #declaration statement
while(i < N):         #comparison statement
    count += 1        #arithmetic operation
    i += 1            #arithmetic operation

Operations                 Frequency

Declaration statements     2
Comparison statements      N
Arithmetic operations      2N

Since the two arithmetic operations are inside the while loop, each is executed
N times. The total number of operations performed is 2 + N + 2N = 2 + 3N. When
the value of N is large, the constant values don't make any difference and are
insignificant. Hence, we ignore the constant values.

Best, Average, and Worst cases

An algorithm can perform differently based on the given input. There are
three cases to analyze an algorithm. They are worst case, average case, and the
best case.

Let us consider the linear search algorithm. In a linear search algorithm, we
search for an element sequentially. In the following example, we perform linear
search on the list list_1 = [4, 6, 7, 1, 5, 2].

def linear_search(list_1, key):
    for i in range(0, len(list_1)):
        if key == list_1[i]:
            print(key, "is at index", i)

list_1 = [4, 6, 7, 1, 5, 2]
linear_search(list_1, 6)

#Output: 6 is at index 1
Best case: The best case is the minimum time required by an algorithm for
the execution of a program. If we are searching for an element in a list, and it is
present at the 0th index, then the number of comparisons to be performed is 1 time.
The time complexity will be O(1) i.e. constant which is the best case time.

Worst case: The worst case is the maximum time required by an algorithm
for the execution of a program. If we are searching for an element in the list of
length “n”, and it’s not present in the list or is present at the ending index, then the
number of comparisons to be performed will be “n” times.

Therefore, the time complexity will be O(n), which is the worst case time.

Average case: In average case analysis, we take the average time of all the
possible inputs. The average case time is given by,
Average case time = All possible cases time / Number of cases

In case, if we are searching for an element, and it is present at the first


location, then it will take 1 unit of time. If we are searching for an element, and it
is present at the second location, then it will take 2 units of time. Similarly, if we
are searching for an element present at the nth location, then it will take
the “n” units of time.

The total number of possible cases will be "n". Mathematically, we can write

Average time = (1 + 2 + 3 + ... + n) / n
             = (n(n + 1)) / (2n)
             = (n + 1)/2

For large "n", (n + 1)/2 grows linearly with n, so ignoring the constant factor
the average case time is proportional to n.
The best, worst, and average case times for the linear search algorithm are
Best case    B(n) = O(1)
Worst case   W(n) = O(n)
Average case A(n) = O(n)

Big O, Omega, and Theta Notations

Big O, Omega, and Theta notations are also called asymptotic notations. The
asymptotic analysis uses mathematical tools to measure the efficiency of an
algorithm.

For example, consider an algorithm whose time complexity is T(n) = n^2 + 2n + 5.
For large values of n, the part 2n + 5 is insignificant compared to the n^2 part.

In asymptotic notations, we are only concerned with how the function grows as
the input n grows, and T(n) depends entirely on the n^2 term. Therefore, as per
the asymptotic analysis, we ignore the insignificant parts and constants of an
expression.
Big O

The big O notation measures the upper bound on the running time of an
algorithm. Big O time complexity describes the worst case. Consider a
function f(n). We choose another function g(n) such that f(n) <= c.g(n), for n >
n0 and c > 0. Here, c and n0 represent constant values. If the equation is
satisfied, f(n) = O(g(n)).
Big Omega

The big Omega measures the lower bound on the running time of an
algorithm. Big Omega time complexity describes the best case. Consider a
function f(n). We choose another function g(n) such that c.g(n) <= f(n), for n >
n0 and c > 0. Here, c and n0 represent constant. If the equation is satisfied, f(n) =
Omega(g(n)).

Big Theta

The big Theta measures the time between the upper and the lower bound of
an algorithm. Big Theta describes the time complexity within the bounds of the
best and worst case.

Consider a function f(n). We choose another function g(n) such that c1.g(n)
<= f(n) <= c2.g(n), for n > n0 and c1, c2 > 0. Here, c1, c2, and n0 represent
constant values. If the equation is satisfied, f(n) = Theta(g(n)).

Constant       O(1)

Logarithmic    O(log N)

Linear         O(N)

Quasilinear    O(N log N)

Quadratic      O(N^2)

Cubic          O(N^3)

Exponential    O(2^N)
We can analyze the efficiency of an algorithm based on its performance as
the size of input grows. The time complexity of an algorithm is commonly
expressed in Big O notation

We use the worst-case time complexity because it ensures that the running
time of an algorithm will not exceed the worst-case time. The performance
classification is given as follows.

Graphical representation of different Time Complexities

Time Complexity Examples


O(1) – Constant Time Complexity

For the constant time complexity, the running time of an algorithm doesn’t
change and remains constant irrespective of the size of the input data. Consider a
list list_1 = [4, 6, 7, 1, 5, 2]. Now, accessing a specific element using indexing
takes a constant amount of time.
list_1 = [4, 6, 7, 1, 5, 2]
print(list_1[4]) #accessing element at the 4th index.
#Output: 5
O(log N) – Logarithmic Time Complexity

In logarithmic time complexity, the running time of an algorithm is
proportional to the logarithm of the input size. For example, the binary search
algorithm takes log(N) time.

Consider a sorted list list_1 = [1, 2, 3, 6, 7]. We use binary search to find the
key element 6. The binary search algorithm divides the input size in half in each
iteration. Therefore, the time complexity of the algorithm reduces to O(log N).

def binary_search(list_1, low, high, key):
    while(low <= high):
        mid = low + (high - low)//2
        if list_1[mid] == key:
            return mid
        elif list_1[mid] < key:
            low = mid + 1
        else:
            high = mid - 1
    return "Not Found"

list_1 = [1, 2, 3, 6, 7]
low = 0
high = len(list_1) - 1
key = 6
ind = binary_search(list_1, low, high, key)
print("Element 6 is present at index: ", ind)
#Output: Element 6 is present at index: 3
O(N) – Linear Time Complexity

In linear time complexity, the running time of an algorithm increases
linearly with an increase in the size of the input data. For example, adding all
the elements present in a list is proportional to the length of the list.

Consider a list list_1 = [4, 6, 7, 1, 5, 2]. In the following code, we initialize a
variable add with 0. A for loop iterates over list_1 and adds all the elements to
the variable add.

add = 0
list_1 = [4, 6, 7, 1, 5, 2]
for i in range(0, len(list_1)):
    add += list_1[i]
print(add)
#Output: 25
O(N^2) – Quadratic Time Complexity

In quadratic time complexity, the running time of an algorithm is
proportional to the square of the input size. For example, consider a matrix of
size N x N. Summation of all the elements of the matrix will take O(N^2) time.

Consider a matrix mat_1 = [[1, 2, 3], [1, 1, 1], [5, 7, 8]]. We initialize a
variable add with 0 and use a nested for loop to traverse each element of the
matrix. In each iteration, we add the element to the variable add.

mat_1 = [[1, 2, 3], [1, 1, 1], [5, 7, 8]]
add = 0
for i in range(len(mat_1)):
    for j in range(len(mat_1[0])):
        add += mat_1[i][j]
print(add)
#Output: 29
O(2^N) – Exponential Time Complexity

In exponential time complexity, the running time of an algorithm doubles
with each addition to the input data. For example, computing a Fibonacci
number using recursion takes exponential time.

def fib(n):
    if n < 0 or int(n) != n:
        return "Not defined"
    elif n == 0 or n == 1:
        return n
    else:
        return fib(n-1) + fib(n-2)
print(fib(4)) #prints the 4th Fibonacci number in the series
#Output: 3
Space Complexity

Space complexity is the memory consumed by an algorithm while
executing. We can express the space complexity in terms of big O notation, such
as O(1), O(N), O(N log N), O(N^2), etc.

For example, the space complexity for a list of length N will be O(N) and
the space complexity for a matrix of size N x N will be O(N^2).

In recursive algorithms, the stack space is also taken into account. For
example, consider the following program. In each iteration, we call the
function my_func() recursively, and each recursive call adds a layer to the stack
memory. Hence, the space complexity is O(N).

def my_func(num):
    if num > 0:
        my_func(num - 1)
        print(num, end = " ")

my_func(5)

However, N calls don’t mean that the space complexity will be O(N). Let us
consider two functions, func_1 and func_2. In the function func_1, we call the
function func_2 for N number of times. But, all these calls will not be added to the
stack memory simultaneously. Thus, we need only O(1) space.
def func_1(n):
    for i in range(n):
        add = func_2(i)
        print(add)

def func_2(x):
    return x + 1
Adding vs Multiplying Time Complexities

Let us look at the following example. There are two for loops (loop A
and loop B), executing over ranges of M and N respectively.

In Example 1, loop B starts executing after the execution of loop A is
completed. Hence, the time complexity will be O(M + N). In Example 2, for each
iteration of loop A, loop B is executed. Hence, the time complexity will be
O(M * N).
#Example 1
for i in range(M):         #loop A
    print(i)
for j in range(N):         #loop B
    print(j)

#Example 2
for i in range(M):         #loop A
    for j in range(N):     #loop B
        print(i, j)
Time Complexity of Recursive Calls

Consider the following example. We declare a user-defined
function my_func() that takes a number "n" as a parameter and calls itself
recursively.

def my_func(n):          #----------> T(n)
    if n > 0:            #----------> O(1)
        print(n)         #----------> O(1)
        my_func(n-1)     #----------> T(n-1)

my_func(5)

Let us assume that the total time taken by my_func() is T(n). Hence, we can
say that the recursive statement my_func(n-1) will take T(n-1) time. The other
basic operations and statements take 1 unit of time.
Now, we can write
T(n) = O(1) + O(1) + T(n-1)
T(n) = T(n-1) + O(1)
#Taking constant time as 1
T(n) = T(n-1) + 1

The statements inside the if block will be executed, if n>0, and the time will
be T(n-1) +1. If n=0, then only the conditional statement will be tested and the time
will be 1 unit. Thus, the recurrence relation for T(n) is given as
T(n) = T(n-1) + 1, if n > 0
     = 1,          if n = 0

Now, T(n) = T(n-1) + 1     -----------------(1)

Similarly, we can write
T(n-1) = T(n-2) + 1        -----------------(2)
T(n-2) = T(n-3) + 1        -----------------(3)

Substituting (2) in (1), we get
T(n) = T(n-2) + 2          -----------------(4)

Substituting (3) in (4), we get
T(n) = T(n-3) + 3          -----------------(5)

If we continue this for k times, then
T(n) = T(n-k) + k          -----------------(6)

We know that T(0) = 1,
so we assume n - k = 0
Thus, k = n
By substituting the value of k in (6), we get
T(n) = T(n-n) + n
T(n) = T(0) + n
T(n) = 1 + n

Hence, we can say that T(n) takes O(n) time.
Measuring a Recursive Algorithm that Makes Multiple Calls

Consider the example given below. We define a function f(n) that takes a
number n as input and calls itself recursively 2 times.

def f(n):
    if n <= 1:
        return 1
    return f(n-1) + f(n-2)

f(4)

Consider each recursive call as a node of a binary tree.

 At level 0 we have 1 node, which can be expressed as 2^0.
 At level 1 we have 2 nodes, which can be expressed as 2^1.
 At level 2 we have 4 nodes, which can be expressed as 2^2.
 At level 3 we have 8 nodes, which can be expressed as 2^3.

We know that 2^0 + 2^1 + 2^2 + ... + 2^n = 2^(n+1) - 1. Hence, the time
complexity will be O(2^n), ignoring the insignificant and constant terms.

When multiple recursive calls are made, we can represent the time complexity
as O(branches^depth). Here, branches represents the number of children of each
node, i.e., the number of recursive calls in each iteration, and depth represents
the parameter in the recursive function.
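This exponential growth can be observed directly by counting recursive calls. The fib_calls helper below is an illustrative sketch (not from the text): it computes the same Fibonacci recurrence while tallying every call in a mutable counter.

```python
def fib_calls(n, counter):
    """Fibonacci with base cases fib(0)=0, fib(1)=1, counting every call."""
    counter[0] += 1
    if n <= 1:
        return n
    return fib_calls(n - 1, counter) + fib_calls(n - 2, counter)

for n in [5, 10, 15, 20]:
    c = [0]
    fib_calls(n, c)
    print(n, c[0])   # the call count grows roughly exponentially in n
```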

Algorithm design strategies:

Algorithms can be classified in various ways:

1. Implementation Method
2. Design Method
3. Design Approaches
4. Other Classifications

The classification of algorithms is important for several reasons:

 Organization: Algorithms can be very complex; by classifying them, it becomes
easier to organize, understand, and compare different algorithms.
 Problem Solving: Different problems require different algorithms, and
having a classification helps identify the best algorithm for a
particular problem.
 Performance Comparison: By classifying algorithms, it is possible to
compare their performance in terms of time and space complexity,
making it easier to choose the best algorithm for a particular use case.
 Reusability: By classifying algorithms, it becomes easier to re-use
existing algorithms for similar problems, thereby reducing development
time and improving efficiency.
 Research: Classifying algorithms is essential for research and
development in computer science, as it helps to identify new algorithms
and improve existing ones.
Overall, the classification of algorithms plays a crucial role in computer
science and helps to improve the efficiency and effectiveness of solving
problems.

Brute force methodology: Bubble Sort

A classic brute-force algorithm is bubble sort. Its steps can be written as
follows (A is the array, n is the number of elements, l is the current length of
the unsorted part, and E counts the exchanges made in a pass):

Step 1: Repeat through Step 4 while (p ≤ n-1)
Step 2: Set E ← 0                      ➤ Initializing exchange variable.
Step 3: Comparison loop
        Repeat for i ← 1, 2, ..., l-1
            if (A[i] > A[i+1]) then
                set A[i] ↔ A[i+1]      ➤ Exchanging values.
                set E ← E + 1
Step 4: Finish, or reduce the size
        if (E = 0) then
            exit
        else
            set l ← l - 1
How Bubble Sort Works

1. The bubble sort starts with the very first index and makes it the bubble element.
Then it compares the bubble element, which is currently our first index element,
with the next element. If the bubble element is greater than the next element,
the two are swapped.
After swapping, the second element will become the bubble element. Now we
will compare the second element with the third as we did in the earlier step and
swap them if required. The same process is followed until the last element.
2. We will follow the same process for the rest of the iterations. After each
iteration, we will notice that the largest element present in the unsorted array has
reached the last index.

For each iteration, the bubble sort will compare up to the last unsorted element.

Once all the elements get sorted in the ascending order, the algorithm will get
terminated.

Consider the following example of an unsorted array that we will sort with the help of
the Bubble Sort algorithm.

Initially, the array is unsorted.

Pass 1:

o Compare a2 and a3

As a2 < a3, the array will remain as it is.

o Compare a3 and a4
Here a3 > a4, so we will again swap both of them.

Pass 2:

o Compare a0 and a1

As a0 < a1 so the array will remain as it is.

o Compare a1 and a2

Here a1 < a2, so the array will remain as it is.

o Compare a2 and a3
In this case, a2 > a3, so both of them will get swapped.

Pass 3:

o Compare a0 and a1

As a0 < a1 so the array will remain as it is.

o Compare a1 and a2

Now a1 > a2, so both of them will get swapped.


Pass 4:

o Compare a0 and a1

Here a0 > a1, so we will swap both of them.

Hence the array is sorted as no more swapping is required.

Complexity Analysis of Bubble Sort


Input: Given n input elements.

Output: Number of steps incurred to sort a list.

Logic: If we are given n elements, then in the first pass, it will do n-1 comparisons; in the
second pass, it will do n-2; in the third pass, it will do n-3; and so on. Thus, the total
number of comparisons can be found by:

(n-1) + (n-2) + (n-3) + ...... + 1 = n(n-1)/2, i.e., O(n2)

Therefore, the bubble sort algorithm encompasses a time complexity of O(n2) and a
space complexity of O(1), because it requires only a constant amount of extra memory
(a temp variable for swapping).

Time Complexities:
o Best Case Complexity: The bubble sort algorithm has a best-case time complexity
of O(n) for the already sorted array.
o Average Case Complexity: The average-case time complexity for the bubble sort
algorithm is O(n2), which happens when 2 or more elements are in jumbled order, i.e., neither
in the ascending order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which occurs
when we sort the descending order of an array into the ascending order.

Advantages of Bubble Sort


1. Easily understandable.
2. Does not necessitate any extra memory.
3. The code can be written easily for this algorithm.
4. Minimal space requirement compared with other sorting algorithms.


Disadvantages of Bubble Sort


1. It does not work well when we have large unsorted lists; it necessitates more
resources and ends up taking a lot of time.
2. It is only meant for academic purposes, not for practical implementations.
3. It involves on the order of n2 steps to sort a list.

Selection Sort
The selection sort improves on the bubble sort by making only a single swap for each pass
through the list. In order to do this, a selection sort searches for the biggest value
as it makes a pass and, after finishing the pass, places it in the proper location.
As with a bubble sort, after the first pass, the biggest item is in the right place.
After the second pass, the next biggest is in place. This procedure continues and
requires n-1 passes to sort n items, since the last item must be in place after the
(n-1)th pass.

ALGORITHM: SELECTION SORT (A)


1. k ← length [A]
2. for j ← 1 to k-1
3. smallest ← j
4. for i ← j + 1 to k
5. if A [i] < A [smallest]
6. then smallest ← i
7. exchange (A [j], A [smallest])
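The pseudocode above corresponds to the following Python sketch (illustrative only, not the source's own code), shown on a small example array:

```python
def selection_sort(a):
    """Sort list a in place: one swap per pass, as in the pseudocode."""
    k = len(a)
    for j in range(k - 1):
        smallest = j                    # assume current position holds the minimum
        for i in range(j + 1, k):
            if a[i] < a[smallest]:
                smallest = i            # remember the index of the new minimum
        a[j], a[smallest] = a[smallest], a[j]   # single exchange per pass

arr = [7, 4, 3, 6, 5]
selection_sort(arr)
print(arr)  # [3, 4, 5, 6, 7]
```

Note that only the index of the minimum is updated during a pass; the single swap happens at the end of each pass.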

How Selection Sort works


1. In the selection sort, first of all, we set the initial element as the minimum.
2. Now we will compare the minimum with the second element. If the second element
turns out to be smaller than the minimum, we will make it the new minimum, and then
move on to compare the minimum with the third element.
3. Else, if the second element is greater than the minimum, which is our first element,
we will do nothing, move on to the third element, and then compare it with the
minimum.
We will repeat this process until we reach the last element.
4. After the completion of each iteration, we will notice that our minimum has reached the
start of the unsorted list.
5. For each iteration, we will start the indexing from the first element of the unsorted list.
We will repeat the Steps from 1 to 4 until the list gets sorted or all the elements get
correctly positioned.
Consider the following example of an unsorted array that we will sort with the help of
the Selection Sort algorithm.

A [] = (7, 4, 3, 6, 5)

1st Iteration:

Set minimum = 7

o Compare a0 and a1

As, a0 > a1, set minimum = 4.

o Compare a1 and a2

As, a1 > a2, set minimum = 3.


o Compare a2 and a3

As, a2 < a3, set minimum= 3.

o Compare a2 and a4

As, a2 < a4, set minimum =3.

Since 3 is the smallest element, we will swap a0 and a2.

2nd Iteration:

Set minimum = 4

o Compare a1 and a2

As, a1 < a2, set minimum = 4.


o Compare a1 and a3

As, a1 < a3, set minimum = 4.

o Compare a1 and a4

Again, a1 < a4, set minimum = 4.

Since the minimum is already placed in the correct position, so there will be no
swapping.

3rd Iteration:

Set minimum = 7

o Compare a2 and a3
As, a2 > a3, set minimum = 6.

o Compare a3 and a4

As, a3 > a4, set minimum = 5.

Since 5 is the smallest element among the leftover unsorted elements, so we will swap 7
and 5.

4th Iteration:

Set minimum = 6

o Compare a3 and a4

As a3 < a4, set minimum = 6.

Since the minimum is already placed in the correct position, so there will be no
swapping.
Complexity Analysis of Selection Sort
Input: Given n input elements.

Output: Number of steps incurred to sort a list.

Logic: If we are given n elements, then in the first pass, it will do n-1 comparisons; in the
second pass, it will do n-2; in the third pass, it will do n-3; and so on. Thus, the total
number of comparisons can be found by:

(n-1) + (n-2) + (n-3) + ...... + 1 = n(n-1)/2, i.e., O(n2)

Therefore, the selection sort algorithm encompasses a time complexity of O(n2) and a
space complexity of O(1), because it requires only a constant amount of extra memory
(a temp variable for swapping).

Time Complexities:
o Best Case Complexity: The selection sort algorithm has a best-case time complexity
of O(n2) for the already sorted array.
o Average Case Complexity: The average-case time complexity for the selection sort
algorithm is O(n2), in which the existing elements are in jumbled order, i.e., neither in
the ascending order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which occurs
when we sort the descending order of an array into the ascending order.
In the selection sort algorithm, the time complexity is O(n2) in all three cases. This is
because, in each step, we are required to find the minimum element so that it can be
placed in its correct position, and finding it requires traversing the complete unsorted
portion of the array.

Linear Search

Linear search is a brute-force approach where elements in


the list or array are sequentially checked from the beginning to
the end until the desired element is found. The algorithm
compares each element with the target value until a match is
found or the entire list has been traversed. If the target value is
found, it returns the index of the matching element; otherwise, it returns
a special value, such as -1, to indicate that the value was not
found. It is widely used to search for an element in an unordered
list, i.e., a list in which the items are not sorted.

Linear Search Algorithm


LinearSearch(array, key)
  for each item in the array
    if item == key
      return its index
  return -1

Working of Linear Search Algorithm in Data Structures

Let's suppose we need to find element 6 in the given array or list.


We will work according to the above-given algorithm.

1. Start from the first element, and compare the key=6 with
each element x.

2. If x == key, return the index.


3. Else, if every element has been checked without a match, return -1 (not found).

According to the algorithm,

1. Start at the first element of the list or array.


2. Compare the target value with the current element.
3. If the target value matches the current element, return the
index of the current element and terminate the search.
4. If the target value does not match the current element,
move to the next element in the list or array.
5. Repeat steps 2-4 until the target value is found or until
every element has been checked.
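The steps above can be sketched in Python as a short function (an illustrative sketch, not from the original text):

```python
def linear_search(arr, key):
    """Return the index of key in arr, or -1 if it is not present."""
    for index, item in enumerate(arr):
        if item == key:        # match found: report its position
            return index
    return -1                  # whole list traversed without a match

print(linear_search([10, 3, 6, 8], 6))   # 2
print(linear_search([10, 3, 6, 8], 7))   # -1
```

Because the list need not be sorted, this works on any sequence, at the cost of checking up to n elements.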

Complexity Analysis of Linear Search


Algorithm
1. Time Complexity
Case Time Complexity

Best Case O(1)

Average Case O(n)

Worst Case O(n)

2. Space Complexity: As we are not using any significant


extra memory in linear search, the space complexity is
constant, i.e. O(1)

Advantages of Linear Search


1. Simplicity: Linear search is a simple algorithm to
understand and implement, making it easy to write and
debug. It is an ideal algorithm to use when the list of items
to be searched is small.
2. Efficient for small data sets: For small data sets, linear
search can be more efficient than other search algorithms,
such as binary search, due to the overhead associated with
those algorithms.
3. Memory efficiency: Linear search requires minimal memory,
making it a good choice for systems with limited memory
resources.
4. Flexibility: Linear search can be used for unsorted lists,
which makes it a useful algorithm for cases when the data
is not sorted or when sorting the data is expensive or
impossible.

Disadvantages of Linear Search


1. Time complexity: The worst-case time complexity of linear
search is O(n), where n is the size of the array. This means
that in the worst-case scenario, the linear search may have
to check every element in the array, resulting in a time-
consuming search operation. This makes linear search
inefficient for large arrays, as the search time increases
linearly with the size of the array.
2. Limited applicability: Linear search is not well-suited for
searching sorted arrays, as binary search is a more
efficient algorithm for this purpose.
3. Inefficient for multiple searches: If you need to perform
multiple searches on the same array, a linear search can be
inefficient, as it has to search the entire array every time. In
contrast, other search algorithms like binary search can take
advantage of a pre-sorted array to speed up subsequent
searches.
4. Space complexity: Linear search has a space complexity
of O(1), meaning it requires only a constant amount of extra memory to
perform the search. The array itself, however, must reside in
memory, which can be a limitation for very large data sets.

Decrease and conquer


Decrease and conquer is a technique used to solve problems by reducing the
size of the input data at each step of the solution process. This technique is
similar to divide-and-conquer, in that it breaks down a problem into smaller
subproblems, but the difference is that in decrease-and-conquer, the size of the
input data is reduced at each step. The technique is used when it’s easier to
solve a smaller version of the problem, and the solution to the smaller problem
can be used to find the solution to the original problem.
1. Some examples of problems that can be solved using the decrease-and-
conquer technique include binary search, finding the maximum or minimum
element in an array, and finding the closest pair of points in a set of points.
2. The main advantage of decrease-and-conquer is that it often leads to
efficient algorithms, as the size of the input data is reduced at each step,
reducing the time and space complexity of the solution. However, it’s
important to choose the right strategy for reducing the size of the input data,
as a poor choice can lead to an inefficient algorithm.
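As a small illustration (not from the source text), the "find the maximum element" problem mentioned above can be solved by decrease-by-one: the maximum of n elements is the larger of the last element and the maximum of the remaining n-1 elements:

```python
def find_max(a):
    """Decrease-and-conquer: reduce an n-element instance to an
    (n-1)-element one, then combine with the last element."""
    if len(a) == 1:
        return a[0]               # smallest instance: a single element
    rest = find_max(a[:-1])       # solve the decreased (n-1) instance
    return a[-1] if a[-1] > rest else rest

print(find_max([3, 9, 4, 7]))   # 9
```

Each call shrinks the input by exactly one element, which is the defining feature of the decrease-by-a-constant variant of this technique.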

Advantages of Decrease and Conquer:

1. Simplicity: Decrease-and-conquer is often simpler to implement compared to


other techniques like dynamic programming or divide-and-conquer.
2. Efficient Algorithms: The technique often leads to efficient algorithms as the
size of the input data is reduced at each step, reducing the time and space
complexity of the solution.
3. Problem-Specific: The technique is well-suited for specific problems where
it’s easier to solve a smaller version of the problem.

Disadvantages of Decrease and Conquer:

1. Problem-Specific: The technique is not applicable to all problems and may


not be suitable for more complex problems.
2. Implementation Complexity: The technique can be more complex to
implement when compared to other techniques like divide-and-conquer, and
may require more careful planning.

Insertion sort
Insertion sort is one of the simplest sorting algorithms because it sorts a
single element at a time. It is not the best sorting algorithm in terms of
performance, but it is slightly more efficient than selection sort and bubble sort in
practical scenarios. It is an intuitive sorting technique.

Let's consider the example of cards to have a better understanding of the logic behind
the insertion sort.

Suppose we have a set of cards in our hand, such that we want to arrange these cards in
ascending order. To sort these cards, we have a number of intuitive ways.

One such thing we can do is initially we can hold all of the cards in our left hand, and we
can start taking cards one after other from the left hand, followed by building a sorted
arrangement in the right hand.

Assuming the first card to be already sorted, we will select the next unsorted card. If it
is found to be greater than the sorted cards, we will simply place it on
the right side; otherwise, to the left side. At any stage during this whole process, the
left hand will hold the unsorted cards, and the right hand the sorted ones.

In the same way, we will sort the rest of the unsorted cards by placing them in the
correct position. At each iteration, the insertion algorithm places an unsorted element at
its right place.

ALGORITHM: INSERTION SORT (A)


1. for j = 2 to A.length
2. key = A[j]
3. // Insert A[j] into the sorted sequence A[1.. j - 1]
4. i = j - 1
5. while i > 0 and A[i] > key
6. A[i + 1] = A[i]
7. i = i - 1
8. A[i + 1] = key
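In Python (0-indexed, unlike the 1-indexed pseudocode), an illustrative sketch of the same algorithm is:

```python
def insertion_sort(a):
    """Sort list a in place by inserting each element into the
    already-sorted prefix a[0..j-1]."""
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:   # shift larger elements one place right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                 # insert key at its correct position

arr = [41, 22, 63, 14, 55, 36]
insertion_sort(arr)
print(arr)  # [14, 22, 36, 41, 55, 63]
```

The shifting loop is what makes the best case O(n): on an already sorted array the while condition fails immediately for every j.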

How Insertion Sort Works


1. We will start by assuming the very first element of the array is already sorted. Inside
the key, we will store the second element.
Next, we will compare our first element with the key, such that if the key is found to be
smaller than the first element, we will interchange their indexes or place the key at the
first index. After doing this, we will notice that the first two elements are sorted.

2. Now, we will move on to the third element and compare it with the left-hand side
elements. If it is the smallest element, then we will place the third element at the first
index.

Else if it is greater than the first element and smaller than the second element, then we
will interchange its position with the third element and place it after the first element.
After doing this, we will have our first three elements in a sorted manner.

3. Similarly, we will sort the rest of the elements and place them in their correct position.

Consider the following example of an unsorted array that we will sort with the help of
the Insertion Sort algorithm.

A = (41, 22, 63, 14, 55, 36)

Initially,

1st Iteration:

Set key = 22

Compare a1 with a0

Since a0 > a1, swap both of them.


2nd Iteration:

Set key = 63

Compare a2 with a1 and a0

Since a2 > a1 > a0, keep the array as it is.

3rd Iteration:

Set key = 14

Compare a3 with a2, a1 and a0


Since a3 is the smallest among all the elements on the left-hand side, place a3 at the
beginning of the array.

4th Iteration:

Set key = 55

Compare a4 with a3, a2, a1 and a0.

As a4 < a3, swap both of them.


5th Iteration:

Set key = 36

Compare a5 with a4, a3, a2, a1 and a0.

Since a5 < a2, so we will place the elements in their correct positions.

Hence the array is arranged in ascending order, so no more swapping is required.

Complexity Analysis of Insertion Sort


Input: Given n input elements.

Output: Number of steps incurred to sort a list.

Logic: If we are given n elements, then in the first pass, it will make n-1 comparisons; in
the second pass, it will do n-2; in the third pass, it will do n-3; and so on. Thus, the total
number of comparisons can be found by:

(n-1) + (n-2) + (n-3) + (n-4) + ...... + 1 = n(n-1)/2

i.e., O(n2)

Therefore, the insertion sort algorithm encompasses a time complexity of O(n2) and a
space complexity of O(1), because it requires only a constant amount of extra memory
(the key variable used while inserting elements).

Time Complexities:
o Best Case Complexity: The insertion sort algorithm has a best-case time complexity
of O(n) for the already sorted array, because here only the outer loop runs n times,
while the inner loop performs no shifting.
o Average Case Complexity: The average-case time complexity for the insertion sort
algorithm is O(n2), which is incurred when the existing elements are in jumbled order,
i.e., neither in the ascending order nor in the descending order.
o Worst Case Complexity: The worst-case time complexity is also O(n2), which occurs
when we sort the ascending order of an array into the descending order.
In this algorithm, every individual element is compared with the sorted elements to its
left, due to which up to n-1 comparisons are made for the nth element.

The insertion sort algorithm is highly recommended, especially when a few elements are
left for sorting or in case the array encompasses few elements.

Space Complexity
The insertion sort has a space complexity of O(1), since it uses only a single extra
variable, key.

Insertion Sort Applications


The insertion sort algorithm is used in the following cases:

o When the array contains only a few elements.


o When there exist few elements to sort.

Advantages of Insertion Sort


1. It is simple to implement.
2. It is efficient on small datasets.
3. It is stable (does not change the relative order of elements with equal keys)
4. It is in-place (only requires a constant amount O (1) of extra memory space).
5. It is an online algorithm, which can sort a list when it is received.

Divide and Conquer Introduction


Divide and Conquer is an algorithmic pattern. In this design method, we
take a problem on a huge input, break the input into minor pieces, solve the problem
on each of the small pieces, and then merge the piecewise solutions into a global
solution. This mechanism of solving the problem is called the Divide & Conquer
Strategy.

The Divide and Conquer approach solves a problem using the following three steps.

1. Divide the original problem into a set of subproblems.


2. Conquer: Solve every subproblem individually, recursively.
3. Combine: Put together the solutions of the subproblems to get the solution to the
whole problem.

Generally, we can follow the divide-and-conquer approach in a three-step process.


Applications of Divide and Conquer
Approach:
Following algorithms are based on the concept of the Divide and Conquer Technique:

1. Binary Search: The binary search algorithm is a searching algorithm, which is also called
a half-interval search or logarithmic search. It works by comparing the target value with
the middle element of a sorted array. After making the comparison, if the values
differ, then the half that cannot contain the target is eliminated, and the search
continues on the other half. We again consider the middle element of that half and
compare it with the target value. The process keeps repeating until the target value is
found. If the remaining half is empty when the search ends, it can be
concluded that the target is not present in the array.
2. Quicksort: It is the most efficient sorting algorithm, which is also known as partition-
exchange sort. It starts by selecting a pivot value from an array followed by dividing the
rest of the array elements into two sub-arrays. The partition is made by comparing each
of the elements with the pivot value. It compares whether the element holds a greater
value or lesser value than the pivot and then sort the arrays recursively.
3. Merge Sort: It is a sorting algorithm that sorts an array by making comparisons. It starts
by dividing an array into sub-array and then recursively sorts each of them. After the
sorting is done, it merges them back.
4. Closest Pair of Points: It is a problem of computational geometry. This algorithm
emphasizes finding out the closest pair of points in a metric space, given n points, such
that the distance between the pair of points should be minimal.
5. Strassen's Algorithm: It is an algorithm for matrix multiplication, which is named after
Volker Strassen. It has proven to be much faster than the traditional algorithm when
works on large matrices.
6. Cooley-Tukey Fast Fourier Transform (FFT) algorithm: The Fast Fourier Transform
algorithm is named after J. W. Cooley and John Tukey. It follows the Divide and Conquer
Approach and imposes a complexity of O(nlogn).
7. Karatsuba algorithm for fast multiplication: It is one of the fastest multiplication
algorithms, invented by Anatoly Karatsuba in 1960 and published in 1962. It multiplies
two n-digit numbers by recursively reducing the multiplication to roughly n^1.585
single-digit multiplications.

Advantages of Divide and Conquer


o Divide and Conquer has successfully solved some of the biggest problems, such as the
Tower of Hanoi, a mathematical puzzle. It is challenging to solve complicated problems
for which you have no basic idea, but with the help of the divide and conquer approach,
the effort is lessened, as it works by dividing the main problem into two halves and
then solving them recursively. Such algorithms are often much faster than other approaches.
o It efficiently uses cache memory without occupying much space because it solves simple
subproblems within the cache memory instead of accessing the slower main memory.
o It is more proficient than its counterpart, the Brute Force technique.
o Since these algorithms exhibit parallelism, they can be handled by systems
incorporating parallel processing without much modification.

Disadvantages of Divide and Conquer


o Since most of its algorithms are designed using recursion, they require
careful memory management.
o An explicit stack may overuse the space.
Merge Sort
Merge sort is yet another sorting algorithm that falls under the category of Divide and
Conquer technique. It is one of the best sorting techniques that successfully build a
recursive algorithm.

Divide and Conquer Strategy


In this technique, we segment a problem into two halves and solve them individually.
After finding the solution of each half, we merge them back to represent the solution of
the main problem.

Suppose we have an array A, such that our main concern will be to sort the subsection,
which starts at index p and ends at index r, represented by A[p..r].

Divide


If q is assumed to be the central point somewhere in between p and r, then we will
fragment the subarray A[p..r] into two subarrays A[p..q] and A[q+1..r].

Conquer

After splitting the array into two halves, the next step is to conquer. In this step, we
individually sort both of the subarrays A[p..q] and A[q+1..r]. If we have not yet reached
the base case, then we again follow the same procedure, i.e., we further segment
these subarrays and sort them separately.

Combine

Once the conquer step reaches the base step and we successfully get our sorted
subarrays A[p..q] and A[q+1..r], we merge them back to form a new sorted
array A[p..r].

Merge Sort algorithm


The MergeSort function keeps on splitting an array into two halves until a condition is
met where we try to perform MergeSort on a subarray of size 1, i.e., p == r.
And then, it combines the individually sorted subarrays into larger arrays until the whole
array is merged.

ALGORITHM: MERGE-SORT (A, p, r)
1. If p < r
2. Then q ← ⌊(p + r)/2⌋
3. MERGE-SORT (A, p, q)
4. MERGE-SORT (A, q+1, r)
5. MERGE (A, p, q, r)

Here we called MergeSort(A, 0, length(A)-1) to sort the complete array.

As you can see in the image given below, the merge sort algorithm recursively divides
the array into halves until the base condition is met, where we are left with only 1
element in the array. And then, the merge function picks up the sorted sub-arrays and
merge them back to sort the entire array.

The following figure illustrates the dividing (splitting) procedure.

FUNCTION: MERGE (A, p, q, r)

1. n1 = q - p + 1
2. n2 = r - q
3. create arrays L[1.....n1 + 1] and R[1.....n2 + 1]
4. for i ← 1 to n1
5. do L[i] ← A[p + i - 1]
6. for j ← 1 to n2
7. do R[j] ← A[q + j]
8. L[n1 + 1] ← ∞
9. R[n2 + 1] ← ∞
10. i ← 1
11. j ← 1
12. for k ← p to r
13. do if L[i] ≤ R[j]
14. then A[k] ← L[i]
15. i ← i + 1
16. else A[k] ← R[j]
17. j ← j + 1
The merge step of Merge Sort


Mainly the recursive algorithm depends on a base case as well as its ability to merge
back the results derived from the base cases. Merge sort is no different algorithm, just
the fact here the merge step possesses more importance.
To any given problem, the merge step is one such solution that combines the two
individually sorted lists(arrays) to build one large sorted list(array).

The merge sort algorithm upholds three pointers, i.e., one for both of the two arrays and
the other one to preserve the final sorted array's current index.

Did you reach the end of either array?

No:
o Firstly, compare the current elements of both the arrays.
o Next, copy the smaller element into the sorted array.
o Lastly, move the pointer of the array containing the smaller element.

Yes:
o Simply copy the rest of the elements of the non-empty array.

Merge( ) Function Explained Step-By-Step


Consider the following example of an unsorted array, which we are going to sort with
the help of the Merge Sort algorithm.

A = (36, 25, 40, 2, 7, 80, 15)

Step1: The merge sort algorithm iteratively divides an array into equal halves until we
achieve an atomic value. In case if there are an odd number of elements in an array,
then one of the halves will have more elements than the other half.

Step2: After dividing an array into two subarrays, we will notice that it did not hamper
the order of elements as they were in the original array. After now, we will further divide
these two arrays into other halves.

Step3: Again, we will divide these arrays until we achieve an atomic value, i.e., a value
that cannot be further divided.

Step4: Next, we will merge them back in the same way as they were broken down.

Step5: For each list, we will first compare the element and then combine them to form a
new sorted list.

Step6: In the next iteration, we will compare the lists of two data values and merge
them back into lists of four data values, all placed in a sorted manner.
Hence the array is sorted.

Analysis of Merge Sort:


Let T (n) be the total time taken by the Merge Sort algorithm.

o Sorting two halves will take at most 2T(n/2) time.

o When we merge the sorted lists, we come up with a total of n-1 comparisons, because the
last element which is left will simply be copied down into the combined list with no
comparison.

Thus, the relational formula will be

T (n) = 2T(n/2) + n - 1

But we ignore the '-1' because the element will take some time to be copied into the merged
lists. So

T (n) = 2T(n/2) + n ......equation 1

Note: Stopping Condition T (1) = 0 because at last, there will be only 1 element left
that needs to be copied, and there will be no comparison.

Expanding equation 1 repeatedly:

T (n) = 2[2T(n/4) + n/2] + n = 4T(n/4) + 2n
T (n) = 4[2T(n/8) + n/4] + 2n = 8T(n/8) + 3n
......
T (n) = 2^i T(n/2^i) + i·n

From the Stopping Condition, the expansion ends when n/2^i = 1, i.e., n = 2^i.

Apply log on both sides:

log n = log 2^i
log n = i log 2
log n / log 2 = i
log2 n = i

Substituting i = log2 n back:

T (n) = n·T(1) + n log2 n = n log2 n

Hence T (n) = O(n log n).

Best Case Complexity: The merge sort algorithm has a best-case time complexity
of O(n*log n) for the already sorted array.

Average Case Complexity: The average-case time complexity for the merge sort
algorithm is O(n*log n), which happens when 2 or more elements are in jumbled order, i.e.,
neither in the ascending order nor in the descending order.

Worst Case Complexity: The worst-case time complexity is also O(n*log n), which
occurs when we sort the descending order of an array into the ascending order.

Space Complexity: The space complexity of merge sort is O(n).

Merge Sort Applications


The concept of merge sort is applicable in the following areas:

o Inversion count problem


o External sorting
o E-commerce applications

Quick sort
It is an algorithm of Divide & Conquer type.

Divide: Rearrange the elements and split the array into two sub-arrays with a pivot
element in between, such that each element in the left sub-array is less than or equal to
the pivot element and each element in the right sub-array is larger than the pivot
element.

Conquer: Recursively, sort two sub arrays.

Combine: Combine the already sorted array.

Algorithm:

QUICKSORT (array A, int m, int n)

1 if (n > m)
2 then
3 i ← a random index from [m, n]
4 swap A [i] with A[m]
5 o ← PARTITION (A, m, n)
6 QUICKSORT (A, m, o - 1)
7 QUICKSORT (A, o + 1, n)

Partition Algorithm:
Partition algorithm rearranges the sub arrays in a place.

PARTITION (array A, int m, int n)

1 x ← A[m]
2 o ← m
3 for p ← m + 1 to n
4 do if (A[p] < x)
5 then o ← o + 1
6 swap A[o] with A[p]
7 swap A[m] with A[o]
8 return o
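The two routines above can be sketched in Python (an illustrative rendering, not the source's own code; random.randint stands in for "a random index from [m, n]"):

```python
import random

def partition(a, m, n):
    """Partition a[m..n] around the pivot a[m]; return the pivot's final index."""
    x = a[m]                          # pivot value
    o = m
    for p in range(m + 1, n + 1):
        if a[p] < x:                  # element belongs in the left sub-array
            o += 1
            a[o], a[p] = a[p], a[o]
    a[m], a[o] = a[o], a[m]           # place the pivot between the sub-arrays
    return o

def quicksort(a, m, n):
    """Sort a[m..n] in place with a randomly chosen pivot."""
    if n > m:
        i = random.randint(m, n)      # random index from [m, n]
        a[i], a[m] = a[m], a[i]
        o = partition(a, m, n)
        quicksort(a, m, o - 1)
        quicksort(a, o + 1, n)

arr = [44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88]
quicksort(arr, 0, len(arr) - 1)
print(arr)  # [11, 22, 33, 40, 44, 55, 60, 77, 88, 90, 99]
```

Randomizing the pivot makes the worst-case input (an already sorted array) no more likely than any other permutation, which is why the average case O(n log n) applies in practice.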

Figure: shows the execution trace of the partition algorithm.

Example of Quick Sort:


1. 44 33 11 55 77 90 40 60 99 22 88

Let 44 be the Pivot element and scanning done from right to left

Comparing 44 to the right-side elements, and if right-side elements


are smaller than 44, then swap it. As 22 is smaller than 44 so swap them.

22 33 11 55 77 90 40 60 99 44 88

Now comparing 44 to the left-side elements, where the element must be greater than 44
to swap them. As 55 is greater than 44, swap them.

22 33 11 44 77 90 40 60 99 55 88
Recursively, repeating steps 1 & steps 2 until we get two lists one left from pivot
element 44 & one right from pivot element.

22 33 11 40 77 90 44 60 99 55 88

Swap with 77:

22 33 11 40 44 90 77 60 99 55 88

Now, the element on the right side and left side are greater than and smaller
than 44 respectively.

Now we get two sub-lists, one to the left of the pivot element 44 and one to its right.

These sub-lists are sorted by the same process as above and then placed side by side.

Merging Sublists:

Merging the two sorted sub-lists yields the final SORTED LIST.

Worst Case Analysis: It is the case when the items are already in sorted form and we try to
sort them again. This takes a lot of time and space.

Equation:
1. T (n) =T(1)+T(n-1)+n

T (1) is time taken by pivot element.

T (n-1) is time taken by remaining element except for pivot element.

n: the number of comparisons required for every element to identify its exact
position.


If, for example, we compare the first (pivot) element with the other elements of a 6-item list, there will be 5 comparisons.

It means there will be n comparisons if there are n items.


Relational Formula for Worst Case:

T (n) = T (1) + T (n-1) + n
T (n) = 2T (1) + T (n-2) + (n-1) + n
T (n) = 3T (1) + T (n-3) + (n-2) + (n-1) + n
T (n) = 4T (1) + T (n-4) + (n-3) + (n-2) + (n-1) + n

Note: for making T (n-4) as T (1) we will put (n-1) in place of '4', and if
we put (n-1) in place of 4, then we have to put (n-2) in place of 3 and (n-3)
in place of 2, and so on.

T (n) = (n-1) T (1) + T (n-(n-1)) + (n-(n-2)) + (n-(n-3)) + ...... + (n-1) + n

T (n) = (n-1) T (1) + T (1) + 2 + 3 + 4 + ...... + n
T (n) = (n-1) T (1) + T (1) + 2 + 3 + 4 + ...... + n + 1 - 1

[Adding 1 and subtracting 1 for making AP series]

T (n) = (n-1) T (1) + T (1) + 1 + 2 + 3 + 4 + ...... + n - 1

T (n) = (n-1) T (1) + T (1) + n(n+1)/2 - 1

Stopping Condition: T (1) = 0

Because at last there is only one element left and no comparison is required.

T (n) = (n-1) (0) + 0 + n(n+1)/2 - 1

Worst Case Complexity of Quick Sort is T (n) = O (n2)

Randomized Quick Sort [Average Case]:


Generally, we assume the first element of the list to be the pivot element. In the average
case, every element is equally likely to be chosen as the pivot, so there are n equally likely choices for a list of n items.

1. Let the total time taken = T(n)
2. For example, in a given list p1, p2, p3, p4, ..., pn:
3. If p1 is the pivot, the two sub lists take T(0) and T(n-1).
4. If p2 is the pivot, the two sub lists take T(1) and T(n-2).
5. If p3 is the pivot, the two sub lists take T(2) and T(n-3).

So, in general, if we take the k-th element to be the pivot element, the two sub lists take T(k-1) and T(n-k) time.

The pivot element makes n+1 comparisons during partitioning, and since this is the average case we average over all equally likely pivot positions.

So the Relational Formula for Randomized Quick Sort is:

T(n) = (n+1) + 1/n [(T(0)+T(n-1)) + (T(1)+T(n-2)) + ... + (T(n-1)+T(0))]

T(n) = (n+1) + 2/n [T(0)+T(1)+T(2)+...+T(n-1)]

Multiplying both sides by n:

n T(n) = n(n+1) + 2 [T(0)+T(1)+T(2)+...+T(n-1)]          ...eq (1)

Put n = n-1 in eq (1):

(n-1) T(n-1) = (n-1)n + 2 [T(0)+T(1)+T(2)+...+T(n-2)]     ...eq (2)

Subtracting eq (2) from eq (1):

n T(n) - (n-1) T(n-1) = n(n+1) - n(n-1) + 2 T(n-1)
n T(n) = [2 + (n-1)] T(n-1) + 2n
n T(n) = (n+1) T(n-1) + 2n                                 ...eq (3)

Dividing both sides of eq (3) by n(n+1):

T(n)/(n+1) = T(n-1)/n + 2/(n+1)

Substituting n = n-1, n = n-2, n = n-3, ... in this relation and adding the resulting equations, the intermediate terms telescope away:

T(n)/(n+1) = T(1)/2 + 2 [1/3 + 1/4 + ... + 1/(n+1)] ≈ 2 log n   (with T(1) = 0)

So T(n) = O(n log n) is the average case complexity of quick sort for sorting n elements.


3. Quick Sort [Best Case]: In any sorting, the best case is the only case in which we don't
make any comparison between elements; that happens only when there is a single element
to sort. For quick sort, the best case occurs when every pivot splits the list into two nearly
equal halves, giving T(n) = O(n log n).
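The three cases above can be tied together with a short runnable sketch. This version follows the walkthrough in spirit (first element as pivot, smaller elements to the left, larger to the right, sorted sub lists placed side by side) but is not in-place; the function name quick_sort is illustrative:

```python
def quick_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr
    pivot = arr[0]  # first element as pivot, as in the walkthrough
    # Partition the remaining elements around the pivot
    smaller = [x for x in arr[1:] if x <= pivot]
    larger = [x for x in arr[1:] if x > pivot]
    # Sort each sub list by the same process, then place them side by side
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([44, 33, 11, 55, 77, 90, 40, 60, 99, 22, 88]))
```

On an already sorted input, every partition is maximally unbalanced, which is exactly the O(n²) worst case derived above; on random input the splits average out to O(n log n).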

Binary Search

Consider an array arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91}, and the target = 23.
First Step: Calculate the mid and compare the mid element with the key. If the key is less
than the mid element, move the search space to the left; if it is greater than the mid
element, move the search space to the right.
 Key (i.e., 23) is greater than the current mid element (i.e., 16). The search space moves to the right.

Binary Search Algorithm : Compare key with 16

 Key is less than the current mid 56. The search space moves to the left.
Binary Search Algorithm : Compare key with 56

Second Step: If the key matches the value of the mid element, the element is
found and stop search.

Binary Search Algorithm : Key matches with mid

How to Implement Binary Search?


The Binary Search Algorithm can be implemented in the following two ways
 Iterative Binary Search Algorithm
 Recursive Binary Search Algorithm
Given below are the pseudocodes for the approaches.
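Since the pseudocode itself is not reproduced here, both approaches can be sketched in Python as follows (function names are illustrative; each returns the index of the key, or -1 if it is absent, and assumes a sorted array):

```python
def binary_search_iter(arr, key):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == key:          # key matches the mid element
            return mid
        elif arr[mid] < key:         # search space moves to the right
            low = mid + 1
        else:                        # search space moves to the left
            high = mid - 1
    return -1

def binary_search_rec(arr, key, low, high):
    if low > high:
        return -1
    mid = (low + high) // 2
    if arr[mid] == key:
        return mid
    elif arr[mid] < key:
        return binary_search_rec(arr, key, mid + 1, high)
    else:
        return binary_search_rec(arr, key, low, mid - 1)

arr = [2, 5, 8, 12, 16, 23, 38, 56, 72, 91]
print(binary_search_iter(arr, 23))   # prints 5, the index of 23
```

Running the iterative version on the example array reproduces the walkthrough: mid lands on 16, then 56, then 23.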
Complexity Analysis of Binary Search:
 Time Complexity:
 Best Case: O(1)
 Average Case: O(log N)
 Worst Case: O(log N)
 Auxiliary Space: O(1), If the recursive call stack is considered then the
auxiliary space will be O(logN).
Advantages of Binary Search:
 Binary search is faster than linear search, especially for large arrays.
 More efficient than other searching algorithms with a similar time complexity,
such as interpolation search or exponential search.
 Binary search is well-suited for searching large datasets that are stored in
external memory, such as on a hard drive or in the cloud.
Drawbacks of Binary Search:
 The array should be sorted.
 Binary search requires that the data structure being searched be stored in
contiguous memory locations.
 Binary search requires that the elements of the array be comparable,
meaning that they must be able to be ordered.
Applications of Binary Search:
 Binary search can be used as a building block for more complex algorithms
used in machine learning, such as algorithms for training neural networks or
finding the optimal hyperparameters for a model.
 It can be used for searching in computer graphics such as algorithms for ray
tracing or texture mapping.
 It can be used for searching a database.

Dynamic programming - Fibonacci sequence Backtracking – Concepts only

What is Dynamic Programming:


Dynamic programming is a technique to solve recursive problems in a more
efficient manner. Many times in recursion we solve the same sub-problems repeatedly.
In dynamic programming we store the solutions of these sub-problems so that we
do not have to solve them again; this is called Memoization.
Dynamic programming and memoization work together, so most
problems are solved with two components of dynamic programming (DP):
 Recursion - Solve the sub-problems recursively
 Memoization - Store the solutions of these sub-problems so that we do
not have to solve them again
Example:
Fibonacci Series: each number is the sum of the previous two numbers. It can
be defined as

Fibonacci(n) = 0                                    for n = 0
             = 1                                    for n = 1
             = Fibonacci(n-1) + Fibonacci(n-2)      for n > 1

def fib_recur(x):
    if x == 0:
        return 0
    elif x == 1:
        return 1
    else:
        return fib_recur(x - 1) + fib_recur(x - 2)

if __name__ == "__main__":
    result = fib_recur(10)
    print(result)
Now, as you can see in the picture above, while calculating Fibonacci(4)
you need Fibonacci(3) and Fibonacci(2); for Fibonacci(3) you need Fibonacci(2)
and Fibonacci(1). But notice that Fibonacci(2) was already calculated while
computing Fibonacci(4) and is being calculated again. So we are solving many
sub-problems again and again.

Time Complexity:
T(n) = T(n-1) + T(n-2) + 1, which grows exponentially, roughly 2^n, so T(n) = O(2^n)
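Storing the sub-problem solutions removes this repeated work. A minimal memoized sketch (the name fib_memo and the dictionary-based cache are illustrative choices):

```python
def fib_memo(n, memo=None):
    # memo maps n -> Fibonacci(n), so each sub-problem is solved once
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]
    if n < 2:          # base cases: fib(0) = 0, fib(1) = 1
        return n
    memo[n] = fib_memo(n - 1, memo) + fib_memo(n - 2, memo)
    return memo[n]

print(fib_memo(10))   # prints 55, same as fib_recur(10)
```

Each value of n is computed only once, so the running time drops from O(2^n) for the plain recursion to O(n).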

A greedy algorithm is an approach for solving a problem by selecting the best
option available at the moment. It doesn't worry whether the current best
result will bring the overall optimal result.

The algorithm never reverses the earlier decision even if the choice is wrong.
It works in a top-down approach.

This algorithm may not produce the best result for all the problems. It's
because it always goes for the local best choice to produce the global best
result.
However, we can determine if the algorithm can be used with any problem if
the problem has the following properties:

1. Greedy Choice Property


If an optimal solution to the problem can be found by choosing the best choice
at each step without reconsidering the previous steps once chosen, the
problem can be solved using a greedy approach. This property is called
greedy choice property.

2. Optimal Substructure
If the optimal overall solution to the problem corresponds to the optimal
solution to its subproblems, then the problem can be solved using a greedy
approach. This property is called optimal substructure.

Advantages of Greedy Approach


 The algorithm is easier to describe.
 This algorithm can perform better than other algorithms (but, not in all
cases).

Drawback of Greedy Approach


As mentioned earlier, the greedy algorithm doesn't always produce the
optimal solution. This is the major disadvantage of the algorithm
For example, suppose we want to find the longest path in the graph below
from root to leaf. Let's use the greedy algorithm here.

Apply the greedy approach to this tree to find the longest route.

Greedy Approach
1. Let's start with the root node 20. The weight of the right child is 3 and the
weight of the left child is 2.
2. Our problem is to find the largest path. And, the optimal solution at the
moment is 3. So, the greedy algorithm will choose 3.
3. Finally, the only child of 3 has weight 1. This gives us our final result 20 + 3 + 1 = 24.

However, it is not the optimal solution. There is another path that carries more
weight ( 20 + 2 + 10 = 32 ) as shown in the image below.
Longest path

Therefore, greedy algorithms do not always give an optimal/feasible solution.
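The tree example can be reproduced in a few lines. Here each node is encoded as a (weight, children) tuple, an encoding chosen purely for illustration; greedy_path looks only one step ahead, while longest_path examines every root-to-leaf path:

```python
def greedy_path(node):
    weight, children = node
    if not children:
        return weight
    # greedy choice: pick the child that looks best right now
    best_child = max(children, key=lambda c: c[0])
    return weight + greedy_path(best_child)

def longest_path(node):
    weight, children = node
    if not children:
        return weight
    # exhaustive choice: consider every subtree
    return weight + max(longest_path(c) for c in children)

# root 20 with children 2 (leading to 10) and 3 (leading to 1)
tree = (20, [(2, [(10, [])]), (3, [(1, [])])])
print(greedy_path(tree))    # prints 24, the greedy result
print(longest_path(tree))   # prints 32, the true longest path
```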

Greedy Algorithm
1. To begin with, the solution set (containing answers) is empty.

2. At each step, an item is added to the solution set until a solution is reached.

3. If the solution set is feasible, the current item is kept.

4. Else, the item is rejected and never considered again.

Let's now use this algorithm to solve a problem.

Example - Greedy Approach


Problem: You have to make change for an amount using the smallest possible
number of coins.

Amount: $18

Available coins are

$5 coin

$2 coin

$1 coin

There is no limit to the number of each coin you can use.

Solution:
1. Create an empty solution-set = { } . Available coins are {5, 2, 1} .

2. We are supposed to find the sum = 18 . Let's start with sum = 0 .

3. Always select the coin with the largest value (i.e. 5) as long as the running sum does
not exceed 18. (When we select the largest value at each step, we hope to reach the
destination faster. This concept is called the greedy choice property.)
4. In the first iteration, solution-set = {5} and sum = 5 .

5. In the second iteration, solution-set = {5, 5} and sum = 10 .

6. In the third iteration, solution-set = {5, 5, 5} and sum = 15 .

7. In the fourth iteration, solution-set = {5, 5, 5, 2} and sum = 17 . (We cannot


select 5 here because if we do so, sum = 20 which is greater than 18. So, we
select the 2nd largest item which is 2.)
8. Similarly, in the fifth iteration, select 1. Now sum = 18 and solution-set = {5, 5,

5, 2, 1} .
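The iterations above can be condensed into a small function (the name greedy_change is illustrative; it assumes an unlimited supply of each coin):

```python
def greedy_change(amount, coins):
    solution = []
    # consider coins from largest to smallest (greedy choice property)
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            solution.append(coin)
            amount -= coin
    return solution

print(greedy_change(18, [5, 2, 1]))   # prints [5, 5, 5, 2, 1]
```

Note that for some coin systems this greedy strategy fails to find the minimum number of coins, e.g. amount 6 with coins {4, 3, 1} yields [4, 1, 1] instead of the optimal [3, 3], which is the drawback discussed above.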
Different Types of Greedy Algorithm
 Selection Sort
 Knapsack Problem
 Minimum Spanning Tree
 Single-Source Shortest Path Problem
 Job Scheduling Problem

 Prim's Minimal Spanning Tree Algorithm


 Kruskal's Minimal Spanning Tree Algorithm
 Dijkstra's Shortest Path Algorithm
 Huffman Coding
 Ford-Fulkerson Algorithm

Differences between the linear and non linear data structure


Linear Data Structure vs Non-Linear Data Structure:

 Overview: In a linear data structure the elements are joined to one another and arranged sequentially or linearly. In a non-linear data structure the elements are grouped hierarchically or non-linearly.
 Types: Linear data structures include arrays, linked lists, queues and stacks. Non-linear data structures are made up of trees and graphs.
 Implementation: Linear structures are simple to implement because of the linear organisation. Non-linear structures are challenging to implement because of the non-linear structure.
 Traversal: Because a linear data structure has only one level, traversing each data item requires only one run. A non-linear data structure's elements cannot be retrieved in a single run; it needs traversal over many runs.
 Arrangement: In a linear structure each data item is linked to the one before it and the one after it. In a non-linear structure each item is linked to several others.
 Memory Utilization: Memory usage is inefficient in linear structures. In non-linear structures memory is used to its full potential.
 Levels: There is no hierarchy in a linear data structure; all data items are grouped on a single level. A non-linear structure places data items at various levels.
 Time Complexity: With increasing input size, the time complexity of linear data structures increases, while the time complexity of non-linear data structures frequently stays the same.
 Applications: Linear data structures are mostly utilised in software development. Non-linear data structures are used in image processing and artificial intelligence.

What is Linked List in Python


A linked list is a type of linear data structure similar to arrays. It is a collection of
nodes that are linked with each other. A node contains two things first is data
and second is a link that connects it with another node. Below is an example of
a linked list with four nodes and each node contains character data and a link to
another node. Our first node is where head points and we can access all the
elements of the linked list using the head.

Linked List

Creating a linked list in Python


In this LinkedList class, we will use the Node class to create a linked list. The
class has an __init__ method that initializes the linked list with an empty
head. Next, the insertAtBegin() method inserts a node at the
beginning of the linked list, the insertAtIndex() method inserts a node at the
given index of the linked list, and the insertAtEnd() method inserts a node at the
end of the linked list. After that, the remove_node() method
takes data as an argument and deletes the matching node: it traverses the
linked list and, if a node holding that data is found, removes it from the list.
Then the sizeOfLL() method returns the current size of the linked list, and the last
method of the LinkedList class is printLL(), which traverses the linked list and
prints the data of each node.
Creating a Node Class
We have created a Node class in which we have defined a __init__ function to
initialize the node with the data passed as an argument and a reference with
None because if we have only one node then there is nothing in its reference.
 Python3

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

Insertion in Linked List


Insertion at Beginning in Linked List
This method inserts a node at the beginning of the linked list. We create
a new_node with the given data and check whether the head is empty: if so,
new_node becomes the head and we return; otherwise we point new_node.next
at the current head and make new_node the new head.
 Python3

def insertAtBegin(self, data):
    new_node = Node(data)
    if self.head is None:
        self.head = new_node
        return
    else:
        new_node.next = self.head
        self.head = new_node

Insert a Node at a Specific Position in a Linked List


This method inserts a node at the given index in the linked list. We create
a new_node with the given data, set current_node to the head, and initialize a
counter position to 0. If the index equals zero, the node is to be inserted at the
beginning, so we call the insertAtBegin() method. Otherwise we run a while loop
until current_node becomes None or (position + 1) equals the index, stopping one
position before the insertion point so the node links can be rewired; in each
iteration we increment position by 1 and advance current_node to its next node.
When the loop breaks, if current_node is not None we insert new_node right
after current_node. If current_node is None, the index is not present in the list
and we print "Index not present".
 Python3

# Indexing starts from 0.
def insertAtIndex(self, data, index):
    new_node = Node(data)
    current_node = self.head
    position = 0
    if position == index:
        self.insertAtBegin(data)
    else:
        while current_node is not None and position + 1 != index:
            position = position + 1
            current_node = current_node.next
        if current_node is not None:
            new_node.next = current_node.next
            current_node.next = new_node
        else:
            print("Index not present")

Insertion in Linked List at End


This method inserts a node at the end of the linked list. We create
a new_node with the given data and check whether the head is empty: if so,
new_node becomes the head and we return; otherwise we set current_node to
the head and traverse to the last node of the linked list. When current_node.next
is None the while loop breaks, and we attach new_node after current_node,
which is the last node of the linked list.
 Python3

def insertAtEnd(self, data):
    new_node = Node(data)
    if self.head is None:
        self.head = new_node
        return
    current_node = self.head
    while current_node.next:
        current_node = current_node.next
    current_node.next = new_node

Update the Node of a Linked List


This code defines a method called updateNode in a linked list class. It is used
to update the value of a node at a given position in the linked list.
 Python3

# Update node of a linked list
# at given position
def updateNode(self, val, index):
    current_node = self.head
    position = 0
    if position == index:
        current_node.data = val
    else:
        while current_node is not None and position != index:
            position = position + 1
            current_node = current_node.next
        if current_node is not None:
            current_node.data = val
        else:
            print("Index not present")

Delete Node in a Linked List


Remove First Node from Linked List
This method removes the first node of the linked list simply by making the
second node head of the linked list.
 Python3

def remove_first_node(self):
    if self.head is None:
        return
    self.head = self.head.next

Remove Last Node from Linked List


In this method, we will delete the last node. First, we traverse to the second-last
node using the while loop, and then we set that node's next to None, removing
the last node.
 Python3

def remove_last_node(self):
    if self.head is None:
        return
    # if there is only one node, the list becomes empty
    if self.head.next is None:
        self.head = None
        return
    current_node = self.head
    while current_node.next.next:
        current_node = current_node.next
    current_node.next = None

Delete a Linked List Node at a given Position


In this method, we will remove the node at the given index; it is similar to
the insertAtIndex() method. If the head is None we simply return; otherwise we
initialize a current_node with self.head and position with 0. If the position equals
the index we call the remove_first_node() method; otherwise we traverse to the
node just before the one we want to remove using the while loop. After leaving
the while loop, if current_node is not None we link current_node to the node
after the one being removed; otherwise we print the message "Index not
present" because current_node is equal to None.
 Python3

# Method to remove at given index
def remove_at_index(self, index):
    if self.head is None:
        return
    current_node = self.head
    position = 0
    if position == index:
        self.remove_first_node()
    else:
        while current_node is not None and position + 1 != index:
            position = position + 1
            current_node = current_node.next
        if current_node is not None:
            current_node.next = current_node.next.next
        else:
            print("Index not present")

Delete a Linked List Node of a given Data


This method removes the node with the given data from the linked list. First we
set current_node to the head and run a while loop to traverse the list. The loop
breaks when there is no next node left or when the data of the node after
current_node equals the data given in the argument. After leaving the loop, if no
matching node was found, the data is not present in the list and we simply
return; otherwise the node after current_node holds the given data, and we
remove it by linking current_node to the node after the removed one. This is
implemented using an if-else condition.
 Python3

def remove_node(self, data):
    current_node = self.head
    if current_node is None:
        return
    # Check if the head node contains the specified data
    if current_node.data == data:
        self.remove_first_node()
        return
    while current_node.next is not None and current_node.next.data != data:
        current_node = current_node.next
    if current_node.next is None:
        return
    else:
        current_node.next = current_node.next.next

Linked List Traversal in Python


This method traverses the linked list and prints the data of each node. We set
current_node to the head and iterate through the list using a while loop until
current_node becomes None, printing the data of current_node in each iteration
and then advancing to the next node.
 Python3

def printLL(self):
    current_node = self.head
    while current_node:
        print(current_node.data)
        current_node = current_node.next

Get Length of a Linked List in Python


This method returns the size of the linked list. We initialize a counter size with
0; if the head is not None we traverse the linked list using a while loop,
incrementing size by 1 in each iteration, and return size when current_node
becomes None. Otherwise we return 0.
 Python3

def sizeOfLL(self):
    size = 0
    if self.head:
        current_node = self.head
        while current_node:
            size = size + 1
            current_node = current_node.next
        return size
    else:
        return 0
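For reference, the pieces described above can be assembled into one compact, self-contained class. This is a condensed sketch showing only a subset of the methods, not a drop-in replacement for the full listing:

```python
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None

    def insertAtBegin(self, data):
        new_node = Node(data)
        new_node.next = self.head
        self.head = new_node

    def insertAtEnd(self, data):
        new_node = Node(data)
        if self.head is None:
            self.head = new_node
            return
        current = self.head
        while current.next:
            current = current.next
        current.next = new_node

    def remove_node(self, data):
        if self.head is None:
            return
        if self.head.data == data:
            self.head = self.head.next
            return
        current = self.head
        while current.next is not None and current.next.data != data:
            current = current.next
        if current.next is not None:
            current.next = current.next.next

    def sizeOfLL(self):
        size, current = 0, self.head
        while current:
            size += 1
            current = current.next
        return size

    def printLL(self):
        current = self.head
        while current:
            print(current.data)
            current = current.next

ll = LinkedList()
ll.insertAtEnd(2)
ll.insertAtEnd(3)
ll.insertAtBegin(1)
ll.remove_node(2)
ll.printLL()   # prints 1 then 3
```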

Types Of Linked List:


1. Singly Linked List
It is the simplest type of linked list in which every node contains some data and
a pointer to the next node of the same data type.

The node contains a pointer to the next node means that the node stores the
address of the next node in the sequence. A single linked list allows the
traversal of data only in one way. Below is the image for the same:
# Node of a singly linked list

class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

Creation and Traversal of Singly Linked List:

# structure of Node
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class LinkedList:
    def __init__(self):
        self.head = None
        self.last_node = None

    # function to add elements to linked list
    def append(self, data):
        # if linked list is empty then last_node will be None,
        # so the head will be assigned in the if condition
        if self.last_node is None:
            self.head = Node(data)
            self.last_node = self.head
        # adding node to the tail of linked list
        else:
            self.last_node.next = Node(data)
            self.last_node = self.last_node.next

    # function to print the content of linked list
    def display(self):
        current = self.head
        # traversing the linked list
        while current is not None:
            # at each node printing its data
            print(current.data, end=' ')
            # giving current next node
            current = current.next
        print()

# Driver code
if __name__ == '__main__':
    L = LinkedList()
    # adding elements to the linked list
    L.append(1)
    L.append(2)
    L.append(3)
    # displaying elements of linked list
    L.display()

Output
1 2 3
Time Complexity: O(N)
Auxiliary Space: O(N)
2. Doubly Linked List
A doubly linked list or a two-way linked list is a more complex type of linked list
that contains a pointer to the next as well as the previous node in sequence.
Therefore, it contains three parts of data, a pointer to the next node, and a
pointer to the previous node. This would enable us to traverse the list in the
backward direction as well. Below is the image for the same:

Structure of Doubly Linked List:

# structure of Node
class Node:
    def __init__(self, data):
        self.previous = None
        self.data = data
        self.next = None

Creation and Traversal of Doubly Linked List:

# Python3 program to illustrate
# creation and traversal of
# Doubly Linked List

# structure of Node
class Node:
    def __init__(self, data):
        self.previous = None
        self.data = data
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.start_node = None
        self.last_node = None

    # function to add elements to doubly linked list
    def append(self, data):
        # if doubly linked list is empty then last_node will be None,
        # so the head will be assigned in the if condition
        if self.last_node is None:
            self.head = Node(data)
            self.last_node = self.head
        # adding node to the tail of doubly linked list
        else:
            new_node = Node(data)
            self.last_node.next = new_node
            new_node.previous = self.last_node
            new_node.next = None
            self.last_node = new_node

    # function to print and traverse the content of the
    # doubly linked list from left to right and right to left
    def display(self, Type):
        if Type == 'Left_To_Right':
            current = self.head
            while current is not None:
                print(current.data, end=' ')
                current = current.next
            print()
        else:
            current = self.last_node
            while current is not None:
                print(current.data, end=' ')
                current = current.previous
            print()

# Driver code
if __name__ == '__main__':
    L = DoublyLinkedList()
    L.append(1)
    L.append(2)
    L.append(3)
    L.append(4)
    L.display('Left_To_Right')
    L.display('Right_To_Left')

Output
1 2 3 4
4 3 2 1
Time Complexity:
The time complexity of the append() function is O(1) as it performs constant-time
operations to insert a new node at the tail of the doubly linked list. The
time complexity of the display() function is O(n), where n is the number of
nodes in the doubly linked list, because it traverses the entire list once in
each direction. Therefore, the overall time complexity of the program is O(n).
Space Complexity:
The space complexity of the program is O(n) as it uses a doubly linked list to
store the data, which requires n nodes. Additionally, append() uses a constant
amount of auxiliary space to create a new node. Therefore, the overall space
complexity of the program is O(n).
3. Circular Linked List
A circular linked list is that in which the last node contains the pointer to the first
node of the list.
While traversing a circular linked list, we can begin at any node and traverse the
list in any direction forward and backward until we reach the same node we
started. Thus, a circular linked list has no beginning and no end. Below is the
image for the same:
Below is the structure of the Circular Linked List:

# structure of Node
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

Creation and Traversal of Circular Linked List:

# Python3 program to illustrate
# creation and traversal of
# Circular LL

# structure of Node
class Node:
    def __init__(self, data):
        self.data = data
        self.next = None

class CircularLinkedList:
    def __init__(self):
        self.head = None
        self.last_node = None

    # function to add elements to circular linked list
    def append(self, data):
        # if circular linked list is empty then last_node will be None,
        # so the head will be assigned in the if condition
        if self.last_node is None:
            self.head = Node(data)
            self.last_node = self.head
        # adding node to the tail of circular linked list
        else:
            self.last_node.next = Node(data)
            self.last_node = self.last_node.next
        # link the tail back to the head to keep the list circular
        self.last_node.next = self.head

    # function to print the content of circular linked list
    def display(self):
        current = self.head
        while current is not None:
            print(current.data, end=' ')
            current = current.next
            if current == self.head:
                break
        print()

# Driver code
if __name__ == '__main__':
    L = CircularLinkedList()
    L.append(12)
    L.append(56)
    L.append(2)
    L.append(11)
    # Function call
    L.display()

Output
12 56 2 11
Time Complexity:
Insertion into the circular linked list takes O(1) time complexity, since the
last node is tracked directly.
Traversing and printing all nodes in the circular linked list takes O(n) time
complexity, where n is the number of nodes in the linked list.
Therefore, the overall time complexity of the program is O(n).
Auxiliary Space:
The space required by the program depends on the number of nodes in the
circular linked list.
In the worst-case scenario, when there are n nodes, the space complexity of
the program will be O(n) as n new nodes will be created to store the data.
Additionally, some extra space is required for the temporary variables and the
function calls.
Therefore, the auxiliary space complexity of the program is O(n).
4. Doubly Circular linked list
A Doubly Circular linked list or a circular two-way linked list is a more complex
type of linked list that contains a pointer to the next as well as the previous node
in the sequence. The difference between the doubly linked and circular doubly
list is the same as that between a singly linked list and a circular linked list. The
circular doubly linked list does not contain null in the previous field of the first
node. Below is the image for the same:
Below is the structure of the Doubly Circular Linked List:

# structure of Node
class Node:
    def __init__(self, data):
        self.previous = None
        self.data = data
        self.next = None

Creation and Traversal of Doubly Circular Linked List:

# Python3 program to illustrate creation
# & traversal of Doubly Circular LL

# structure of Node
class Node:
    def __init__(self, data):
        self.previous = None
        self.data = data
        self.next = None

class DoublyLinkedList:
    def __init__(self):
        self.head = None
        self.start_node = None
        self.last_node = None

    # function to add elements to doubly circular linked list
    def append(self, data):
        # if the list is empty then last_node will be None,
        # so the head will be assigned in the if condition
        if self.last_node is None:
            self.head = Node(data)
            self.last_node = self.head
        # adding node to the tail of the doubly circular linked list
        else:
            new_node = Node(data)
            self.last_node.next = new_node
            new_node.previous = self.last_node
            new_node.next = self.head
            self.head.previous = new_node
            self.last_node = new_node

    # function to print the content of the doubly circular linked list
    def display(self, Type='Left_To_Right'):
        if Type == 'Left_To_Right':
            current = self.head
            while current.next is not None:
                print(current.data, end=' ')
                current = current.next
                if current == self.head:
                    break
            print()
        else:
            current = self.last_node
            while current.previous is not None:
                print(current.data, end=' ')
                current = current.previous
                if current == self.last_node.next:
                    print(self.last_node.next.data, end=' ')
                    break
            print()

if __name__ == '__main__':
    L = DoublyLinkedList()
    L.append(1)
    L.append(2)
    L.append(3)
    L.append(4)
    L.display('Left_To_Right')
    L.display('Right_To_Left')

Output
1 2 3 4
4 3 2 1
Time Complexity:
Insertion into a doubly circular linked list takes O(1) time complexity, since
the last node is tracked directly.
Traversing the entire doubly circular linked list takes O(n) time complexity,
where n is the number of nodes in the linked list.
Therefore, the overall time complexity of the program is O(n).
Auxiliary space:
The program uses a constant amount of auxiliary space, i.e., O(1), to create
and traverse the doubly circular linked list. The space required to store the
linked list grows linearly with the number of nodes in the linked list.
Therefore, the overall auxiliary space complexity of the program is O(1).

Singly linked list (SLL) vs Doubly linked list (DLL):

 SLL nodes contain 2 fields: a data field and a next link field. DLL nodes contain 3 fields: a data field, a previous link field and a next link field.
 In SLL, traversal can be done using the next node link only, so traversal is possible in one direction only. In DLL, traversal can be done using either the previous node link or the next node link, so traversal is possible in both directions (forward and backward).
 The SLL occupies less memory than the DLL, as it has only 2 fields; the DLL occupies more memory, as it has 3 fields.
 Complexity of insertion and deletion at a given position is O(n) for SLL. For DLL it is O(n/2) = O(n), because traversal can be made from the start or from the end.
 Complexity of deletion with a given node is O(n) for SLL, because the previous node needs to be known and traversal takes O(n). For DLL it is O(1), because the previous node can be accessed easily.
 We mostly prefer a singly linked list for the execution of stacks. A doubly linked list can be used to execute heaps and stacks, and binary trees.
 When we do not need to perform any searching operation and want to save memory, we prefer a singly linked list, as only a single next pointer is stored per node. When memory is not a problem and we need better performance while searching, we prefer a doubly linked list, which is more efficient for that purpose.
Given an expression string, write a python program to find whether a given
string has balanced parentheses or not.
Examples:
Input : {[]{()}}
Output : Balanced

Input : [{}{}(]
Output : Unbalanced
Approach #1: Using a stack. One approach to check balanced parentheses is to
use a stack. Each time an open parenthesis is encountered, push it onto the
stack; when a closed parenthesis is encountered, match it with the top of the
stack and pop. If the stack is empty at the end, return Balanced; otherwise,
Unbalanced.
# Python3 code to check for
# balanced parentheses in an expression
open_list = ["[", "{", "("]
close_list = ["]", "}", ")"]

# Function to check parentheses
def check(myStr):
    stack = []
    for i in myStr:
        if i in open_list:
            stack.append(i)
        elif i in close_list:
            pos = close_list.index(i)
            if ((len(stack) > 0) and
                    (open_list[pos] == stack[len(stack) - 1])):
                stack.pop()
            else:
                return "Unbalanced"
    if len(stack) == 0:
        return "Balanced"
    else:
        return "Unbalanced"

# Driver code
string = "{[]{()}}"
print(string,"-", check(string))

string = "[{}{})(]"
print(string,"-", check(string))

string = "((()"
print(string,"-",check(string))

Output:
{[]{()}} - Balanced
[{}{})(] - Unbalanced
((() - Unbalanced
Time Complexity: O(n), The time complexity of this algorithm is O(n), where n
is the length of the string. This is because we are iterating through the string
and performing constant time operations on the stack.
Auxiliary Space: O(n), The space complexity of this algorithm is O(n) as well,
since we are storing the contents of the string in a stack, which can grow up to
the size of the string.
Evaluation of postfix expression using stack in Python
Unlike an infix expression, a postfix expression has no parentheses; it contains only two kinds of characters, operators and operands. Using a stack, we can easily evaluate a postfix expression, and there are only two scenarios. We scan the string from left to right. If we encounter an operator while scanning, we pop two elements from the stack, perform the operation of the current operator, and push the result back onto the stack. If we encounter an operand, we simply push it onto the stack.

Steps for evaluating a postfix expression:

 1. Accept the postfix expression string in the variable post.
 2. For each character i in post:
     If i is an operand, push i onto the stack.
     Otherwise, pop two elements from the stack, say X (the top) and then Y, perform the current operator's operation on both operands (i.e. Y i X), and push the result back onto the stack.
 3. After the loop ends, pop the final result from the stack.
Evaluate the postfix expression 234+*6-

Accept the above string in the variable exp and scan it character by character with for i in exp. Tracing the algorithm: push 2, 3 and 4; on '+', pop 4 and 3 and push 3 + 4 = 7; on '*', pop 7 and 2 and push 2 * 7 = 14; push 6; on '-', pop 6 and 14 and push 14 - 6 = 8, the final result.

Evaluate postfix expression using stack in Python
class evaluate_postfix:

    def __init__(self):
        self.items = []
        self.size = -1

    def isEmpty(self):
        return self.items == []

    def push(self, item):
        self.items.append(item)
        self.size += 1

    def pop(self):
        if self.isEmpty():
            return 0
        else:
            self.size -= 1
            return self.items.pop()

    def seek(self):
        # Return the top element without removing it
        if self.isEmpty():
            return False
        else:
            return self.items[self.size]

    def evalute(self, expr):
        for i in expr:
            if i in '0123456789':
                self.push(i)
            else:
                op1 = self.pop()
                op2 = self.pop()
                result = self.cal(op2, op1, i)
                self.push(result)
        return self.pop()

    def cal(self, op2, op1, i):
        # strings must be compared with '==', not 'is'
        if i == '*':
            return int(op2) * int(op1)
        elif i == '/':
            return int(op2) / int(op1)
        elif i == '+':
            return int(op2) + int(op1)
        elif i == '-':
            return int(op2) - int(op1)

s = evaluate_postfix()
expr = input('enter the postfix expression')
value = s.evalute(expr)
print('the result of postfix expression', expr, 'is', value)

OUTPUT
>>> =================== RESTART ===============================

enter the postfix expression 234+*6-

the result of postfix expression 234+*6- is 8

Recursion
Recursion is a process in which a function calls itself directly or indirectly.
Advantages of using recursion
 A complicated function can be split down into smaller sub-problems utilizing
recursion.
 Sequence creation is simpler through recursion than utilizing any nested
iteration.
 Recursive functions render the code look simple and effective.
Disadvantages of using recursion
 A lot of memory and time is taken through recursive calls which makes it
expensive for use.
 Recursive functions are challenging to debug.
 The reasoning behind recursion can sometimes be tough to think through.
Syntax:
def func(): <--
|
| (recursive call)
|
func() ----

# Program to print the Fibonacci series up to n_terms

# Recursive function
def recursive_fibonacci(n):
    if n <= 1:
        return n
    else:
        return recursive_fibonacci(n - 1) + recursive_fibonacci(n - 2)

n_terms = 10

# check if the number of terms is valid
if n_terms <= 0:
    print("Invalid input ! Please input a positive value")
else:
    print("Fibonacci series:")
    for i in range(n_terms):
        print(recursive_fibonacci(i))

The factorial of 6 is denoted as 6! = 1*2*3*4*5*6 = 720.

# Program to print factorial of a number
# recursively.

# Recursive function
def recursive_factorial(n):
    if n == 1:
        return n
    else:
        return n * recursive_factorial(n - 1)

# user input
num = 6

# check if the input is valid or not
if num < 0:
    print("Invalid input ! Please enter a positive number.")
elif num == 0:
    print("Factorial of number 0 is 1")
else:
    print("Factorial of number", num, "=", recursive_factorial(num))
What is Tail-Recursion?
A unique type of recursion where the last procedure of a function is a recursive
call. The recursion may be automated away by performing the request in the
current stack frame and returning the output instead of generating a new stack
frame. The tail-recursion may be optimized by the compiler which makes it
better than non-tail recursive functions.
Is it possible to optimize a program by making use of a tail-recursive
function instead of non-tail recursive function?
Considering the function given below, which calculates the factorial of n, we can observe that although the function looks tail-recursive at first, it is actually a non-tail-recursive function. If we observe closely, we can see that the value returned by Recur_facto(n-1) is used in Recur_facto(n) (it is multiplied by n), so the call to Recur_facto(n-1) is not the last thing done by Recur_facto(n).

# Program to calculate factorial of a number
# using a non-tail-recursive function.

# non-tail recursive function
def Recur_facto(n):
    if n == 0:
        return 1
    return n * Recur_facto(n - 1)

# print the result
print(Recur_facto(6))
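For comparison, the same factorial can be written in tail-recursive form by carrying the running product in an accumulator argument, so that the recursive call is the very last operation. The name Tail_facto and the accumulator parameter are our own additions; note that CPython does not actually perform tail-call optimization, so this version still uses one stack frame per call.

```python
# Tail-recursive factorial: the recursive call is the last thing done,
# and the running product is carried in the accumulator 'acc'.
def Tail_facto(n, acc=1):
    if n == 0:
        return acc
    return Tail_facto(n - 1, acc * n)

print(Tail_facto(6))  # 720
```

In a language whose compiler performs tail-call optimization, this form would run in constant stack space.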

Tower of Hanoi is a mathematical puzzle where we have three rods and n disks.
The objective of the puzzle is to move the entire stack to another rod, obeying
the following simple rules:
1) Only one disk can be moved at a time.
2) Each move consists of taking the upper disk from one of the stacks and
placing it on top of another stack i.e. a disk can only be moved if it is the
uppermost disk on a stack.
3) No disk may be placed on top of a smaller disk.
Note: Transferring the top n-1 disks from source rod to Auxiliary rod can again
be thought of as a fresh problem and can be solved in the same manner.


# Recursive Python function to solve the Tower of Hanoi
def TowerOfHanoi(n, source, destination, auxiliary):
    if n == 1:
        print("Move disk 1 from source", source, "to destination", destination)
        return
    TowerOfHanoi(n - 1, source, auxiliary, destination)
    print("Move disk", n, "from source", source, "to destination", destination)
    TowerOfHanoi(n - 1, auxiliary, destination, source)

# Driver code
n = 4
TowerOfHanoi(n, 'A', 'B', 'C')

# A, B, C are the names of the rods

# Contributed By Dilip Jain

Output
Move disk 1 from source A to destination C
Move disk 2 from source A to destination B
Move disk 1 from source C to destination B
Move disk 3 from source A to destination C
Move disk 1 from source B to destination A
Move disk 2 from source B to destination C
Move disk 1 from source A to destination C
Move disk 4 from source A to destination B
Move disk 1 from source C to destination B
Move disk 2 from source C to destination A
Move disk 1 from source B to destination A
Move disk 3 from source C to destination B
Move disk 1 from source A to destination C
Move disk 2 from source A to destination B
Move disk 1 from source C to destination B
Time Complexity: O(2^n)
Auxiliary Space: O(n)

A queue is a linear data structure that stores items in a First In First Out
(FIFO) manner. With a queue, the least recently added item is removed first. A
good example of a queue is any queue of consumers for a resource where the
consumer that came first is served first.

Operations associated with queue are:


 Enqueue: Adds an item to the queue. If the queue is full, then it is said to be
an Overflow condition – Time Complexity : O(1)
 Dequeue: Removes an item from the queue. The items are popped in the
same order in which they are pushed. If the queue is empty, then it is said to
be an Underflow condition – Time Complexity : O(1)
 Front: Get the front item from queue – Time Complexity : O(1)
 Rear: Get the last item from queue – Time Complexity : O(1)
Implement a Queue in Python
There are various ways to implement a queue in Python. This article covers the
implementation of queue using data structures and modules from Python
library. Python Queue can be implemented by the following ways:
 list
 collections.deque
 queue.Queue
Implementation using list
A list is Python's built-in data structure that can be used as a queue. Instead of
enqueue() and dequeue(), append() and pop(0) are used. However, lists
are quite slow for this purpose because inserting or deleting an element at the
beginning requires shifting all of the other elements by one, requiring O(n) time.
The code simulates a queue using a Python list. It adds elements ‘a’, ‘b’,
and ‘c’ to the queue and then dequeues them, resulting in an empty queue at
the end. The output shows the initial queue, elements dequeued (‘a’, ‘b’, ‘c’),
and the queue’s empty state.

Priority Queue in Python

A priority queue is a special type of queue in the data structure. As the name suggests, it sorts the elements and dequeues them based on their priorities.

Unlike a normal queue, it retrieves the highest-priority element instead of the next element. The priority of individual elements is decided by the ordering applied to their keys.

Priority queues are most beneficial for handling scheduling problems where some tasks happen based on priority.

For example, an operating system task scheduler is a good example of a priority queue: it gives high-priority tasks precedence over lower-priority tasks (such as downloading updates in the background), allowing the highest-priority tasks to run first.
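A priority queue can be sketched with Python's standard heapq module, which maintains a min-heap, so the smallest priority number is dequeued first. The (priority, task) tuples and the task names below are illustrative, echoing the scheduling example above:

```python
import heapq

# Min-heap based priority queue: lowest priority number = highest priority.
tasks = []
heapq.heappush(tasks, (3, "download updates"))
heapq.heappush(tasks, (1, "handle interrupt"))
heapq.heappush(tasks, (2, "run scheduler"))

# Dequeue in priority order; tuples compare by their first element
order = []
while tasks:
    priority, task = heapq.heappop(tasks)
    order.append(task)
    print(priority, task)
```

Both heappush and heappop run in O(log n), which is why heaps are the usual backing structure for priority queues; Python's queue.PriorityQueue wraps the same idea in a thread-safe class.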
