
DATA STRUCTURES AND ALGORITHMS:

INTERVIEW PREPARATION (PYTHON)

February 19, 2022

Contents

1 BIG O NOTATION
2 DATA STRUCTURES: LINKED LISTS
3 DATA STRUCTURES: DOUBLY LINKED LISTS
4 DATA STRUCTURES: STACKS/QUEUES
5 DATA STRUCTURES: TREES
6 DATA STRUCTURES: HASH TABLES
7 DATA STRUCTURES: GRAPHS
8 ALGORITHMS: RECURSIONS
9 ALGORITHMS: SORTING
1 BIG O NOTATION
Big O Notation is a mathematical notation used in computer science to describe an
algorithm’s complexity. It is usually a measure of the runtime required for an
algorithm’s execution.

The time complexity of an algorithm quantifies the amount of time taken by an
algorithm to run as a function of the length of the input. Similarly, the space
complexity of an algorithm quantifies the amount of space or memory taken by an
algorithm to run as a function of the length of the input.

What Causes Time in a Function?

1. Operations
2. Comparisons
3. Loops
4. Outside Function Calls

What Causes Space Complexity?

1. Variables
2. Data Structures
3. Function Calls
4. Allocations

Rule Book:

1. Always assume the worst case.
2. Remove constants.
3. Different inputs should have different variables: two separate arrays A and B
give O(a + b); iterating over them nested gives O(a * b).
4. Drop non-dominant terms.

Algorithm Performance — Big-O Notation

O(1): Called “O of 1” or Constant Time. The operation in question doesn’t depend
on the number of elements in the given data set. No loops.

O(log n): Called “O of log n” or Logarithmic Time. As the number of items in the
sorted array grows, it only takes a logarithmic amount of additional time to find
any given item. Search algorithms over sorted data are usually O(log n).

O(n): Called “O of n” or Linear Time. As more items are added to the array in
an unsorted fashion, it takes a correspondingly linear amount of time to perform a
search. For loops and while loops through n items.

O(n log(n)): Log linear. Usually sorting operations.

O(n²): Called “O of n squared” or Quadratic Time. As the number of items in the
data set increases, the time it takes to process them increases with the square of
that number. Every element in a collection needs to be compared to every other
element. Two nested loops.

O(2ⁿ): Exponential. Typical of recursive algorithms that solve a problem of size n
by solving two subproblems of size n-1.

O(n!): Factorial. You are adding a loop for every element.
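As a quick illustration (not from the original text), here are minimal Python functions exhibiting three of these classes; the function names are invented for the example:

```python
def get_first(items):
    # O(1): a single operation, regardless of input size
    return items[0]

def linear_search(items, target):
    # O(n): may have to touch every element once
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def has_duplicates(items):
    # O(n²): every element compared against every other element
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

print(get_first([4, 2, 6]))           # 4
print(linear_search([4, 2, 6], 6))    # 2
print(has_duplicates([4, 2, 6, 2]))   # True
```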

Figure 1: Big O complexities for common data structure operations.

Figure 2: Big O complexities for common sorting algorithms.

Figure 3: A graph comparing the growth of common complexity classes.
2 DATA STRUCTURES: LINKED LISTS
A linear collection of elements ordered by links instead of physical placement in mem-
ory.

Each element is called a node:

- The first node is called the head.

- The last node is called the tail.

Nodes are sequential. Each node stores a reference (pointer) to one or more
adjacent nodes:

- In a singly linked list, each node stores a reference to the next node.

- In a doubly linked list, each node stores references to both the next and
the previous nodes. This enables traversing a list backwards.

- In a circularly linked list, the tail stores a reference to the head.

Create a New Node:

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

Linked List:

class LinkedList:
    def __init__(self, value):
        new_node = Node(value)
        self.head = new_node
        self.tail = new_node
        self.length = 1

Append an Item to the End of the Linked List:

def append(self, value):
    new_node = Node(value)
    if self.length == 0:
        self.head = new_node
        self.tail = new_node
    else:
        self.tail.next = new_node
        self.tail = new_node
    self.length += 1
    return True

Pop the Last Item in the Linked List:

def pop(self):
    if self.length == 0:
        return None
    temp = self.head
    pre = self.head
    while temp.next:
        pre = temp
        temp = temp.next
    self.tail = pre
    self.tail.next = None
    self.length -= 1
    if self.length == 0:
        self.head = None
        self.tail = None
    return temp

Prepend an Item in the Linked List:

def prepend(self, value):
    new_node = Node(value)
    if self.length == 0:
        self.head = new_node
        self.tail = new_node
    else:
        new_node.next = self.head
        self.head = new_node
    self.length += 1
    return True

Pop the First Item in the Linked List:

def pop_first(self):
    if self.length == 0:
        return None
    temp = self.head
    self.head = self.head.next
    temp.next = None
    self.length -= 1
    if self.length == 0:
        self.tail = None
    return temp

Get Item in the Linked List:

def get(self, index):
    if index < 0 or index >= self.length:
        return None
    temp = self.head
    for _ in range(index):
        temp = temp.next
    return temp

Set Value of Item in the Linked List:

def set_value(self, index, value):
    temp = self.get(index)
    if temp:
        temp.value = value
        return True
    return False

Insert Item in the Linked List:

def insert(self, index, value):
    if index < 0 or index > self.length:
        return False
    if index == 0:
        return self.prepend(value)
    if index == self.length:
        return self.append(value)
    new_node = Node(value)
    temp = self.get(index - 1)
    new_node.next = temp.next
    temp.next = new_node
    self.length += 1
    return True

Remove Item in the Linked List:

def remove(self, index):
    if index < 0 or index >= self.length:
        return None
    if index == 0:
        return self.pop_first()
    if index == self.length - 1:
        return self.pop()
    pre = self.get(index - 1)
    temp = pre.next
    pre.next = temp.next
    temp.next = None
    self.length -= 1
    return temp

Reverse the Linked List:

def reverse(self):
    temp = self.head
    self.head = self.tail
    self.tail = temp
    after = temp.next
    before = None
    for _ in range(self.length):
        after = temp.next
        temp.next = before
        before = temp
        temp = after
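As a sketch of how the pieces above fit together, here is a compact restatement of the Node and LinkedList snippets (append and reverse only) with a small usage demo; the to_list helper is an invented convenience for inspecting the list:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

class LinkedList:
    def __init__(self, value):
        new_node = Node(value)
        self.head = new_node
        self.tail = new_node
        self.length = 1

    def append(self, value):
        new_node = Node(value)
        self.tail.next = new_node
        self.tail = new_node
        self.length += 1
        return True

    def reverse(self):
        temp = self.head
        self.head = self.tail
        self.tail = temp
        after = temp.next
        before = None
        for _ in range(self.length):
            after = temp.next
            temp.next = before
            before = temp
            temp = after

    def to_list(self):
        # invented helper: walk the chain and collect values
        out, temp = [], self.head
        while temp:
            out.append(temp.value)
            temp = temp.next
        return out

ll = LinkedList(1)
ll.append(2)
ll.append(3)
print(ll.to_list())   # [1, 2, 3]
ll.reverse()
print(ll.to_list())   # [3, 2, 1]
```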

3 DATA STRUCTURES: DOUBLY LINKED LISTS
A doubly linked list is a type of linked list in which each node, apart from storing
its data, has two links. The first link points to the previous node in the list and
the second link points to the next node. The first node’s previous link points to
None; similarly, the last node’s next link points to None.

Create a New Node:

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

Doubly Linked List:

class DoublyLinkedList:
    def __init__(self, value):
        new_node = Node(value)
        self.head = new_node
        self.tail = new_node
        self.length = 1

Append an Item to the End of the Linked List:

def append(self, value):
    new_node = Node(value)
    if self.head is None:
        self.head = new_node
        self.tail = new_node
    else:
        self.tail.next = new_node
        new_node.prev = self.tail
        self.tail = new_node
    self.length += 1
    return True

Pop the Last Item in the Linked List:

def pop(self):
    if self.length == 0:
        return None
    temp = self.tail
    if self.length == 1:
        self.head = None
        self.tail = None
    else:
        self.tail = self.tail.prev
        self.tail.next = None
        temp.prev = None
    self.length -= 1
    return temp
Prepend an Item in the Linked List:

def prepend(self, value):
    new_node = Node(value)
    if self.length == 0:
        self.head = new_node
        self.tail = new_node
    else:
        new_node.next = self.head
        self.head.prev = new_node
        self.head = new_node
    self.length += 1
    return True

Pop the First Item in the Linked List:

def pop_first(self):
    if self.length == 0:
        return None
    temp = self.head
    if self.length == 1:
        self.head = None
        self.tail = None
    else:
        self.head = self.head.next
        self.head.prev = None
        temp.next = None
    self.length -= 1
    return temp

Get Item in the Linked List:

def get(self, index):
    if index < 0 or index >= self.length:
        return None
    temp = self.head
    if index < self.length/2:
        for _ in range(index):
            temp = temp.next
    else:
        temp = self.tail
        for _ in range(self.length - 1, index, -1):
            temp = temp.prev
    return temp

Set Value of Item in the Linked List:

def set_value(self, index, value):
    temp = self.get(index)
    if temp:
        temp.value = value
        return True
    return False

Insert Item in the Linked List:

def insert(self, index, value):
    if index < 0 or index > self.length:
        return False
    if index == 0:
        return self.prepend(value)
    if index == self.length:
        return self.append(value)

    new_node = Node(value)
    before = self.get(index - 1)
    after = before.next

    new_node.prev = before
    new_node.next = after
    before.next = new_node
    after.prev = new_node

    self.length += 1
    return True

Remove Item in the Linked List:

def remove(self, index):
    if index < 0 or index >= self.length:
        return None
    if index == 0:
        return self.pop_first()
    if index == self.length - 1:
        return self.pop()

    temp = self.get(index)

    temp.next.prev = temp.prev
    temp.prev.next = temp.next
    temp.next = None
    temp.prev = None

    self.length -= 1
    return temp
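As with the singly linked list, here is a compact restatement of the doubly linked list pieces (append and pop only) with a small usage demo; the to_list helper is an invented convenience for inspecting the list:

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.next = None
        self.prev = None

class DoublyLinkedList:
    def __init__(self, value):
        new_node = Node(value)
        self.head = new_node
        self.tail = new_node
        self.length = 1

    def append(self, value):
        new_node = Node(value)
        if self.head is None:
            self.head = new_node
            self.tail = new_node
        else:
            self.tail.next = new_node
            new_node.prev = self.tail
            self.tail = new_node
        self.length += 1
        return True

    def pop(self):
        if self.length == 0:
            return None
        temp = self.tail
        if self.length == 1:
            self.head = None
            self.tail = None
        else:
            self.tail = self.tail.prev
            self.tail.next = None
            temp.prev = None
        self.length -= 1
        return temp

    def to_list(self):
        # invented helper: walk the chain forwards and collect values
        out, temp = [], self.head
        while temp:
            out.append(temp.value)
            temp = temp.next
        return out

dll = DoublyLinkedList(1)
dll.append(2)
dll.append(3)
print(dll.to_list())     # [1, 2, 3]
print(dll.pop().value)   # 3
print(dll.to_list())     # [1, 2]
```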

4 DATA STRUCTURES: STACKS/QUEUES
Stacks, like the name suggests, follow the Last-in-First-Out (LIFO) principle. As if
stacking coins one on top of the other, the last coin we put on top is the first one
to be removed from the stack later.

Queues, like the name suggests, follow the First-in-First-Out (FIFO) principle. As if
waiting in a queue for the movie tickets, the first one to stand in line is the first one
to buy a ticket and enjoy the movie.
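As an aside (a standard-library sketch, not from the original text): Python’s built-in list and collections.deque already behave as a stack and a queue, which is worth knowing before building them from nodes:

```python
from collections import deque

stack = []              # a list used as a stack: LIFO
stack.append('a')       # push
stack.append('b')
print(stack.pop())      # 'b' — last in, first out

queue = deque()         # a deque used as a queue: FIFO
queue.append('a')       # enqueue
queue.append('b')
print(queue.popleft())  # 'a' — first in, first out
```

deque is preferred over a plain list for queues because popleft() is O(1), while list.pop(0) shifts every remaining element and is O(n).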

Create a Node:

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

Class Stack:

class Stack:
    def __init__(self, value):
        new_node = Node(value)
        self.top = new_node
        self.height = 1

Adds an Element to the Top of the Stack:

def push(self, value):
    new_node = Node(value)
    if self.height == 0:
        self.top = new_node
    else:
        new_node.next = self.top
        self.top = new_node
    self.height += 1
    return True

Removes the Element at the Top of the Stack:

def pop(self):
    if self.height == 0:
        return None
    temp = self.top
    self.top = self.top.next
    temp.next = None
    self.height -= 1
    return temp

Create a Node:

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

Class Queue:

class Queue:
    def __init__(self, value):
        new_node = Node(value)
        self.first = new_node
        self.last = new_node
        self.length = 1

Adds an Element to the End of the Queue:

def enqueue(self, value):
    new_node = Node(value)
    if self.first is None:
        self.first = new_node
        self.last = new_node
    else:
        self.last.next = new_node
        self.last = new_node
    self.length += 1
    return True

Removes the Element at the Beginning of the Queue:

def dequeue(self):
    if self.length == 0:
        return None
    temp = self.first
    if self.length == 1:
        self.first = None
        self.last = None
    else:
        self.first = self.first.next
        temp.next = None
    self.length -= 1
    return temp

5 DATA STRUCTURES: TREES
A Tree is a combination of nodes (also known as vertices) and edges. A tree can
have any number of nodes and edges. A node is where we store the data, and an edge
is a path between two nodes. There are various types of trees available, like the
binary tree, ternary tree, binary search tree, AVL tree, etc.

Types of Nodes in a Tree:

Parent Node: A node that has one or more children.
Child Node: A node that has a parent node.
Leaf Node: A node that doesn’t have any children.

Create a Node:

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

Class Binary Search Tree:

class BinarySearchTree:
    def __init__(self):
        self.root = None

Inserts a Node into the Binary Tree:

def insert(self, value):
    new_node = Node(value)
    if self.root is None:
        self.root = new_node
        return True
    temp = self.root
    while True:
        if new_node.value == temp.value:
            return False
        if new_node.value < temp.value:
            if temp.left is None:
                temp.left = new_node
                return True
            temp = temp.left
        else:
            if temp.right is None:
                temp.right = new_node
                return True
            temp = temp.right

Checks if a Node is in the Binary Tree:

def contains(self, value):
    temp = self.root
    while temp is not None:
        if value < temp.value:
            temp = temp.left
        elif value > temp.value:
            temp = temp.right
        else:
            return True
    return False

6 DATA STRUCTURES: HASH TABLES
Hash tables are a type of data structure in which the address or index of a data
element is generated from a hash function. That makes accessing the data faster, as
the index behaves as a key for the data value. In other words, a hash table stores
key-value pairs, with the index for each key generated by a hashing function.

Constructor:

class HashTable:
    def __init__(self, size = 7):
        self.data_map = [None] * size

    def __hash(self, key):
        my_hash = 0
        for letter in key:
            my_hash = (my_hash + ord(letter) * 23) % len(self.data_map)
        return my_hash

Print the Hash Table:

def print_table(self):
    for i, val in enumerate(self.data_map):
        print(i, ": ", val)

Set a Key-Value Pair in the Hash Table:

def set_item(self, key, value):
    index = self.__hash(key)
    if self.data_map[index] is None:
        self.data_map[index] = []
    self.data_map[index].append([key, value])

Get a Key’s Value from the Hash Table:

def get_item(self, key):
    index = self.__hash(key)
    if self.data_map[index] is not None:
        for i in range(len(self.data_map[index])):
            if self.data_map[index][i][0] == key:
                return self.data_map[index][i][1]
    return None

Take All Keys Out of the Hash Table and Return Them as a List:

def keys(self):
    all_keys = []
    for i in range(len(self.data_map)):
        if self.data_map[i] is not None:
            for j in range(len(self.data_map[i])):
                all_keys.append(self.data_map[i][j][0])
    return all_keys
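Assuming the methods above are gathered into the HashTable class, here is the class assembled in one place with a short usage sketch (the sample keys and values are invented):

```python
class HashTable:
    def __init__(self, size=7):
        self.data_map = [None] * size

    def __hash(self, key):
        # deterministic hash: spread keys across the bucket array
        my_hash = 0
        for letter in key:
            my_hash = (my_hash + ord(letter) * 23) % len(self.data_map)
        return my_hash

    def set_item(self, key, value):
        index = self.__hash(key)
        if self.data_map[index] is None:
            self.data_map[index] = []   # create the bucket on first use
        self.data_map[index].append([key, value])

    def get_item(self, key):
        index = self.__hash(key)
        if self.data_map[index] is not None:
            for pair in self.data_map[index]:   # scan the bucket (collisions)
                if pair[0] == key:
                    return pair[1]
        return None

    def keys(self):
        all_keys = []
        for bucket in self.data_map:
            if bucket is not None:
                for pair in bucket:
                    all_keys.append(pair[0])
        return all_keys

ht = HashTable()
ht.set_item('bolts', 1400)
ht.set_item('washers', 50)
print(ht.get_item('bolts'))    # 1400
print(ht.get_item('lumber'))   # None
print(sorted(ht.keys()))       # ['bolts', 'washers']
```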

7 DATA STRUCTURES: GRAPHS
Graphs are non-linear data structures made up of two major components:

Vertices – Vertices are entities in a graph. Every vertex has a value associated with
it. For example, if we represent a list of cities using a graph, the vertices would
represent the cities.

Edges – Edges represent the relationship between the vertices in the graph. Edges
may or may not have a value associated with them. For example, if we represent a list
of cities using a graph, the edges would represent the path between the cities.

Figure 4: The edges and vertices of a graph.

Constructor:

class Graph:
    def __init__(self):
        self.adj_list = {}

Print the Graph:

def print_graph(self):
    for vertex in self.adj_list:
        print(vertex, ':', self.adj_list[vertex])

Add a Vertex to the Graph:

def add_vertex(self, vertex):
    if vertex not in self.adj_list.keys():
        self.adj_list[vertex] = []
        return True
    return False

Add an Edge to the Graph:

def add_edge(self, v1, v2):
    if v1 in self.adj_list.keys() and v2 in self.adj_list.keys():
        self.adj_list[v1].append(v2)
        self.adj_list[v2].append(v1)
        return True
    return False

Remove a Vertex from the Graph:

def remove_vertex(self, vertex):
    if vertex in self.adj_list.keys():
        for other_vertex in self.adj_list[vertex]:
            self.adj_list[other_vertex].remove(vertex)
        del self.adj_list[vertex]
        return True
    return False

Remove an Edge from the Graph:

def remove_edge(self, v1, v2):
    if v1 in self.adj_list.keys() and v2 in self.adj_list.keys():
        try:
            self.adj_list[v1].remove(v2)
            self.adj_list[v2].remove(v1)
        except ValueError:
            pass
        return True
    return False
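Assuming the methods above are gathered into the Graph class, a compact version with a usage sketch (the vertex names are invented):

```python
class Graph:
    def __init__(self):
        self.adj_list = {}   # vertex -> list of adjacent vertices

    def add_vertex(self, vertex):
        if vertex not in self.adj_list:
            self.adj_list[vertex] = []
            return True
        return False

    def add_edge(self, v1, v2):
        # undirected edge: record the connection on both sides
        if v1 in self.adj_list and v2 in self.adj_list:
            self.adj_list[v1].append(v2)
            self.adj_list[v2].append(v1)
            return True
        return False

    def remove_vertex(self, vertex):
        # remove the vertex from every neighbour's list, then drop it
        if vertex in self.adj_list:
            for other in self.adj_list[vertex]:
                self.adj_list[other].remove(vertex)
            del self.adj_list[vertex]
            return True
        return False

g = Graph()
for v in ('A', 'B', 'C'):
    g.add_vertex(v)
g.add_edge('A', 'B')
g.add_edge('A', 'C')
print(g.adj_list['A'])   # ['B', 'C']
g.remove_vertex('A')     # also removes A from B's and C's lists
print(g.adj_list)        # {'B': [], 'C': []}
```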

8 ALGORITHMS: RECURSIONS
The term Recursion can be defined as the process of defining something in terms
of itself. In simple words, it is a process in which a function calls itself ... until it doesn’t.

Advantages of using recursion

1. A complicated function can be split into smaller sub-problems using recursion.
2. Sequence creation is simpler through recursion than through nested iteration.
3. Recursive functions make the code look simple and effective.

Disadvantages of using recursion

1. Recursive calls consume a lot of memory and time, which makes recursion
expensive to use.
2. Recursive functions are challenging to debug.
3. The reasoning behind recursion can sometimes be tough to think through.

Example of a Recursive Function:

def factorial(x):
    # This is a recursive function to find the factorial of an integer
    if x == 1:
        return 1
    else:
        return (x * factorial(x-1))

num = 3
print("The factorial of", num, "is", factorial(num))

OUTPUT: The factorial of 3 is 6

When we call this function with a positive integer, it recursively calls itself with
the number decreased by one. Each call multiplies its number by the factorial of the
number below it, until the number is equal to one.

This recursive call can be explained in the following steps.

factorial(3)           # 1st call with 3
3 * factorial(2)       # 2nd call with 2
3 * 2 * factorial(1)   # 3rd call with 1
3 * 2 * 1              # return from 3rd call as number=1
3 * 2                  # return from 2nd call
6                      # return from 1st call

Figure 5: A step-by-step view of the calls above. The recursion ends when the
number reduces to 1; this is called the base condition. Every recursive function
must have a base condition that stops the recursion, or else the function calls
itself infinitely.
9 ALGORITHMS: SORTING
A Sorting Algorithm is used to rearrange the elements of a given array or list
according to a comparison operator on the elements. The comparison operator is used
to decide the new order of the elements in the respective data structure.

Since sorting can often reduce the complexity of a problem, it is an important al-
gorithm in Computer Science. These algorithms have direct applications in searching
algorithms, database algorithms, divide and conquer methods, data structure algo-
rithms, and many more.

BUBBLE SORT:

Just like the way bubbles rise from the bottom of a glass, bubble sort is a simple
algorithm that sorts a list, allowing either lower or higher values to bubble up to the
top. The algorithm traverses a list and compares adjacent values, swapping them if
they are not in the correct order.

With a worst-case complexity of O(n²), bubble sort is very slow compared to other
sorting algorithms like quicksort. The upside is that it is one of the easiest
sorting algorithms to understand and code from scratch.

From a technical perspective, bubble sort is reasonable for sorting small arrays,
or especially when executing sorting algorithms on computers with remarkably
limited memory resources.

def bubble_sort(my_list):
    for i in range(len(my_list) - 1, 0, -1):
        for j in range(i):
            if my_list[j] > my_list[j+1]:
                temp = my_list[j]
                my_list[j] = my_list[j+1]
                my_list[j+1] = temp
    return my_list

print(bubble_sort([4,2,6,5,1,3]))

SELECTION SORT:

Selection Sort is one of the simplest sorting algorithms. This algorithm gets its
name from the way it iterates through the array: it selects the current smallest
element and swaps it into place.
1. Find the smallest element in the array and swap it with the first element.
2. Find the second smallest element and swap it with the second element in the
array.
3. Find the third smallest element and swap it with the third element in the array.
4. Repeat the process of finding the next smallest element and swapping it into the
correct position until the entire array is sorted.
def selection_sort(my_list):
    for i in range(len(my_list)-1):
        min_index = i
        for j in range(i+1, len(my_list)):
            if my_list[j] < my_list[min_index]:
                min_index = j
        if i != min_index:
            temp = my_list[i]
            my_list[i] = my_list[min_index]
            my_list[min_index] = temp
    return my_list

print(selection_sort([4,2,6,5,1,3]))

INSERTION SORT:

Insertion sort is a sorting algorithm that places an unsorted element at its suitable
place in each iteration. Insertion sort works much like sorting cards in our hand
in a card game: we assume that the first card is already sorted, then we select an
unsorted card. If the unsorted card is greater than the card in hand, it is placed
to the right; otherwise, to the left. In the same way, the remaining unsorted cards
are taken and put in their right place.

def insertion_sort(my_list):
    for i in range(1, len(my_list)):
        temp = my_list[i]
        j = i - 1
        # check j > -1 first so my_list[j] is never read out of range
        while j > -1 and temp < my_list[j]:
            my_list[j+1] = my_list[j]
            my_list[j] = temp
            j -= 1
    return my_list

print(insertion_sort([4,2,6,5,1,3]))

MERGE SORT:

Merge Sort is a Divide and Conquer algorithm. It divides the input array into two
halves, calls itself for the two halves, and then merges the two sorted halves. The
major portion of the algorithm is: given two sorted arrays, merge them into a
single sorted array.
1. Divide the array into two halves.
2. Sort the left half and the right half using the same recursive algorithm.
3. Merge the sorted halves.

24
def merge(array1, array2):
    combined = []
    i = 0
    j = 0
    while i < len(array1) and j < len(array2):
        if array1[i] < array2[j]:
            combined.append(array1[i])
            i += 1
        else:
            combined.append(array2[j])
            j += 1

    while i < len(array1):
        combined.append(array1[i])
        i += 1

    while j < len(array2):
        combined.append(array2[j])
        j += 1

    return combined

def merge_sort(my_list):
    if len(my_list) <= 1:   # lists of length 0 or 1 are already sorted
        return my_list
    mid = int(len(my_list)/2)
    left = my_list[:mid]
    right = my_list[mid:]
    return merge(merge_sort(left), merge_sort(right))

print(merge_sort([3,1,4,2]))

QUICK SORT:

Quick sort is an efficient divide-and-conquer sorting algorithm. Its average-case
time complexity is O(n log(n)), with a worst case of O(n²), depending on the
selection of the pivot element, which divides the current array into two sub-arrays.

For instance, the time complexity of Quick Sort is approximately O(n log(n)) when
the chosen pivot divides the original array into two nearly equal-sized sub-arrays.

On the other hand, if the pivot selection consistently produces two sub-arrays
with a large difference in size, quick sort degrades to its worst-case time
complexity of O(n²).
1. Choose an element to serve as a pivot; in this implementation, the first element
of the (sub)array is the pivot.
2. Partitioning: rearrange the array so that all elements less than the pivot are
to its left, and all elements greater than the pivot are to its right.
3. Call Quicksort recursively on the sub-arrays on either side of the pivot’s
final position.

def swap(my_list, index1, index2):
    temp = my_list[index1]
    my_list[index1] = my_list[index2]
    my_list[index2] = temp

def pivot(my_list, pivot_index, end_index):
    swap_index = pivot_index
    for i in range(pivot_index+1, end_index+1):
        if my_list[i] < my_list[pivot_index]:
            swap_index += 1
            swap(my_list, swap_index, i)
    swap(my_list, pivot_index, swap_index)
    return swap_index

def quick_sort_helper(my_list, left, right):
    if left < right:
        pivot_index = pivot(my_list, left, right)
        quick_sort_helper(my_list, left, pivot_index-1)
        quick_sort_helper(my_list, pivot_index+1, right)
    return my_list

def quick_sort(my_list):
    return quick_sort_helper(my_list, 0, len(my_list)-1)

print(quick_sort([4,6,1,7,3,2,5]))

TREE TRAVERSAL:

Traversal is the process of visiting all the nodes of a tree (and possibly printing
their values). Because all nodes are connected via edges (links), we always start
from the root (head) node; that is, we cannot randomly access a node in a tree.

There are three ways which we use to traverse a tree

1. In-order Traversal
2. Pre-order Traversal
3. Post-order Traversal
Create a Node:

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

Class Binary Search Tree:

class BinarySearchTree:
    def __init__(self):
        self.root = None

Inserts a Node into the Binary Tree:

def insert(self, value):
    new_node = Node(value)
    if self.root is None:
        self.root = new_node
        return True
    temp = self.root
    while True:
        if new_node.value == temp.value:
            return False
        if new_node.value < temp.value:
            if temp.left is None:
                temp.left = new_node
                return True
            temp = temp.left
        else:
            if temp.right is None:
                temp.right = new_node
                return True
            temp = temp.right

Checks if a Node is in the Binary Tree:

def contains(self, value):
    if self.root is None:
        return False
    temp = self.root
    while temp:
        if value < temp.value:
            temp = temp.left
        elif value > temp.value:
            temp = temp.right
        else:
            return True
    return False

BREADTH FIRST SEARCH:

BFS is a traversal algorithm where you start from a selected node (the source or
starting node) and traverse the graph layer by layer, first exploring the neighbour
nodes (nodes directly connected to the source node). You then move on to the
next-level neighbour nodes.

As the name BFS suggests, you are required to traverse the graph breadthwise as
follows:

1. First move horizontally and visit all the nodes of the current layer
2. Move to the next layer

def BFS(self):
    current_node = self.root
    queue = []
    results = []
    queue.append(current_node)
    while len(queue) > 0:
        current_node = queue.pop(0)
        results.append(current_node.value)
        if current_node.left is not None:
            queue.append(current_node.left)
        if current_node.right is not None:
            queue.append(current_node.right)
    return results

PREORDER TRAVERSAL:

In preorder traversal, we process the root node, then the left subtree of the root
node, and at last process the right subtree of the root node.

def dfs_pre_order(self):
    results = []

    def traverse(current_node):
        results.append(current_node.value)
        if current_node.left is not None:
            traverse(current_node.left)
        if current_node.right is not None:
            traverse(current_node.right)

    traverse(self.root)
    return results

POST-ORDER TRAVERSAL:

In post-order traversal, we first process the left subtree, then the right subtree, and at
last, the root node.

def dfs_post_order(self):
    results = []

    def traverse(current_node):
        if current_node.left is not None:
            traverse(current_node.left)
        if current_node.right is not None:
            traverse(current_node.right)
        results.append(current_node.value)

    traverse(self.root)
    return results

IN-ORDER TRAVERSAL:

In inorder traversal, we first process the left subtree of the root node, then process the
root node, and at last process the right subtree of the root node.

def dfs_in_order(self):
    results = []

    def traverse(current_node):
        if current_node.left is not None:
            traverse(current_node.left)
        results.append(current_node.value)
        if current_node.right is not None:
            traverse(current_node.right)

    traverse(self.root)
    return results
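Assuming the insert and dfs_in_order methods are gathered into the BinarySearchTree class, the following sketch shows a property worth remembering for interviews: in-order traversal of a BST visits the values in sorted order (the sample values are invented):

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

class BinarySearchTree:
    def __init__(self):
        self.root = None

    def insert(self, value):
        new_node = Node(value)
        if self.root is None:
            self.root = new_node
            return True
        temp = self.root
        while True:
            if new_node.value == temp.value:
                return False   # duplicates are rejected
            if new_node.value < temp.value:
                if temp.left is None:
                    temp.left = new_node
                    return True
                temp = temp.left
            else:
                if temp.right is None:
                    temp.right = new_node
                    return True
                temp = temp.right

    def dfs_in_order(self):
        results = []

        def traverse(current_node):
            # left subtree, then node, then right subtree
            if current_node.left is not None:
                traverse(current_node.left)
            results.append(current_node.value)
            if current_node.right is not None:
                traverse(current_node.right)

        traverse(self.root)
        return results

bst = BinarySearchTree()
for v in (47, 21, 76, 18, 27, 52, 82):
    bst.insert(v)
print(bst.dfs_in_order())   # [18, 21, 27, 47, 52, 76, 82]
```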

Figure 6: A step-by-step view of the different tree traversals.
