Section 5
Algorithms
1. Searching Algorithms
Searching algorithms are used to find an element in a data structure, such as an array or a list. The
most common searching algorithms are linear search and binary search.
• Linear Search:
o Description: In linear search, you check each element of the array or list sequentially
until you find the target element.
o Time Complexity: O(n), where n is the number of elements, since every element may
need to be examined.
• Binary Search:
o Description: Binary search requires a sorted array. It repeatedly compares the target
with the middle element and discards the half that cannot contain the target. Both
searches are sketched after this list.
o Time Complexity: O(log n), where n is the number of elements in the array.
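A minimal sketch of both searches in Python (the function names and sample lists are illustrative,
not from the original notes):

    def linear_search(items, target):
        # Check each element in order until the target is found.
        for i, value in enumerate(items):
            if value == target:
                return i
        return -1  # target not present

    def binary_search(sorted_items, target):
        # Requires sorted input; halve the search range on each step.
        lo, hi = 0, len(sorted_items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if sorted_items[mid] == target:
                return mid
            elif sorted_items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1  # target not present

    print(linear_search([4, 2, 7, 1], 7))  # 2
    print(binary_search([1, 2, 4, 7], 7))  # 3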
2. Sorting Algorithms
Sorting algorithms arrange elements in a specific order, usually in ascending or descending order. The
main sorting algorithms are bubble sort, selection sort, insertion sort, merge sort, quick sort, and
heap sort.
• Bubble Sort:
o Description: Bubble sort works by repeatedly stepping through the list, comparing
adjacent elements, and swapping them if they are in the wrong order.
o Time Complexity: O(n^2) in the average and worst cases.
• Quick Sort:
o Description: Quick sort selects a 'pivot' element and partitions the array into two
subarrays such that elements less than the pivot come before it, and elements
greater than the pivot come after it. It then recursively sorts the subarrays.
o Time Complexity: O(n log n) on average, O(n^2) in the worst case (if the pivot
selection is poor).
o Space Complexity: O(log n) auxiliary space for the recursion stack (the sort itself is
in-place).
• Merge Sort:
o Description: Merge sort divides the array into halves, recursively sorts each half, and
then merges the sorted halves back together.
o Time Complexity: O(n log n) in all cases.
o Space Complexity: O(n) for the temporary arrays used during merging. Both merge
sort and quick sort are sketched below.
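A minimal Python sketch of merge sort and quick sort as described above (function names are
illustrative; quick sort uses the Lomuto partition scheme with the last element as pivot, one of
several common pivot choices):

    def merge_sort(items):
        # Divide the list into halves, sort each half, then merge.
        if len(items) <= 1:
            return items
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i]); i += 1
            else:
                merged.append(right[j]); j += 1
        return merged + left[i:] + right[j:]

    def quick_sort(items, lo=0, hi=None):
        # Sort in place: partition around a pivot, then recurse on
        # the subarray before it and the subarray after it.
        if hi is None:
            hi = len(items) - 1
        if lo >= hi:
            return items
        pivot, store = items[hi], lo
        for i in range(lo, hi):
            if items[i] < pivot:
                items[i], items[store] = items[store], items[i]
                store += 1
        items[store], items[hi] = items[hi], items[store]
        quick_sort(items, lo, store - 1)
        quick_sort(items, store + 1, hi)
        return items

    print(merge_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]
    print(quick_sort([5, 2, 9, 1]))  # [1, 2, 5, 9]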
3. Hashing
Hashing is a technique used to store and retrieve data efficiently by mapping keys to hash codes,
which are then used to find the corresponding values.
• Hash Function: A function that takes an input (or 'key') and returns a fixed-size value,
typically an integer, which is used to index an array-backed structure called a hash table.
o Example: A simple hash function for integer keys is hash(x) = x mod table_size.
• Collision Handling: When two keys produce the same hash code, a collision occurs. There are
several methods to handle collisions:
o Chaining: Each hash table index points to a linked list of elements that hash to the
same index.
o Open Addressing: When a collision occurs, the algorithm probes for another empty
location in the table (e.g., linear probing, quadratic probing).
• Time Complexity: O(1) on average for insert, search, and delete; O(n) in the worst case,
when many keys collide at the same index. A chaining hash table is sketched below.
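A minimal sketch of a hash table with chaining, using the hash(x) = x mod table_size idea from the
example above (the class and method names are illustrative):

    class ChainedHashTable:
        def __init__(self, size=8):
            # Each slot holds a list (chain) of (key, value) pairs.
            self.size = size
            self.slots = [[] for _ in range(size)]

        def _index(self, key):
            # Simple modular hash function for integer keys.
            return key % self.size

        def put(self, key, value):
            chain = self.slots[self._index(key)]
            for i, (k, _) in enumerate(chain):
                if k == key:  # key already present: update it
                    chain[i] = (key, value)
                    return
            chain.append((key, value))  # collision handled by chaining

        def get(self, key):
            for k, v in self.slots[self._index(key)]:
                if k == key:
                    return v
            raise KeyError(key)

    table = ChainedHashTable()
    table.put(3, "a")
    table.put(11, "b")    # 11 mod 8 == 3: collides with key 3, chained
    print(table.get(11))  # "b"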
4. Asymptotic Analysis
Asymptotic analysis helps us understand the efficiency of algorithms by measuring their time and
space requirements in the worst, best, and average cases. The most common asymptotic notations
are:
• Big O (O): Represents the upper bound of an algorithm’s time or space complexity, i.e., the
worst-case scenario.
o Example: O(n^2) means the algorithm's time complexity increases quadratically with
the input size.
• Big Omega (Ω): Represents the lower bound, i.e., the best-case scenario.
o Example: Ω(n) means the algorithm takes at least linear time in the best case.
• Big Theta (Θ): Represents the tight bound, i.e., both the upper and lower bounds.
o Example: Θ(n log n) means the algorithm’s time complexity grows asymptotically as n
log n.
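For reference, the standard formal definition behind these notations (not stated explicitly above):
f(n) = O(g(n)) means there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0;
f(n) = Ω(g(n)) reverses the inequality to f(n) ≥ c·g(n); and f(n) = Θ(g(n)) requires both bounds
to hold.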
5. Algorithm Design Techniques
Design techniques are general strategies for structuring algorithms rather than algorithms for a
single problem.
• Greedy Algorithms:
o Description: Greedy algorithms make a sequence of choices, each of which looks the
best at the moment. They aim to find an optimal solution by choosing the locally
optimal solution at each step.
o Example: The Activity Selection Problem is solved with a greedy approach: repeatedly
select the activity that finishes earliest among those that do not overlap the activities
already chosen, which yields the maximum number of activities (sketched after this
list).
• Divide-and-Conquer:
o Description: Divide-and-conquer algorithms split a problem into smaller
subproblems, solve each recursively, and combine the partial results.
o Example: Merge Sort and Quick Sort are divide-and-conquer algorithms. They
recursively divide the input and combine the results in the sorting process.
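A minimal sketch of the greedy Activity Selection approach described above (activities are given as
(start, finish) pairs; the sample data is illustrative):

    def select_activities(activities):
        # Greedy choice: always take the activity that finishes earliest
        # among those starting after the last selected one finishes.
        selected = []
        last_finish = float("-inf")
        for start, finish in sorted(activities, key=lambda a: a[1]):
            if start >= last_finish:
                selected.append((start, finish))
                last_finish = finish
        return selected

    print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (8, 9)]))
    # [(1, 4), (5, 7), (8, 9)]  -- a maximum-size set of compatible activities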
6. Graph Algorithms
Graph algorithms are used to solve problems related to graph structures. A graph consists of vertices
(nodes) and edges (connections between nodes).
• Graph Traversals:
o Depth-First Search (DFS): DFS explores as far as possible along a branch before
backtracking. It is implemented using a stack (or recursion).
▪ Example: DFS is used in solving puzzles like mazes, and in topological sorting.
▪ Time Complexity: O(V + E), where V is the number of vertices, and E is the
number of edges.
o Breadth-First Search (BFS): BFS explores all vertices at the present depth level before
moving on to the next level. It is implemented using a queue.
▪ Example: BFS finds shortest paths by edge count in unweighted graphs.
▪ Time Complexity: O(V + E), the same as DFS. Both traversals are sketched
below.
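A minimal sketch of both traversals on an adjacency-list graph (the dict-of-lists representation
and the sample graph are illustrative):

    from collections import deque

    def dfs(graph, start):
        # Iterative DFS using an explicit stack.
        visited, stack, order = set(), [start], []
        while stack:
            node = stack.pop()
            if node not in visited:
                visited.add(node)
                order.append(node)
                stack.extend(reversed(graph[node]))  # keep left-to-right order
        return order

    def bfs(graph, start):
        # BFS using a queue: finish one depth level before the next.
        visited, queue, order = {start}, deque([start]), []
        while queue:
            node = queue.popleft()
            order.append(node)
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    queue.append(neighbor)
        return order

    graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
    print(dfs(graph, "A"))  # ['A', 'B', 'D', 'C']
    print(bfs(graph, "A"))  # ['A', 'B', 'C', 'D']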
• Minimum Spanning Trees (MST):
o Description: An MST of a weighted graph is a subgraph that connects all the vertices
with the minimum total edge weight, without any cycles.
o Algorithms:
▪ Prim's Algorithm: Starts from an arbitrary vertex and grows the MST one
edge at a time by selecting the minimum weight edge that connects a vertex
in the MST to a vertex outside it.
▪ Kruskal’s Algorithm: Sorts all edges by weight and adds edges to the MST,
using a union-find structure to ensure no cycles are formed (sketched
below).
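A minimal sketch of Kruskal’s Algorithm with a small union-find helper (the (weight, u, v)
edge-list format and the sample graph are illustrative):

    def kruskal(num_vertices, edges):
        # edges: list of (weight, u, v); vertices are 0..num_vertices-1.
        parent = list(range(num_vertices))

        def find(x):
            # Find a component's root, compressing the path as we go.
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        mst = []
        for weight, u, v in sorted(edges):  # edges in increasing weight order
            ru, rv = find(u), find(v)
            if ru != rv:                    # different components: no cycle
                parent[ru] = rv
                mst.append((weight, u, v))
        return mst

    edges = [(1, 0, 1), (4, 0, 2), (2, 1, 2), (7, 2, 3)]
    print(kruskal(4, edges))  # [(1, 0, 1), (2, 1, 2), (7, 2, 3)]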
• Shortest Path Algorithms:
o Dijkstra's Algorithm: Finds the shortest path from a source vertex to all other
vertices in a graph with non-negative edge weights (sketched after this list).
▪ Time Complexity: O(V^2) with simple arrays, O((V + E) log V) with a binary-
heap priority queue (O(E + V log V) with a Fibonacci heap).
o Bellman-Ford Algorithm: Handles graphs with negative edge weights and can detect
negative weight cycles.
▪ Time Complexity: O(V · E).
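A minimal sketch of Dijkstra's Algorithm using Python's heapq module as the priority queue (the
weighted adjacency-list format and sample graph are illustrative):

    import heapq

    def dijkstra(graph, source):
        # graph: {vertex: [(neighbor, weight), ...]}, non-negative weights.
        dist = {v: float("inf") for v in graph}
        dist[source] = 0
        heap = [(0, source)]  # (distance, vertex) priority queue
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue  # stale entry: a shorter path was already found
            for v, w in graph[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (dist[v], v))
        return dist

    graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
    print(dijkstra(graph, "A"))  # {'A': 0, 'B': 1, 'C': 3}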
Summary
This section covers key algorithms used in computer science, focusing on searching, sorting, and
hashing, as well as algorithm design techniques such as greedy algorithms and divide-and-conquer.
Additionally, graph algorithms like graph traversal (DFS, BFS), minimum spanning trees (Prim,
Kruskal), and shortest path algorithms (Dijkstra, Bellman-Ford) are fundamental for solving
graph-related problems. Understanding the time and space complexity of these algorithms helps in
optimizing solutions for efficiency.