
Section 5:

Algorithms

1. Searching Algorithms

Searching algorithms are used to find an element in a data structure, such as an array or a list. The
most common searching algorithms are linear search and binary search.

• Linear Search:

o Description: In linear search, you check each element of the array or list sequentially
until you find the target element.

o Time Complexity: O(n), where n is the number of elements in the list.

o Example: Searching for a number in an unsorted array.
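A minimal Python sketch of linear search (the function name is illustrative):

def linear_search(arr, target):
    # Scan each element in order until the target is found.
    for i, value in enumerate(arr):
        if value == target:
            return i  # index of the first match
    return -1  # target not present

# Example: linear_search([4, 2, 7, 1], 7) returns 2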

• Binary Search:

o Description: Binary search is an efficient algorithm for finding an element in a sorted
array by repeatedly halving the search interval. If the target value is less than the
middle element, the search continues in the lower half; otherwise, it continues in the
upper half.

o Time Complexity: O(log n), where n is the number of elements in the array.

o Example: Searching for a number in a sorted array (a classic divide-and-conquer
strategy), as in the sketch below.
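A minimal Python sketch of binary search (assumes the input is sorted in ascending order):

def binary_search(arr, target):
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1  # target can only be in the upper half
        else:
            hi = mid - 1  # target can only be in the lower half
    return -1  # not found

# Example: binary_search([1, 3, 5, 7, 9], 7) returns 3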

2. Sorting Algorithms

Sorting algorithms arrange elements in a specific order, usually ascending or descending. Common
sorting algorithms include bubble sort, selection sort, insertion sort, merge sort, quick sort, and
heap sort; three of these are detailed below.

• Bubble Sort:

o Description: Bubble sort works by repeatedly stepping through the list, comparing
adjacent elements, and swapping them if they are in the wrong order.

o Time Complexity: O(n^2), where n is the number of elements in the array.

o Space Complexity: O(1) (in-place sorting).

o Example: Sorting a small array like [3, 1, 4, 2].
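A minimal Python sketch of bubble sort, with the common early-exit optimization:

def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # After pass i, the last i elements are already in their final place.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # no swaps means the array is already sorted
            break
    return arr

# Example: bubble_sort([3, 1, 4, 2]) returns [1, 2, 3, 4]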

• Quick Sort:

o Description: Quick sort selects a 'pivot' element and partitions the array into two
subarrays such that elements less than the pivot come before it, and elements
greater than the pivot come after it. It then recursively sorts the subarrays.

o Time Complexity: O(n log n) on average, O(n^2) in the worst case (e.g., when the
pivot is consistently the smallest or largest element, as with a poor pivot choice on
already-sorted input).

o Space Complexity: O(log n) on average for the recursion stack (the partitioning is
done in place).

o Example: Sorting the array [10, 7, 8, 9, 1, 5].
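A compact Python sketch of quick sort using the last element as pivot. Note this simple version builds new lists and so uses O(n) extra space; an in-place partition (e.g., Lomuto or Hoare) achieves the O(log n) bound above:

def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[-1]
    less = [x for x in arr[:-1] if x <= pivot]     # elements before the pivot
    greater = [x for x in arr[:-1] if x > pivot]   # elements after the pivot
    return quick_sort(less) + [pivot] + quick_sort(greater)

# Example: quick_sort([10, 7, 8, 9, 1, 5]) returns [1, 5, 7, 8, 9, 10]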

• Merge Sort:

o Description: Merge sort divides the array into halves, recursively sorts each half, and
then merges the sorted halves back together.

o Time Complexity: O(n log n), where n is the number of elements.

o Space Complexity: O(n) (not in-place sorting).

o Example: Sorting a large array like [12, 11, 13, 5, 6, 7].
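A minimal Python sketch of merge sort (illustrative):

def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # recursively sort each half
    right = merge_sort(arr[mid:])
    # Merge the two sorted halves.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])   # append whatever remains
    merged.extend(right[j:])
    return merged

# Example: merge_sort([12, 11, 13, 5, 6, 7]) returns [5, 6, 7, 11, 12, 13]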

3. Hashing

Hashing is a technique used to store and retrieve data efficiently by mapping keys to hash codes,
which are then used to find the corresponding values.

• Hash Function: A function that takes an input (or 'key') and returns a fixed-size value,
typically an integer, which is used to index an array or data structure called a hash table.

o Example: A simple hash function for integers could be hash(x) = x mod table_size.

• Collision Handling: When two keys produce the same hash code, a collision occurs. There are
several methods to handle collisions:

o Chaining: Each hash table index points to a linked list (chain) of all elements that
hash to that index (see the sketch after this list).

o Open Addressing: When a collision occurs, the algorithm probes for another empty
location in the table (e.g., linear probing, quadratic probing).

• Time Complexity:

o Average case: O(1) for search, insertion, and deletion.

o Worst case: O(n) if there are many collisions.
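A minimal Python sketch of a hash table with chaining. The class and method names are my own, and Python's built-in hash stands in for a hash function like the modular one above:

class ChainedHashTable:
    def __init__(self, size=8):
        self.size = size
        self.buckets = [[] for _ in range(size)]  # one chain per slot

    def _index(self, key):
        return hash(key) % self.size  # simple modular hashing

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # key already present: update it
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # collision: append to the chain

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None  # key not present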

4. Asymptotic Time and Space Complexity

Asymptotic analysis helps us understand the efficiency of algorithms by measuring their time and
space requirements in the worst, best, and average cases. The most common asymptotic notations
are:

• Big O (O): Represents the upper bound of an algorithm’s time or space complexity, i.e., the
worst-case scenario.

o Example: O(n^2) means the algorithm's time complexity increases quadratically with
the input size.

• Big Omega (Ω): Represents the lower bound, i.e., the best-case scenario.

o Example: Ω(n) means the algorithm takes at least linear time in the best case.

• Big Theta (Θ): Represents the tight bound, i.e., both the upper and lower bounds.

o Example: Θ(n log n) means the algorithm’s time complexity grows asymptotically as n
log n.
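For reference, the standard formal definitions behind these notations are:

f(n) = O(g(n))  iff there exist constants c > 0 and n0 such that f(n) ≤ c·g(n) for all n ≥ n0
f(n) = Ω(g(n))  iff there exist constants c > 0 and n0 such that f(n) ≥ c·g(n) for all n ≥ n0
f(n) = Θ(g(n))  iff f(n) = O(g(n)) and f(n) = Ω(g(n))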

5. Algorithm Design Techniques

There are several common strategies used to design efficient algorithms:

• Greedy Algorithms:

o Description: Greedy algorithms build a solution as a sequence of choices, picking
the locally optimal option at each step in the hope that these choices lead to a
globally optimal solution. This works for some problems but not all.

o Example: The Activity Selection Problem is solved greedily by always picking the
activity that finishes earliest among those compatible with the ones already chosen,
which yields the maximum number of non-overlapping activities (sketched below).
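A minimal Python sketch of greedy activity selection, assuming activities are given as (start, finish) pairs:

def select_activities(activities):
    # Sort by finish time, then greedily take each activity whose start
    # is no earlier than the finish of the last selected activity.
    selected = []
    last_finish = float('-inf')
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:
            selected.append((start, finish))
            last_finish = finish
    return selected

# Example: select_activities([(1, 4), (3, 5), (0, 6), (5, 7)]) returns [(1, 4), (5, 7)]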

• Dynamic Programming (DP):

o Description: DP solves a problem by breaking it into smaller overlapping
subproblems and storing the results of those subproblems to avoid redundant
calculations. It is well suited to optimization problems.

o Example: The Fibonacci sequence, where we store previously computed values to
avoid recomputing them.
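A minimal Python sketch of top-down DP (memoization) for Fibonacci, using the standard library's lru_cache:

from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct n is computed once; later calls hit the cache.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# fib(50) returns 12586269025 in O(n) subproblems instead of O(2^n) calls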

• Divide-and-Conquer:

o Description: Divide-and-conquer algorithms solve a problem by recursively breaking
it down into smaller subproblems, solving those subproblems independently, and
then combining their results.

o Example: Merge Sort and Quick Sort are divide-and-conquer algorithms. They
recursively divide the input and combine the results in the sorting process.

6. Graph Algorithms

Graph algorithms are used to solve problems related to graph structures. A graph consists of vertices
(nodes) and edges (connections between nodes).

• Graph Traversals:

o Depth-First Search (DFS): DFS explores as far as possible along a branch before
backtracking. It is implemented using a stack (or recursion).

▪ Example: DFS is used in solving puzzles like mazes and in topological sorting
(see the traversal sketch after this list).

▪ Time Complexity: O(V + E), where V is the number of vertices and E is the
number of edges.

o Breadth-First Search (BFS): BFS explores all vertices at the present depth level before
moving on to the next level. It is implemented using a queue.

▪ Example: BFS is used in finding the shortest path in an unweighted graph.

▪ Time Complexity: O(V + E).
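Minimal Python sketches of both traversals, assuming the graph is an adjacency list (a dict mapping each node to a list of neighbours):

from collections import deque

def dfs(graph, start, visited=None):
    # Recursive DFS: go as deep as possible before backtracking.
    if visited is None:
        visited = set()
    visited.add(start)
    for neighbour in graph[start]:
        if neighbour not in visited:
            dfs(graph, neighbour, visited)
    return visited

def bfs(graph, start):
    # BFS: visit every node at the current depth before going deeper.
    visited = {start}
    order = []
    queue = deque([start])
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                queue.append(neighbour)
    return order  # nodes in breadth-first order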

• Minimum Spanning Trees (MST):

o Description: An MST of a weighted graph is a subgraph that connects all the vertices
with the minimum total edge weight, without any cycles.

o Algorithms:

▪ Prim's Algorithm: Starts from an arbitrary vertex and grows the MST one
edge at a time by selecting the minimum weight edge that connects a vertex
in the MST to a vertex outside it.

▪ Kruskal’s Algorithm: Sorts all edges by weight and adds them to the MST one
at a time, skipping any edge that would form a cycle (sketched below).

o Time Complexity: O(E log V) for both Prim’s and Kruskal’s.
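A minimal Python sketch of Kruskal's algorithm using a union-find (disjoint-set) structure; the edge representation (weight, u, v) and vertex numbering are my own choices:

def kruskal(num_vertices, edges):
    # edges: list of (weight, u, v); vertices numbered 0..num_vertices-1
    parent = list(range(num_vertices))

    def find(x):
        # Union-find lookup with path halving.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = []
    for weight, u, v in sorted(edges):   # consider edges by ascending weight
        ru, rv = find(u), find(v)
        if ru != rv:                     # this edge creates no cycle
            parent[ru] = rv
            mst.append((u, v, weight))
    return mst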

• Shortest Path Algorithms:

o Dijkstra's Algorithm: Finds the shortest path from a source vertex to all other
vertices in a graph with non-negative edge weights.

▪ Time Complexity: O(V^2) with simple arrays; O((V + E) log V) with a binary-heap
priority queue (O(E + V log V) with a Fibonacci heap).

▪ Example: Used in GPS navigation systems.
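A minimal Python sketch of Dijkstra's algorithm with a binary heap (heapq); the adjacency-list format, a dict of node -> list of (neighbour, weight) pairs with non-negative weights, is an assumption:

import heapq

def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale entry; a shorter path to u was already found
        for v, w in graph[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist  # shortest distance from source to each reachable node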

o Bellman-Ford Algorithm: Handles graphs with negative edge weights and can detect
negative weight cycles.

▪ Time Complexity: O(VE).
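A minimal Python sketch of Bellman-Ford over an edge list (illustrative; vertices numbered 0..num_vertices-1):

def bellman_ford(num_vertices, edges, source):
    # edges: list of (u, v, weight); negative weights are allowed
    dist = [float('inf')] * num_vertices
    dist[source] = 0
    for _ in range(num_vertices - 1):    # relax every edge V-1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("negative weight cycle detected")
    return dist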

o Floyd-Warshall Algorithm: A dynamic programming algorithm that finds the
shortest paths between all pairs of vertices in a graph.

▪ Time Complexity: O(V^3).
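A minimal Python sketch of Floyd-Warshall operating on a V x V distance matrix (illustrative; mutates its input, with float('inf') for missing edges and 0 on the diagonal):

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):              # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist  # dist[i][j] is now the shortest path length from i to j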

Summary

This section covers the key algorithms used in computer science, focusing on searching, sorting, and
hashing, as well as algorithm design techniques such as greedy algorithms, dynamic programming,
and divide-and-conquer. Additionally, graph algorithms like graph traversal (DFS, BFS), minimum
spanning trees, and shortest path algorithms (Dijkstra, Bellman-Ford, Floyd-Warshall) are
fundamental for solving graph-related problems. Understanding the time and space complexity of
these algorithms helps in optimizing solutions for efficiency.
