8-Queens Backtracking Algorithm
Problem Statement:
Place 8 queens on an 8×8 chessboard so that no two queens attack each other, i.e.,
no two queens share the same row, column, or diagonal.
Algorithm Outline:
We will:
1. Place one queen per row.
2. For each row, try all columns.
3. If placing a queen in a column causes no conflict, move to the next row.
4. If all rows are filled, a solution is found.
5. If no column works for a row, backtrack to the previous row and try the next column.
Pseudocode
NQueens(row, N)
    if row > N then
        print the solution (positions of queens)
        return
    for column ← 1 to N do
        if isSafe(row, column) then
            board[row] ← column      // place queen at (row, column)
            NQueens(row + 1, N)      // move to next row
            board[row] ← 0           // backtrack (remove the queen)
Safety Checking Function
isSafe(row, column)
    for prevRow ← 1 to row - 1 do
        prevCol ← board[prevRow]
        // Check if in the same column
        if prevCol = column then
            return false
        // Check if in the same diagonal
        if |prevRow - row| = |prevCol - column| then
            return false
    return true   // no conflict, safe to place queen
Explanation of Variables:
Variable Meaning
N Total number of queens (for 8-Queens, N = 8)
row Current row where we are trying to place a queen
column The column being tested in the current row
board[row] Stores the column position of the queen placed in that row
Example (Partial Trace for N = 4):
Step Action Board State (row → column)
1 Place at (1,1) [1, _, _, _]
2 Try (2,1) → conflict (same column) —
3 Try (2,2) → conflict (diagonal) —
4 Try (2,3) → safe → place [1, 3, _, _]
5 Continue with row 3, backtrack if needed... ...
When N = 8:
This recursive process prints all 92 possible arrangements of 8 queens satisfying the rules.
Time Complexity:
• O(N!) in the worst case (N column choices in row 1, at most N − 1 usable in row 2, and so on)
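The pseudocode above can be sketched in Python. This is a minimal, illustrative implementation (0-indexed instead of 1-indexed; function and variable names are chosen here, not taken from the text):

```python
def n_queens(n):
    """Return all solutions; each solution maps row -> column (0-indexed)."""
    solutions = []
    board = [-1] * n  # board[row] = column of the queen placed in that row

    def is_safe(row, col):
        for prev in range(row):
            # Conflict if same column, or same diagonal (|row diff| == |col diff|)
            if board[prev] == col or abs(prev - row) == abs(board[prev] - col):
                return False
        return True

    def place(row):
        if row == n:
            solutions.append(board[:])  # all rows filled: record the solution
            return
        for col in range(n):
            if is_safe(row, col):
                board[row] = col
                place(row + 1)
                board[row] = -1  # backtrack (remove the queen)

    place(0)
    return solutions

print(len(n_queens(8)))  # 92
```

Running it for N = 8 confirms the 92 solutions mentioned below.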
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Sum of Subsets Problem
Given a set of positive integers and a target sum (targetSum),
find all possible subsets of the set whose elements add up exactly to the target sum.
Example:
Set = {10, 20, 30, 40}, targetSum = 60
Valid subsets: {20, 40}, {10, 20, 30}
Core Idea (Backtracking Approach)
We build subsets step by step, adding one element at a time:
• If adding the current element keeps the sum ≤ targetSum → include it and explore further.
• If sum exceeds targetSum → backtrack (remove last element and try next possibility).
• If sum equals targetSum → record the subset as a solution.
Pseudocode
SumOfSubsets(index, currentSum, targetSum, set, subset, N)
    if currentSum = targetSum then
        print subset              // Found a valid subset
        return
    if index > N or currentSum > targetSum then
        return                    // No further exploration possible
    // Include current element
    Add(subset, set[index])
    SumOfSubsets(index + 1, currentSum + set[index], targetSum, set, subset, N)
    // Exclude current element (Backtrack)
    Remove(subset, set[index])
    SumOfSubsets(index + 1, currentSum, targetSum, set, subset, N)
Explanation of Variables
Variable Description
set[] Array of elements
N Number of elements in the set
targetSum Desired total sum
index Current position in the array (starts from 1 or 0)
currentSum Sum of elements included so far
subset[] Temporary array storing current chosen elements
Example Dry Run
Input:
set = {10, 20, 30, 40}
targetSum = 60
Step 1: Start with index = 1, currentSum = 0
Step 2: Include 10 → currentSum = 10
Step 3: Include 20 → currentSum = 30
Step 4: Include 30 → currentSum = 60 → print {10, 20, 30}
Step 5: Backtrack → remove 30 → try next element (40)
→ 10 + 20 + 40 = 70 > 60 → backtrack again
Step 6: After further backtracking (10 excluded), include 20 and 40 → currentSum = 60 → print {20, 40}
Time Complexity
• O(2ⁿ) in the worst case, since each element has two choices (include or exclude).
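The include/exclude recursion above translates directly into Python. A minimal sketch (0-indexed; names chosen for illustration):

```python
def sum_of_subsets(nums, target):
    """Return all subsets of nums (positive integers) summing exactly to target."""
    results = []
    chosen = []  # the 'subset' array: elements included so far

    def explore(index, current_sum):
        if current_sum == target:
            results.append(chosen[:])        # found a valid subset
            return
        if index == len(nums) or current_sum > target:
            return                           # no further exploration possible
        chosen.append(nums[index])           # include nums[index]
        explore(index + 1, current_sum + nums[index])
        chosen.pop()                         # backtrack: exclude nums[index]
        explore(index + 1, current_sum)

    explore(0, 0)
    return results

print(sum_of_subsets([10, 20, 30, 40], 60))  # [[10, 20, 30], [20, 40]]
```

The output matches the two valid subsets in the example above.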
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Knapsack Problem using Backtracking
Problem Statement:
Given:
• A set of n items, each with a weight and a profit,
• A knapsack with maximum capacity (W).
Find the maximum profit that can be obtained by selecting a subset of items
such that the total weight ≤ W.
Example:
Item Profit (P) Weight (W)
1 20 2
2 30 5
3 35 7
4 12 3
Knapsack Capacity = 10
Best selection → Items {1, 2, 4} → Total Profit = 62
Backtracking Concept
At each step (each item), we have two choices:
1. Include the item (if it fits in the remaining capacity).
2. Exclude the item and move to the next one.
We explore both possibilities recursively, and track the maximum profit obtained.
Pseudocode
Knapsack(index, currentWeight, currentProfit)
    if currentWeight ≤ maxWeight and currentProfit > maxProfit then
        maxProfit ← currentProfit   // Update best profit found so far
    if index > n then
        return                      // No more items to consider
    // Branch 1: Include the current item (if it fits)
    if currentWeight + weight[index] ≤ maxWeight then
        Knapsack(index + 1, currentWeight + weight[index], currentProfit + profit[index])
    // Branch 2: Exclude the current item
    Knapsack(index + 1, currentWeight, currentProfit)
Explanation of Variables
Variable Meaning
n Total number of items
profit[] Profit of each item
weight[] Weight of each item
maxWeight Capacity of the knapsack
index Current item being considered
currentProfit Profit accumulated so far
currentWeight Weight accumulated so far
maxProfit Best (maximum) profit found so far
Example Dry Run
Input:
n=4
profit[] = {20, 30, 35, 12}
weight[] = {2, 5, 7, 3}
maxWeight = 10
Step-by-step exploration:
1. Start at item 1, weight = 0, profit = 0
2. Include item 1 → weight = 2, profit = 20
3. Include item 2 → weight = 7, profit = 50
4. Exclude item 3 (since adding it exceeds 10) → try item 4
→ weight = 10, profit = 62
5. Backtrack → try other combinations (exclude item 1 or 2)
6. Keep track of maxProfit = 62.
Time Complexity
• O(2ⁿ) (each item is either included or excluded).
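The two-branch recursion can be sketched in Python as follows (0-indexed; names are illustrative, and this version only tracks the best profit, not which items achieve it):

```python
def knapsack(profits, weights, capacity):
    """Backtracking 0/1 knapsack: maximum profit within the weight capacity."""
    n = len(profits)
    best = 0

    def explore(index, weight, profit):
        nonlocal best
        if profit > best:
            best = profit            # update best profit found so far
        if index == n:
            return                   # no more items to consider
        # Branch 1: include item `index` (only if it fits)
        if weight + weights[index] <= capacity:
            explore(index + 1, weight + weights[index], profit + profits[index])
        # Branch 2: exclude item `index`
        explore(index + 1, weight, profit)

    explore(0, 0, 0)
    return best

print(knapsack([20, 30, 35, 12], [2, 5, 7, 3], 10))  # 62
```

For the example data this returns the profit 62 obtained by items {1, 2, 4}.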
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Hamiltonian Cycle Problem
Given a connected undirected graph, the Hamiltonian Cycle problem asks:
Can we find a closed path that:
• Visits every vertex exactly once, and
• Returns to the starting vertex?
If such a path exists → it’s called a Hamiltonian Cycle.
Example:
Graph:
Vertices = {1, 2, 3, 4}
Edges = {(1,2), (2,3), (3,4), (4,1), (1,3)}
One Hamiltonian Cycle: 1 → 2 → 3 → 4 → 1
Concept (Backtracking Idea)
We build a path starting from vertex 1:
• Add vertices one by one to the path.
• At each step, check if the new vertex is adjacent to the previous vertex and not already visited.
• If at any point no vertex can be added → backtrack (remove last vertex and try a new one).
• If all vertices are added and the last one connects back to the first → Hamiltonian Cycle found.
Pseudocode
HamiltonianCycle(vertex)
    if vertex > n then
        if graph[path[n]][path[1]] = 1 then
            print path[1...n], path[1]   // A valid Hamiltonian Cycle
        return
    for nextVertex ← 2 to n do
        if isSafe(nextVertex, vertex) then
            path[vertex] ← nextVertex
            HamiltonianCycle(vertex + 1)
            path[vertex] ← 0             // Backtrack
Safety Check Function
isSafe(nextVertex, vertex)
    // Check if there is an edge between the last placed vertex and nextVertex
    if graph[path[vertex - 1]][nextVertex] = 0 then
        return false
    // Check if nextVertex already exists in the path
    for i ← 1 to vertex - 1 do
        if path[i] = nextVertex then
            return false
    return true
Explanation of Variables
Variable Description
graph[][] Adjacency matrix representation of the graph
n Total number of vertices
path[] Array storing the current sequence of vertices in the path
vertex Current position in the path being filled
isSafe() Function that ensures valid and unique vertex placement
Initial Setup
Start:
path[1] ← 1 // Start cycle at vertex 1
HamiltonianCycle(2)
Example Dry Run
For a graph with vertices {1, 2, 3, 4} and edges
{(1,2), (2,3), (3,4), (4,1), (1,3)}:
1. path = [1, _, _, _]
2. Try vertex 2 → Safe → path = [1, 2, _, _]
3. Try vertex 3 → Safe → path = [1, 2, 3, _]
4. Try vertex 4 → Safe → path = [1, 2, 3, 4]
5. Check if 4 → 1 edge exists → Yes → Print 1 → 2 → 3 → 4 → 1
Time Complexity
• Worst Case: O(N!)
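A compact Python sketch of the same search (vertices renumbered 0..n−1, cycle fixed to start at vertex 0; names are illustrative):

```python
def hamiltonian_cycles(graph):
    """graph: adjacency matrix (list of lists). Returns all cycles starting at 0."""
    n = len(graph)
    path = [0]          # fix the starting vertex, as in the pseudocode
    cycles = []

    def explore(pos):
        if pos == n:
            if graph[path[-1]][path[0]]:        # edge back to start closes the cycle
                cycles.append(path + [path[0]])
            return
        for v in range(1, n):
            if graph[path[-1]][v] and v not in path:  # adjacent and unvisited
                path.append(v)
                explore(pos + 1)
                path.pop()                      # backtrack

    explore(1)
    return cycles

# Example graph: vertices {1,2,3,4} renumbered 0..3;
# edges (1,2),(2,3),(3,4),(4,1),(1,3) become (0,1),(1,2),(2,3),(3,0),(0,2)
g = [[0, 1, 1, 1],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [1, 0, 1, 0]]
print(hamiltonian_cycles(g))  # [[0, 1, 2, 3, 0], [0, 3, 2, 1, 0]]
```

The two cycles found are the example cycle 1 → 2 → 3 → 4 → 1 and its reversal.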
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Graph Coloring Problem using Backtracking
Given a graph with N vertices, assign colors to all vertices using at most M different colors,
so that no two adjacent vertices have the same color.
If such an assignment exists → the graph is said to be M-colorable.
Example
Graph:
Vertices = {1, 2, 3, 4}
Edges = {(1,2), (1,3), (2,3), (3,4)}
M = 3 (colors: Red, Green, Blue)
Possible coloring:
1 → Red, 2 → Green, 3 → Blue, 4 → Red
Concept (Backtracking Idea)
We assign colors one vertex at a time:
• Try each available color (1 to M).
• Before assigning, check if the color is safe (i.e., no adjacent vertex has the same color).
• If safe → assign color and move to the next vertex.
• If no color is possible → backtrack to the previous vertex and try a different color.
Pseudocode
GraphColoring(vertex)
    if vertex > N then
        print color[1...N]      // A valid coloring is found
        return
    for c ← 1 to M do
        if isSafe(vertex, c) then
            color[vertex] ← c
            GraphColoring(vertex + 1)
            color[vertex] ← 0   // Backtrack (remove color)
Safety Check Function
isSafe(vertex, c)
    for i ← 1 to N do
        if graph[vertex][i] = 1 and color[i] = c then
            return false   // adjacent vertex has same color
    return true            // safe to color
Explanation of Variables
Variable Meaning
graph[][] Adjacency matrix representation of the graph
N Number of vertices
M Number of available colors
color[] Array storing the assigned color for each vertex
vertex Current vertex being colored
isSafe() Checks if assigning color c to vertex is valid
Initial Setup
Start:
for i ← 1 to N do
color[i] ← 0 // no vertex is colored initially
GraphColoring(1)
Example Dry Run
Input:
N = 4, M = 3
Edges: (1,2), (1,3), (2,3), (3,4)
Step 1: Start coloring from vertex 1
Try color 1 → OK → color[1] = 1
Step 2: Vertex 2 → color 1 invalid (adjacent to 1), try color 2 → OK
Step 3: Vertex 3 → color 1 invalid (adjacent to 1), color 2 invalid (adjacent to 2), try color 3 → OK
Step 4: Vertex 4 → color 1 OK → color[4] = 1
Solution: [1, 2, 3, 1] → Valid coloring found.
Example Output
Valid Coloring Found:
Vertex 1 → Color 1
Vertex 2 → Color 2
Vertex 3 → Color 3
Vertex 4 → Color 1
If no valid coloring is found:
No possible coloring with M colors.
Time Complexity
• Worst Case: O(Mⁿ) (each vertex can try all M colors)
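The coloring search can be sketched in Python (0-indexed vertices; this variant stops at the first valid coloring rather than printing all of them):

```python
def graph_coloring(graph, m):
    """graph: adjacency matrix; m: number of colors. One valid coloring, or None."""
    n = len(graph)
    color = [0] * n   # 0 = uncolored; valid colors are 1..m

    def is_safe(v, c):
        # No neighbor of v may already carry color c
        return all(not (graph[v][i] and color[i] == c) for i in range(n))

    def explore(v):
        if v == n:
            return True              # every vertex colored
        for c in range(1, m + 1):
            if is_safe(v, c):
                color[v] = c
                if explore(v + 1):
                    return True
                color[v] = 0         # backtrack (remove color)
        return False

    return color[:] if explore(0) else None

# Example: edges (1,2),(1,3),(2,3),(3,4), vertices renumbered 0..3
g = [[0, 1, 1, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 0]]
print(graph_coloring(g, 3))  # [1, 2, 3, 1]
```

The result matches the dry run: colors [1, 2, 3, 1].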
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Breadth-First Search (BFS) Algorithm
Given a graph G(V, E) (either directed or undirected) and a starting vertex,
the goal is to visit all vertices reachable from the starting vertex in a breadth-wise (level-order) manner —
i.e., visit all neighbors first before moving to their neighbors.
Example Graph:
Vertices: A, B, C, D, E
Edges: (A,B), (A,C), (B,D), (C,E)
BFS Traversal (starting from A):
A→B→C→D→E
Concept (How BFS Works)
• BFS uses a queue to explore the graph level by level.
• Start from the given source vertex.
• Visit all its adjacent vertices, mark them as visited, and enqueue them.
• Dequeue the next vertex and repeat the process until the queue is empty.
Pseudocode
BFS(startVertex)
    create an empty queue Q
    mark all vertices as unvisited
    mark startVertex as visited
    enqueue(startVertex) into Q
    while Q is not empty do
        current ← dequeue(Q)
        print(current)              // Visit the vertex
        for each neighbor of current do
            if neighbor is not visited then
                mark neighbor as visited
                enqueue(neighbor) into Q
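The pseudocode maps directly onto Python's `collections.deque` as the queue. A minimal sketch using the example adjacency lists:

```python
from collections import deque

def bfs(graph, start):
    """graph: dict mapping vertex -> list of neighbors. Returns the visit order."""
    visited = {start}
    order = []
    q = deque([start])
    while q:
        current = q.popleft()        # dequeue
        order.append(current)        # visit the vertex
        for neighbor in graph[current]:
            if neighbor not in visited:
                visited.add(neighbor)
                q.append(neighbor)   # enqueue
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D', 'E']
```

This reproduces the level-order traversal A → B → C → D → E shown in the dry run below.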
Explanation of Variables
Variable Description
startVertex The vertex from which BFS starts
Q Queue used to store vertices to be explored next
visited[] Boolean array keeping track of visited vertices
current The vertex currently being processed
neighbor Adjacent vertex to current in the graph
Example Dry Run
Input:
Graph:
A: B, C
B: D
C: E
D: -
E: -
Start = A
Steps:
Step Queue Visited Output
Start [A] A
1 [B, C] A, B, C A
2 [C, D] A, B, C, D B
3 [D, E] A, B, C, D, E C
4 [E] A, B, C, D, E D
5 [] A, B, C, D, E E
Final BFS Order: A → B → C → D → E
Example Output
BFS Traversal starting from A:
A → B → C → D → E
Time Complexity
Operation Cost
Visiting all vertices O(V)
Checking all edges O(E)
Total O(V + E)
Key Points to Remember
• BFS is a level-order traversal for graphs.
• It uses a Queue (FIFO) for storing vertices to visit next.
• Works for both directed and undirected graphs.
• Often used for shortest path in unweighted graphs.
• Each vertex is enqueued and dequeued once only.
Applications of BFS
1. Finding shortest paths in unweighted graphs.
2. Checking connectivity of a graph.
3. Finding connected components in an undirected graph.
4. Used in web crawlers and AI search algorithms (level-by-level state-space search).
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Depth-First Search (DFS) Algorithm
Given a graph G(V, E) (directed or undirected) and a starting vertex,
the goal is to visit all vertices reachable from the start, by going as deep as possible along each path before
backtracking.
Example Graph:
Vertices: A, B, C, D, E
Edges: (A,B), (A,C), (B,D), (C,E)
DFS Traversal (starting from A):
A→B→D→C→E
Concept (How DFS Works)
• DFS explores depth-wise (go as far down a branch as possible).
• Uses a stack (either explicitly or via recursion).
• Start from a given vertex, mark it visited, and recursively visit all its unvisited neighbors.
• When no unvisited neighbors remain, backtrack to explore other branches.
Simplified Pseudocode (Recursive Version)
DFS(vertex)
    mark vertex as visited
    print(vertex)                   // Visit the vertex
    for each neighbor of vertex do
        if neighbor is not visited then
            DFS(neighbor)           // Recursive call
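The recursive pseudocode is almost line-for-line Python. A minimal sketch on the example graph:

```python
def dfs(graph, start, visited=None, order=None):
    """graph: dict mapping vertex -> list of neighbors. Returns the visit order."""
    if visited is None:
        visited, order = set(), []
    visited.add(start)
    order.append(start)              # visit the vertex
    for neighbor in graph[start]:
        if neighbor not in visited:
            dfs(graph, neighbor, visited, order)  # recursive call
    return order

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'C', 'E']
```

This reproduces the depth-first order A → B → D → C → E traced below.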
Explanation of Variables
Variable Description
vertex Current vertex being explored
neighbor Adjacent vertex of the current vertex
visited[] Boolean array to mark whether a vertex is already visited
graph[][] Adjacency list or matrix representation of the graph
Initialization
for each vertex v in graph do
visited[v] ← false
DFS(startVertex)
Example Dry Run
Input:
Graph:
A: B, C
B: D
C: E
D: -
E: -
Start = A
Steps:
Step Current Vertex Action Visited List Output
1 A Visit A, go to B A A
2 B Visit B, go to D A, B B
3 D Visit D (no more neighbors, backtrack) A, B, D D
4 C Backtrack to A → visit next neighbor C A, B, D, C C
5 E From C → visit E A, B, D, C, E E
Final DFS Order: A → B → D → C → E
Example Output
DFS Traversal starting from A:
A → B → D → C → E
Time Complexity
Operation Cost
Visit all vertices O(V)
Explore all edges O(E)
Total O(V + E)
Key Differences Between BFS and DFS
Feature BFS DFS
Data Structure Queue Stack / Recursion
Traversal Level-wise Depth-wise
Nature Iterative Recursive
Path Found Shortest (unweighted graphs) May not be shortest
Backtracking Not used Used
Key Points to Remember
• DFS is a depth-oriented traversal — go deep, then backtrack.
• It uses recursion or stack to remember previous vertices.
• Each vertex is visited exactly once.
• DFS is used to explore connected components, detect cycles, and solve path-finding problems.
• The order of visiting depends on the adjacency order of vertices.
Applications of DFS
1. Topological Sorting (in directed acyclic graphs).
2. Finding connected components.
3. Detecting cycles in a graph.
4. Solving maze and pathfinding problems.
5. Used in backtracking algorithms (like N-Queens, Hamiltonian Cycle, etc.).
Summary Table
Property DFS
Data Structure Stack / Recursion
Traversal Order Depth-first
Recursive Yes
Time Complexity O(V + E)
Space Complexity O(V)
Applications Connectivity, cycles, pathfinding, topological sort
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
NP-Hard and NP-Complete Problems
1. Introduction to Computational Complexity
When we design algorithms, we care about:
• How fast they run (time complexity)
• How much memory they use (space complexity)
But for many problems, even the best-known algorithms take a very long time (exponential time).
So, computer scientists classify problems into complexity classes like P, NP, NP-Complete, and NP-Hard.
2. Basic Concepts and Definitions
(a) Algorithm Efficiency
• A problem is said to be tractable if it can be solved in polynomial time (for example, O(n²), O(n³)).
• A problem is intractable if the best-known algorithm takes exponential time (for example, O(2ⁿ),
O(n!)).
(b) Class P
• Definition:
The class P contains all decision problems (problems with YES/NO answers) that can be solved in
polynomial time by a deterministic algorithm.
Examples:
o Searching in a sorted array using binary search → O(log n)
o Finding shortest paths in a graph (Dijkstra’s Algorithm) → O(V²)
(c) Class NP
• Definition:
The class NP (Nondeterministic Polynomial time) contains decision problems for which a given
solution can be verified in polynomial time.
In simple words:
o It might be hard to find the solution,
o But once someone gives you the solution, you can verify it quickly.
Examples:
o Hamiltonian Cycle Problem: Given a path, check if it forms a Hamiltonian cycle.
o Subset Sum Problem: Given a subset, check if the sum equals the target.
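The "hard to find, easy to verify" idea can be made concrete with a tiny certificate checker for Subset Sum. This is an illustrative sketch (names are chosen here, and for simplicity it ignores element multiplicity):

```python
def verify_subset_sum(candidate, universe, target):
    """Check a claimed subset-sum solution in linear (polynomial) time:
    every chosen element must come from the set, and the total must hit target."""
    return all(x in universe for x in candidate) and sum(candidate) == target

# Finding a valid subset may take exponential time, but checking one is fast:
print(verify_subset_sum([20, 40], [10, 20, 30, 40], 60))  # True
print(verify_subset_sum([10, 40], [10, 20, 30, 40], 60))  # False
```

The checker runs in time linear in the input size, which is exactly what membership in NP requires of a verifier.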
(d) Relation Between P and NP
• Every problem in P is also in NP (because if you can solve it fast, you can verify it fast).
But we don’t know whether every NP problem can also be solved quickly.
Open Question:
Is P = NP ?
(It’s still unsolved — one of the biggest questions in computer science.)
(e) Polynomial-Time Reduction
To compare two problems A and B:
• If we can transform problem A into problem B in polynomial time,
we write A ≤p B (A reduces to B).
Meaning:
If we can solve B efficiently, then we can also solve A efficiently.
3. NP-Complete Problems
(a) Definition
A problem is NP-Complete if:
1. It belongs to NP (its solution can be verified in polynomial time), and
2. Every problem in NP can be reduced to it in polynomial time.
These are the hardest problems in NP.
(b) Properties of NP-Complete Problems
• If any NP-Complete problem can be solved in polynomial time,
then every NP problem can also be solved in polynomial time.
⇒ That means P = NP.
• No polynomial-time algorithm is known for any NP-Complete problem (so far).
(c) Examples of NP-Complete Problems
1. Satisfiability Problem (SAT) – The first problem proved NP-Complete (Cook’s theorem).
2. 3-SAT Problem – Boolean formula with 3 literals per clause.
3. Hamiltonian Cycle Problem – Find a cycle visiting every vertex once.
4. Subset Sum Problem – Find subset with given sum.
5. Graph Coloring Problem – Color graph vertices using ≤ k colors, no adjacent vertices same color.
6. Travelling Salesman Problem (TSP) – Find minimum-cost cycle visiting all cities exactly once (decision
version).
4. NP-Hard Problems
(a) Definition
A problem is NP-Hard if every NP problem can be reduced to it in polynomial time,
but the NP-Hard problem does not need to be in NP (i.e., it may not be a decision problem).
In short:
NP-Hard problems are at least as hard as NP-Complete problems, or harder.
(b) Relationship Summary
Class Meaning Example
P Solvable in polynomial time Sorting, Shortest Path
NP Verifiable in polynomial time Subset Sum
NP-Complete In NP and as hard as any NP problem 3-SAT, TSP (decision)
NP-Hard As hard as NP-Complete but may not be in NP TSP (optimization)
5. NP-Hard Graph Problems (from the textbook)
(a) Hamiltonian Cycle Problem
• Find a simple cycle that visits each vertex exactly once.
• Decision version → NP-Complete.
• Optimization version → NP-Hard.
(b) Travelling Salesman Problem (TSP)
• Given a set of cities and distances, find the shortest route visiting each city exactly once and
returning to the start.
• Decision version (cost ≤ K) → NP-Complete.
• Optimization version → NP-Hard.
(c) Clique Problem
• Find a subset of vertices such that every pair of vertices is connected by an edge.
• Finding a clique of size k → NP-Complete.
(d) Vertex Cover Problem
• Find a subset of vertices that includes at least one endpoint of every edge.
• Decision version → NP-Complete.
(e) Graph Coloring Problem
• Assign minimum colors to vertices so that no two adjacent vertices share the same color.
• Decision version (colorable with ≤ k colors) → NP-Complete.
• Finding the minimum number of colors → NP-Hard.
6. How to Prove a Problem is NP-Complete
To show that a new problem X is NP-Complete:
1. Step 1: Show that X is in NP (its solution can be verified quickly).
2. Step 2: Choose a known NP-Complete problem Y.
3. Step 3: Show that Y ≤p X (i.e., Y reduces to X in polynomial time).
4. Step 4: Conclude that X is NP-Complete.
7. Importance of NP-Complete and NP-Hard Problems
• Helps classify problems as easy (P) or hard (NP-Complete / NP-Hard).
• Guides researchers to focus on approximation or heuristic algorithms instead of exact ones.
• Forms the foundation of computational theory and complexity analysis.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Introduction to Parallel Algorithms
Introduction
Modern computing tasks like AI, simulations, and data processing require high performance.
Traditional sequential algorithms process one step at a time — too slow for large data.
Parallel algorithms overcome this by:
• Splitting the task into smaller parts,
• Executing them simultaneously on multiple processors, and
• Combining results efficiently.
Basic Concepts of Parallel Computation
(a) Parallel Processing
• Performing multiple operations simultaneously to solve a problem faster.
• The goal is to reduce total computation time.
(b) Processor
• A single computing unit that executes operations.
• Parallel systems have multiple processors working together.
(c) Task Division
• The main problem is divided into subproblems, each handled by a different processor.
(d) Key Terms
Term Meaning
Speedup (S) How much faster the parallel algorithm runs compared to sequential: S = T_sequential / T_parallel
Efficiency (E) How well the processors are utilized: E = S / P, where P = number of processors
Cost (C) Total work done by all processors: C = P × T_parallel
Scalability Ability of an algorithm/system to maintain efficiency as the number of processors increases.
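With some assumed timings (illustrative numbers, not taken from the text), the measures compute as follows:

```python
# Assumed example: a job taking 64 s sequentially and 10 s on 8 processors
t_seq, t_par, p = 64.0, 10.0, 8

speedup = t_seq / t_par          # S = T_sequential / T_parallel
efficiency = speedup / p         # E = S / P
cost = p * t_par                 # C = P * T_parallel

print(speedup, efficiency, cost)  # 6.4 0.8 80.0
```

An efficiency of 0.8 means the 8 processors are, on average, doing useful work 80% of the time.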
Computational Models for Parallel Algorithms
Parallel systems are represented by theoretical models that describe how processors communicate and share
memory.
(a) Sequential Model (Reference Point)
• A single processor executes instructions one after another.
• Example: Traditional computers (Von Neumann architecture).
(b) Parallel Random Access Machine (PRAM) Model
Definition:
A theoretical model that assumes multiple processors working in synchronous steps, sharing a common
global memory.
Each processor:
• Has an ID (P₁, P₂, …, Pₙ)
• Executes instructions simultaneously
• Can read/write to shared memory in unit time
(c) Types of PRAM Models
Model Read Write Description
EREW Exclusive Exclusive No two processors read or write the same memory cell at once. (Most restrictive, simplest.)
CREW Concurrent Exclusive Many processors can read the same cell; only one can write.
ERCW Exclusive Concurrent Exclusive reads, but multiple processors may write (rarely used).
CRCW Concurrent Concurrent Multiple reads and writes allowed. (Most powerful, most complex.)
(d) Example: Parallel Sum using PRAM
Goal: Compute sum of n numbers using p processors.
1. Divide array into equal parts.
2. Each processor computes partial sum.
3. Combine results using parallel reduction.
⏱ Time Complexity: O(log n) using n/2 processors (parallel addition tree).
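The reduction tree can be simulated sequentially in Python: each pass of the loop below stands in for one synchronous PRAM step in which n/2 "processors" each add a disjoint pair. This models only the step count, not real parallel hardware:

```python
def parallel_sum(values):
    """Simulate a PRAM-style reduction tree for summing a list of numbers.
    Each while-iteration corresponds to one parallel step: every active
    'processor' adds one disjoint pair, halving the problem size."""
    data = list(values)
    while len(data) > 1:
        if len(data) % 2:              # pad with the additive identity
            data.append(0)
        # one synchronous step: pairwise sums computed "in parallel"
        data = [data[i] + data[i + 1] for i in range(0, len(data), 2)]
    return data[0]

print(parallel_sum(range(1, 9)))  # 36, reached in log2(8) = 3 reduction steps
```

The loop body runs ⌈log₂ n⌉ times, matching the O(log n) bound stated above.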
Fundamental Parallel Techniques
Parallel algorithm design is based on common techniques:
Technique Description Example
Decomposition / Partitioning Divide the problem into independent tasks. Matrix addition, vector sum
Pipelining Overlap operations for speed. Instruction pipelines in CPUs
Divide and Conquer (Parallelized) Subproblems solved concurrently. Merge sort, quicksort
Data Parallelism Apply the same operation to different parts of the data. Array addition
Task Parallelism Different tasks executed simultaneously. Compiler optimization, simulation
Parallel Algorithm Design Steps
1. Problem Decomposition – Split into independent parts.
2. Task Assignment – Allocate to available processors.
3. Communication Setup – Define data sharing/synchronization.
4. Computation – Perform operations in parallel.
5. Result Combination – Merge outputs into final result.
Interconnection Networks
For multiple processors to communicate, they are connected by network topologies.
Two common static models:
• MESH Network
• HYPERCUBE Network
MESH Network Model
(a) Structure
• Processors arranged in a grid (2D array structure).
• Each processor connected to its immediate neighbors (up, down, left, right).
Example – 3×3 Mesh: nine processors arranged in three rows and three columns, each linked to its immediate grid neighbors (diagram omitted).
(b) Properties
Property Description
Degree 2 to 4 (depends on position)
Diameter (r + c - 2) for r×c mesh
Advantages Simple, easy to implement
Disadvantages Long communication paths for distant nodes
(c) Applications
• Matrix operations
• Image processing
• Grid-based scientific simulations
Hypercube Network Model
(a) Structure
• Processors are arranged in a binary cube structure.
• Each processor represented by a binary number.
• Two processors connected if their binary addresses differ by exactly one bit.
Example – 3D Hypercube (8 nodes): processors labeled 000 to 111; for instance, node 000 connects to 001, 010, and 100, since each pair differs in exactly one bit (diagram omitted).
(b) Properties
Property Description
Number of Processors 2ⁿ (where n = dimension)
Degree n (each processor connected to n others)
Diameter n (max distance between processors)
Bisection Width 2ⁿ⁻¹ (half cube separation connections)
Advantages Fast communication, high connectivity
Disadvantages Complex hardware for large n
(c) Applications
• Parallel matrix computation
• Sorting networks
• Data routing and searching in parallel systems
Comparison of MESH vs HYPERCUBE
Feature MESH HYPERCUBE
Structure 2D Grid n-dimensional Cube
Processors r×c 2ⁿ
Connections per Node ≤4 n
Communication Speed Slower Faster
Ease of Implementation Simple Complex
Best Use Case Matrix-based problems Recursive / divide-and-conquer algorithms
Example Parallel Algorithms
Problem Parallel Technique Model Used
Sum of N numbers Reduction PRAM
Matrix addition Data parallelism MESH
Matrix multiplication Divide and Conquer MESH
Sorting (bitonic sort) Recursive parallel merge HYPERCUBE
Searching Divide and Conquer PRAM / Hypercube
Performance Measures of Parallel Algorithms
Measure Formula Meaning
Speedup (S) Tₛ / Tₚ How much faster parallel version is
Efficiency (E) S / P Processor utilization
Cost (C) P × Tₚ Total effort of computation
Scalability - How performance improves with more processors
Advantages of Parallel Algorithms
✓ Faster execution of large computations
✓ Better utilization of CPU resources
✓ Enables solving previously intractable problems
✓ Energy-efficient (less idle time)
✓ Ideal for large data (AI, ML, scientific computing)
Challenges in Parallel Algorithms
• Synchronization overhead (waiting for others to finish)
• Communication delay between processors
• Load balancing issues (some processors idle)
• Debugging and design complexity
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=