
Bmi 401-Bmsda 403-Design and Analysis of Algorithms - Lec 5

The document discusses advanced algorithmic techniques focusing on NP-completeness, approximation algorithms, and randomized algorithms. It explains NP problems, NP-completeness, and provides examples such as the Traveling Salesman Problem and the Knapsack Problem, detailing their definitions, types, solution methods, and applications. Additionally, it covers reductions between problems to demonstrate NP-completeness and introduces approximation algorithms for optimization problems.


BMSDA403-BMI401 DESIGN AND ANALYSIS OF ALGORITHMS

WEEK 3-1: ADVANCED ALGORITHMIC TECHNIQUES

1. NP-completeness and reductions


2. Approximation algorithms (e.g., knapsack problem, traveling
salesman problem)
3. Randomized algorithms (e.g., quick sort, randomized selection)

NP-COMPLETENESS AND REDUCTIONS

What is an NP Problem?

An NP problem is a decision problem that can be solved in polynomial time by a nondeterministic Turing machine. Equivalently, an NP problem is a problem where:

1. Given a proposed solution, we can verify it in polynomial time: we can check whether a candidate solution (a certificate) is correct in time polynomial in the size of the input.

2. Candidate solutions are short: each certificate has size polynomial in the input, so there are at most exponentially many candidates to search through.

Examples of NP Problems

1. The Traveling Salesman Problem (decision version): Given a set of cities, their pairwise distances, and a bound k, decide whether there is a tour of length at most k that visits each city exactly once and returns to the starting city.

2. The Knapsack Problem (decision version): Given a set of items, each with a weight and a value, a knapsack with a limited capacity, and a target value V, decide whether some subset of items fits in the knapsack with total value at least V.

3. The Boolean Satisfiability Problem (SAT): Given a Boolean formula in conjunctive normal form, determine whether there exists an assignment of values to the variables that makes the formula true.
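Point 3 illustrates polynomial-time verification well: checking a candidate assignment against a CNF formula takes only time linear in the formula's size. A minimal sketch (our own encoding, not from the lecture: each literal is a variable index, negative for negation):

```python
def verify_sat(clauses, assignment):
    """Check a candidate assignment in time linear in the formula size."""
    for clause in clauses:
        # A clause is satisfied if at least one of its literals is true.
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause is unsatisfied
    return True

# (x1 or not x2) and (x2 or x3)
formula = [[1, -2], [2, 3]]
print(verify_sat(formula, {1: True, 2: True, 3: False}))  # True
```

Verification is cheap; it is *finding* a satisfying assignment, among up to 2^n candidates, that is believed to be hard.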
What is NP-Completeness?

NP-completeness is a concept in computational complexity theory that refers to the hardest problems in NP. A problem is NP-complete if:

1. It is an NP problem: The problem can be solved in polynomial time by a nondeterministic Turing machine.

2. Every NP problem can be reduced to it: Every NP problem can be transformed into
an instance of the NP-complete problem in polynomial time.

Examples of NP-Complete Problems

1. The Traveling Salesman Problem (decision version): TSP is in NP (a proposed tour can be checked in polynomial time), and the NP-complete Hamiltonian Cycle Problem reduces to it in polynomial time, as proved below.

2. The Knapsack Problem (decision version): The decision version of knapsack is in NP, and the NP-complete Subset Sum problem reduces to it in polynomial time.

3. The Boolean Satisfiability Problem (SAT): SAT was the first problem proven NP-complete (the Cook-Levin theorem): it is in NP, and every problem in NP can be reduced to it in polynomial time.

THE TRAVELING SALESMAN PROBLEM

What is the Traveling Salesman Problem?

The Traveling Salesman Problem (TSP) is a classic problem in computer science and
operations research that involves finding the shortest possible tour that visits a set of
cities and returns to the starting city.

Problem Statement

Given a set of n cities and the pairwise distances between them, find the shortest possible tour that visits each city exactly once and returns to the starting city.

Example

Suppose we have 5 cities: A, B, C, D, and E. The pairwise distances between the cities
are:

    A   B   C   D   E
A   0  10  15  20  25
B  10   0  35  30  20
C  15  35   0  25  30
D  20  30  25   0  15
E  25  20  30  15   0

The goal is to find the shortest possible tour that visits each city exactly once and
returns to the starting city.
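Because this example is tiny, the optimal tour can be found by checking every possibility. A throwaway brute-force sketch (our own code, not part of the lecture) that tries all (n-1)! tours starting at A:

```python
from itertools import permutations

# Distance matrix for cities A, B, C, D, E from the table above.
dist = [
    [0, 10, 15, 20, 25],   # A
    [10, 0, 35, 30, 20],   # B
    [15, 35, 0, 25, 30],   # C
    [20, 30, 25, 0, 15],   # D
    [25, 20, 30, 15, 0],   # E
]

def brute_force_tsp(dist):
    """Try every tour that starts and ends at city 0: O(n!) time."""
    n = len(dist)
    best_len, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
        if length < best_len:
            best_len, best_tour = length, tour
    return best_len, best_tour

length, tour = brute_force_tsp(dist)
print(length, tour)  # 85 (0, 1, 4, 3, 2, 0), i.e. the tour A-B-E-D-C-A
```

The optimal tour here is A-B-E-D-C-A with length 85. The factorial number of candidate tours is exactly why exhaustive search stops scaling beyond small instances.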

Types of TSP

There are several types of TSP:

1. Symmetric TSP: In this type of TSP, the distance between two cities is the same in
both directions.

2. Asymmetric TSP: In this type of TSP, the distance between two cities may be
different in different directions.

3. Metric TSP: In this type of TSP, the distance between two cities satisfies the triangle
inequality.

Solution Methods

There are several solution methods for TSP:

1. Exact Methods: These methods involve solving the TSP exactly using techniques
such as branch and bound, cutting plane, or dynamic programming.

2. Heuristics: These methods involve using heuristics such as the nearest neighbour
algorithm, the 2-opt algorithm, or the Christofides algorithm to find a good but not
necessarily optimal solution.

3. Metaheuristics: These methods involve using metaheuristics such as simulated annealing, genetic algorithms, or ant colony optimization to find a good solution.
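The nearest neighbour heuristic from method 2 can be sketched in a few lines. This is our own minimal illustration (it assumes a symmetric distance matrix indexed 0..n-1), not the lecture's code:

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP heuristic: repeatedly move to the closest unvisited city.
    Runs in O(n^2) time but offers no optimality guarantee."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist[current][c])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    tour.append(start)  # close the tour by returning to the start
    length = sum(dist[tour[i]][tour[i + 1]] for i in range(n))
    return tour, length

# The 5-city example from earlier in this lecture.
dist = [
    [0, 10, 15, 20, 25],
    [10, 0, 35, 30, 20],
    [15, 35, 0, 25, 30],
    [20, 30, 25, 0, 15],
    [25, 20, 30, 15, 0],
]
tour, length = nearest_neighbour_tour(dist)
print(tour, length)  # [0, 1, 4, 3, 2, 0] 85
```

On this instance the heuristic happens to find the optimal tour of length 85; in general, nearest neighbour can produce tours far from optimal.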

Applications

TSP has many applications in:

1. Logistics: TSP is used in logistics to optimize the route of delivery trucks.

2. Transportation: TSP is used in transportation to optimize the route of buses, trains, and airplanes.

3. Telecommunications: TSP is used in telecommunications to optimize the route of fibre optic cables.

4. Robotics: TSP is used in robotics to optimize the route of robots.

Complexity

TSP is an NP-hard problem: no polynomial-time algorithm is known for it, and the running time of exact algorithms grows rapidly (in the worst case, exponentially) as the number of cities increases.

Conclusion

TSP is a classic problem in computer science and operations research that involves
finding the shortest possible tour that visits a set of cities and returns to the starting city.
TSP has many applications in logistics, transportation, telecommunications, and
robotics. While there are several solution methods for TSP, including exact methods,
heuristics, and metaheuristics, TSP remains an NP-hard problem.

Prove that the Traveling Salesman Problem is NP-Complete

Problem Description

The Traveling Salesman Problem (TSP) is a classic problem in computer science and operations research. Given a set of n cities, the pairwise distances between them, and a bound k, the decision version of TSP asks whether there is a tour of length at most k that visits each city exactly once and returns to the starting city.

First, TSP is in NP: given a proposed tour, we can check in polynomial time that it visits every city exactly once and that its total length is at most k.

Known NP-Complete Problem

We will reduce the Hamiltonian Cycle Problem (HCP) to TSP. The HCP is a known NP-complete problem: given a graph, decide whether there is a cycle that visits each vertex exactly once.

Reduction

Given an instance of the HCP (a graph G with n vertices), we can construct an instance of the TSP in polynomial time as follows:

1. Create a city for each of the n vertices of G.

2. For each pair of cities, set the distance to 1 if the corresponding edge exists in G, and to 2 otherwise (any value greater than 1 works).

3. Ask whether the resulting TSP instance has a tour of length at most n.
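The construction step can be sketched in code. This is our own hypothetical helper, not from the lecture; it builds the complete distance matrix from a graph given as an edge list, using a larger weight for non-edges (any value greater than 1, or "infinity", works for the argument):

```python
def hcp_to_tsp(n, edges, non_edge=2):
    """Build a complete TSP distance matrix from an HCP graph.
    Graph edges get distance 1; every other pair gets a larger distance."""
    dist = [[non_edge] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0
    for u, v in edges:
        dist[u][v] = dist[v][u] = 1
    return dist

# The 4-cycle 0-1-2-3-0 has a Hamiltonian cycle, so the TSP instance
# it produces has a tour of length exactly 4.
dist = hcp_to_tsp(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(dist[0][1], dist[0][2])  # 1 2
```

The transformation clearly runs in polynomial time: it only fills an n x n matrix.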

Correctness of the Reduction

We need to show that:

1. If the HCP instance has a Hamiltonian cycle, then the TSP instance has a tour of length at most n.

2. If the TSP instance has a tour of length at most n, then the HCP instance has a Hamiltonian cycle.

(1) HCP => TSP

If the HCP instance has a Hamiltonian cycle, then we can construct a tour in the TSP instance by following the cycle. Each edge of the cycle exists in the graph and therefore has weight 1, so the total length of the tour is exactly n.

(2) TSP => HCP

If the TSP instance has a tour of length at most n, note that the tour uses exactly n edges and every edge has weight at least 1. Hence every edge in the tour must have weight exactly 1, which means it is an edge of the original graph. Following the tour therefore gives a cycle that visits each vertex exactly once using only edges of the graph, i.e. a Hamiltonian cycle.

Conclusion

We have shown that the decision version of the Traveling Salesman Problem (TSP) is NP-complete: it is in NP, and the NP-complete Hamiltonian Cycle Problem (HCP) reduces to it in polynomial time. The reduction is correct because we have shown that:

1. If the HCP instance has a Hamiltonian cycle, then the TSP instance has a tour of length at most n.
2. If the TSP instance has a tour of length at most n, then the HCP instance has a Hamiltonian cycle.

Therefore, TSP is NP-complete.

KNAPSACK PROBLEM

What Is the Knapsack Problem?

The Knapsack Problem is a classic problem in computer science and operations research that involves finding the optimal way to pack a set of items of different sizes and values into a knapsack of limited capacity.

Problem Statement

Given a set of n items, each with a weight wi and a value vi, and a knapsack with a
capacity W, determine the subset of items to include in the knapsack to maximize the
total value while not exceeding the knapsack capacity.

Example

Suppose we have 5 items, each with a weight and a value:

Item  Weight  Value
  1     2      10
  2     3      20
  3     1       5
  4     4      30
  5     2      15

The knapsack has a capacity of 10. We want to determine the subset of items to include
in the knapsack to maximize the total value while not exceeding the knapsack capacity.
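With only 5 items there are just 2^5 = 32 subsets, so this small example can be settled by exhaustive enumeration. A throwaway sketch (our own code, not the recommended method):

```python
from itertools import combinations

weights = [2, 3, 1, 4, 2]
values = [10, 20, 5, 30, 15]
W = 10

# Enumerate every subset of items and keep the best feasible one.
best = 0
for r in range(len(weights) + 1):
    for subset in combinations(range(len(weights)), r):
        if sum(weights[i] for i in subset) <= W:
            best = max(best, sum(values[i] for i in subset))
print(best)  # 70
```

The optimum is 70, achieved by items 2, 3, 4 and 5 (weights 3+1+4+2 = 10, values 20+5+30+15 = 70). Brute force is fine for 5 items but grows as 2^n.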

Types of Knapsack Problems

There are several types of knapsack problems:

1. 0/1 Knapsack Problem: This is the most common type, where each item must either be included whole or excluded from the knapsack.

2. Fractional Knapsack Problem: In this type, fractions of an item may be placed in the knapsack.

3. Multi-Knapsack Problem: In this type, there are multiple knapsacks, and each item can be placed in at most one of them.

Solution Methods

There are several solution methods for the knapsack problem:

1. Dynamic Programming: This is a popular method for solving the knapsack problem,
which involves breaking down the problem into smaller subproblems and solving each
subproblem only once.

2. Greedy Algorithm: This method involves selecting the item with the highest value-to-
weight ratio at each step, until the knapsack is full.

3. Branch and Bound: This method involves using a tree search to systematically explore the space of possible solutions, pruning branches that cannot improve on the best solution found so far.
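The greedy value-to-weight rule from method 2 is provably optimal for the fractional variant of the problem (though not for 0/1). A minimal sketch, with the function name our own:

```python
def fractional_knapsack(weights, values, W):
    """Greedy by value-to-weight ratio; optimal for the fractional variant."""
    items = sorted(zip(weights, values), key=lambda wv: wv[1] / wv[0], reverse=True)
    total, remaining = 0.0, W
    for w, v in items:
        if remaining <= 0:
            break
        take = min(w, remaining)     # take the whole item, or a final fraction
        total += v * (take / w)
        remaining -= take
    return total

print(fractional_knapsack([2, 3, 1, 4, 2], [10, 20, 5, 30, 15], 10))  # 70.0
```

On the 5-item example above the greedy fractional answer is 70.0; for 0/1 knapsack the same greedy rule can be arbitrarily bad, which is why the dynamic programming solution below is used instead.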

Example Solution Using Dynamic Programming

Here's a solution to the 0/1 knapsack problem:

Algorithm Description

The algorithm we will use to solve the 0/1 knapsack problem is a dynamic programming
algorithm. The algorithm works by building up a table of solutions to subproblems,
where each subproblem is a smaller instance of the knapsack problem.

Here's a step-by-step description of the algorithm:

1. Initialize the table: Create a table dp of size (n+1) x (W+1), where n is the number of
items and W is the capacity of the knapsack. Initialize the table with zeros.

2. Fill in the table: Iterate over each item i from 1 to n. For each item i, iterate over each possible weight w from 1 to W. If the weight of item i is less than or equal to w, then consider including item i in the knapsack. The maximum value that can be obtained by including item i is dp[i-1][w-wi] + vi, where wi is the weight of item i and vi is the value of item i. If the weight of item i is greater than w, then item i cannot be included in the knapsack, and the maximum value that can be obtained is dp[i-1][w].

3. Return the solution: The maximum value that can be obtained by filling the knapsack
is stored in dp[n][W]. Return this value.

Proof of Correctness

To prove the correctness of the algorithm, we need to show that the algorithm correctly
computes the maximum value that can be obtained by filling the knapsack.

We can prove this by induction on the number of items.

Base case: If there are zero items, the maximum value obtainable is 0, which is exactly what the initialized row 0 of the table stores.

Inductive step: Suppose the algorithm correctly computes the maximum value that
can be obtained by filling the knapsack for i-1 items. We need to show that the algorithm
correctly computes the maximum value that can be obtained by filling the knapsack for
i items.

If the weight of item i is less than or equal to w, then the algorithm considers including item i in the knapsack. The maximum value that can be obtained by including item i is dp[i-1][w-wi] + vi, which is correct by the inductive hypothesis.

If the weight of item i is greater than w, then item i cannot be included in the knapsack, and the maximum value that can be obtained is dp[i-1][w], which is correct by the inductive hypothesis.

Therefore, the algorithm correctly computes the maximum value that can be obtained
by filling the knapsack.

Time Complexity
The time complexity of the algorithm is O(nW), where n is the number of items and W is
the capacity of the knapsack.

This is because the algorithm iterates over each item i from 1 to n, and for each item i, it
iterates over each possible weight w from 1 to W.

Space Complexity

The space complexity of the algorithm is O(nW), where n is the number of items and W
is the capacity of the knapsack.

This is because the algorithm uses a table dp of size (n+1) x (W+1) to store the solutions
to subproblems.

Here's the code for the algorithm:

def knapsack(n, W, weights, values):
    dp = [[0 for _ in range(W + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, W + 1):
            if weights[i-1] <= w:
                dp[i][w] = max(dp[i-1][w], dp[i-1][w - weights[i-1]] + values[i-1])
            else:
                dp[i][w] = dp[i-1][w]
    return dp[n][W]

# Example usage
n = 5
W = 10
weights = [2, 3, 1, 4, 2]
values = [10, 20, 5, 30, 15]

max_value = knapsack(n, W, weights, values)
print("Maximum value:", max_value)  # Maximum value: 70

This code solves the 0/1 knapsack problem using dynamic programming. It initializes a
table dp to store the solutions to subproblems and then fills in the table using a nested
loop. The maximum value that can be obtained by filling the knapsack is stored in
dp[n][W], where n is the number of items and W is the capacity of the knapsack.

Complexity

The decision version of the knapsack problem is NP-complete. Note that the O(nW) dynamic programming algorithm above does not contradict this: its running time is pseudo-polynomial, because it is polynomial in the numeric value of W rather than in the number of bits needed to write W.

Applications

The knapsack problem has many applications in:

1. Resource Allocation: The knapsack problem can be used to allocate resources such
as memory, CPU time, and bandwidth.

2. Financial Portfolio Optimization: The knapsack problem can be used to optimize financial portfolios by selecting the optimal subset of assets to include in the portfolio.

3. Logistics and Supply Chain Management: The knapsack problem can be used to
optimize logistics and supply chain management by selecting the optimal subset of
items to include in a shipment.

Conclusion

The knapsack problem is a classic problem in computer science and operations research that involves finding the optimal way to pack a set of items of different sizes and values into a knapsack of limited capacity. The problem has many applications in resource allocation, financial portfolio optimization, and logistics and supply chain management.

REDUCTIONS

A reduction is a way to transform one problem into another. In the context of NP-completeness, a polynomial-time reduction from problem A to problem B transforms any instance of A into an instance of B, in polynomial time, so that the two instances have the same answer. Such a reduction shows that B is at least as hard as A. To prove that a new problem B is NP-complete, we therefore show that B is in NP and reduce a known NP-complete problem A to it.

The proof in the previous section follows exactly this pattern: it reduces the Hamiltonian Cycle Problem (a known NP-complete problem) to TSP:

1. Create a city for each vertex of the graph.

2. Assign distances: distance 1 between cities joined by an edge of the graph, and a larger distance between all other pairs.

3. Ask the decision question: is there a tour of length at most n?

The reduction shows that if we could solve TSP in polynomial time, we could also solve the Hamiltonian Cycle Problem in polynomial time.

Conclusion

NP problems and NP-completeness are fundamental concepts in computational complexity theory. NP problems are decision problems that can be solved in polynomial time by a nondeterministic Turing machine, while NP-completeness refers to the hardest problems in NP.

Reductions are a way to transform one problem into another problem and are used to
show that a problem is NP-complete.
Understanding NP problems and NP-completeness is essential for designing efficient
algorithms and understanding the limitations of computation.

APPROXIMATION ALGORITHMS

What are Approximation Algorithms?

Approximation algorithms are algorithms that find near-optimal solutions to optimization problems. They are used when computing the optimal solution exactly is too difficult or time-consuming.

Examples of Approximation Algorithms

1. Knapsack Problem: The knapsack problem asks for the subset of items, each with a weight and a value, that maximizes total value within a knapsack of limited capacity. A simple greedy algorithm that considers items by value-to-weight ratio, together with the single most valuable item, achieves at least half the optimal value; more refined schemes (an FPTAS) get arbitrarily close to optimal.

2. Traveling Salesman Problem: The traveling salesman problem asks for the shortest possible tour that visits each city exactly once and returns to the starting city. For the metric case (distances satisfy the triangle inequality), the Christofides algorithm finds a tour at most 1.5 times the length of the optimal tour.
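A concrete example for the knapsack case: taking items greedily by value-to-weight ratio and comparing against the single most valuable item gives a classic 1/2-approximation, assuming each item fits in the knapsack by itself. A sketch, with the naming our own:

```python
def knapsack_half_approx(weights, values, W):
    """Greedy 0/1 knapsack: max(greedy-by-ratio packing, best single item).
    Guarantees at least half the optimal value when each item fits alone."""
    order = sorted(range(len(weights)), key=lambda i: values[i] / weights[i], reverse=True)
    total, remaining = 0, W
    for i in order:
        if weights[i] <= remaining:   # 0/1 rule: take the whole item or skip it
            total += values[i]
            remaining -= weights[i]
    best_single = max((v for w, v in zip(weights, values) if w <= W), default=0)
    return max(total, best_single)

print(knapsack_half_approx([2, 3, 1, 4, 2], [10, 20, 5, 30, 15], 10))  # 70
```

On the lecture's 5-item example this returns 70, which happens to be optimal; the guarantee in general is only that the answer is at least half of the optimum.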

RANDOMIZED ALGORITHMS

What are Randomized Algorithms?

Randomized algorithms are algorithms that use random choices during their execution. Randomness is often used to guarantee good expected performance regardless of the input (as in quick sort), or to find good solutions quickly when exact deterministic methods are too slow.

Examples of Randomized Algorithms

1. Quick Sort: Quick sort is a sorting algorithm that uses randomness to select a pivot
element. The pivot element is used to partition the array, and the algorithm is
recursively applied to the subarrays.

2. Randomized Selection: Randomized selection (quickselect) finds the k-th smallest element in an unsorted array. The algorithm uses randomness to select a pivot element, partitions the array around it, and recurses only into the side that contains the k-th element.
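Randomized selection can be sketched compactly. The random pivot is what makes the expected running time O(n) on every input; this is our own minimal illustration, not the lecture's code:

```python
import random

def randomized_select(arr, k):
    """Return the k-th smallest element (1-indexed) of arr.
    A random pivot gives expected linear time on any input."""
    pivot = random.choice(arr)
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    if k <= len(less):
        return randomized_select(less, k)          # answer is left of the pivot
    if k <= len(less) + len(equal):
        return pivot                               # the pivot itself is the answer
    greater = [x for x in arr if x > pivot]
    return randomized_select(greater, k - len(less) - len(equal))

print(randomized_select([7, 2, 9, 4, 1], 3))  # 4
```

Unlike quick sort, only one side of each partition is processed, which is why the expected cost drops from O(n log n) to O(n).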
Key Concepts

1. NP-Completeness: NP-completeness is a concept in computational complexity theory that refers to the hardest problems in NP.

2. Reductions: Reductions are a way to show that one problem is at least as hard as another problem.

3. Approximation Algorithms: Approximation algorithms are algorithms that find near-optimal solutions to optimization problems.

4. Randomized Algorithms: Randomized algorithms are algorithms that use randomness to make decisions.

Applications

1. Cryptography: Cryptography relies on problems that are easy to state but believed to be computationally hard to solve, so computational complexity theory underpins the security of encryption schemes.

2. Optimization: Approximation algorithms are used in optimization to find near-optimal solutions to complex problems.

3. Machine Learning: Randomized algorithms are used in machine learning to train models and make predictions.
