
BLG 336E

Analysis of Algorithms II
Lecture 13:
NP-Completeness and Review

Basic reduction strategies

1. Reduction by simple equivalence.


2. Reduction from special case to general case.
3. Reduction by encoding with gadgets.

1. Claim. VERTEX-COVER ≤P INDEPENDENT-SET.

Pf. We show S is an independent set iff V − S is a vertex cover.
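A quick sanity check of this equivalence in Python (a sketch; the 4-cycle example graph is my own):

```python
from itertools import combinations

# Sanity-check the equivalence behind the reduction on a 4-cycle:
# S is an independent set  <=>  V - S is a vertex cover.
V = {1, 2, 3, 4}
E = [(1, 2), (2, 3), (3, 4), (4, 1)]

def is_independent_set(S):
    return not any(u in S and v in S for u, v in E)

def is_vertex_cover(C):
    return all(u in C or v in C for u, v in E)

for r in range(len(V) + 1):
    for S in map(set, combinations(V, r)):
        assert is_independent_set(S) == is_vertex_cover(V - S)
print("equivalence holds for every subset S")
```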

2. Claim. VERTEX-COVER ≤P SET-COVER.

Pf. Given a VERTEX-COVER instance G = (V, E), k, we construct a SET-COVER instance
whose size equals the size of the vertex cover instance.

3. Claim. 3-SAT ≤P INDEPENDENT-SET.

Pf. Given an instance Φ of 3-SAT, we construct an instance (G, k) of
INDEPENDENT-SET that has an independent set of size k iff Φ is satisfiable.
Polynomial-Time Reductions

[Karp's reduction tree (Dick Karp, 1972; 1985 Turing Award). An arrow X → Y means X ≤P Y.]

constraint satisfaction → 3-SAT
  3-SAT → INDEPENDENT-SET → VERTEX-COVER → SET-COVER   (packing and covering)
  3-SAT → DIR-HAM-CYCLE → HAM-CYCLE → TSP              (sequencing)
  3-SAT → GRAPH 3-COLOR → PLANAR 3-COLOR               (partitioning)
  3-SAT → SUBSET-SUM → SCHEDULING                      (numerical)
Definition of P

P. Decision problems for which there is a poly-time algorithm.

Problem         Description                     Algorithm               Yes instance       No instance
MULTIPLE        Is x a multiple of y?           Grade-school division   51, 17             51, 16
RELPRIME        Are x and y relatively prime?   Euclid (300 BCE)        34, 39             34, 51
PRIMES          Is x prime?                     AKS (2002)              53                 51
EDIT-DISTANCE   Is the edit distance between    Dynamic programming     niether, neither   acgggt, ttttta
                x and y less than 5?
LSOLVE          Is there a vector x that        Gauss-Edmonds           (see below)        (see below)
                satisfies Ax = b?               elimination

LSOLVE yes instance: A = [ 0 1 1 ; 2 4 −2 ; 0 3 15 ],  b = ( 4, 2, 36 )
LSOLVE no instance:  A = [ 1 0 0 ; 1 1 1 ; 0 1 1 ],    b = ( 1, 1, 1 )
NP

Certification algorithm intuition.

◼ Certifier views things from "managerial" viewpoint.


◼ Certifier doesn't determine whether s ∈ X on its own;
rather, it checks a proposed proof t that s ∈ X.

Def. Algorithm C(s, t) is a certifier for problem X if for every string s, s ∈ X iff there
exists a string t such that C(s, t) = yes.

"certificate" or "witness"

NP. Decision problems for which there exists a poly-time certifier.

C(s, t) is a poly-time algorithm and
|t| ≤ p(|s|) for some polynomial p(·).

Remark. NP stands for nondeterministic polynomial-time.

Certifiers and Certificates: 3-Satisfiability

SAT. Given a CNF formula Φ, is there a satisfying assignment?

Certificate. An assignment of truth values to the n boolean variables.

Certifier. Check that each clause in Φ has at least one true literal.

Ex.
Φ = ( ¬x1 ∨ x2 ∨ x3 ) ∧ ( x1 ∨ ¬x2 ∨ x3 ) ∧ ( x1 ∨ x2 ∨ x4 ) ∧ ( ¬x1 ∨ ¬x3 ∨ ¬x4 )   ← instance s

x1 = 1, x2 = 1, x3 = 0, x4 = 1   ← certificate t

Conclusion. SAT is in NP.
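
A minimal sketch of such a certifier (the clause encoding is an assumption: each clause is a list of nonzero integers, i for xi and −i for ¬xi):

```python
# A certifier for SAT: assumed DIMACS-style encoding, where a formula
# is a list of clauses and each clause is a list of nonzero integers
# (i means x_i, -i means NOT x_i).
def sat_certifier(clauses, assignment):
    """Return yes (True) iff the proposed certificate `assignment`
    (dict: variable index -> bool) makes every clause true."""
    for clause in clauses:
        if not any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            return False  # this clause has no true literal
    return True

# The slide's instance and certificate:
phi = [[-1, 2, 3], [1, -2, 3], [1, 2, 4], [-1, -3, -4]]
t = {1: True, 2: True, 3: False, 4: True}
print(sat_certifier(phi, t))  # True: t witnesses that phi is satisfiable
```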

Certifiers and Certificates: Hamiltonian Cycle

HAM-CYCLE. Given an undirected graph G = (V, E), does there exist a
simple cycle C that visits every node?

Certificate. A permutation of the n nodes.

Certifier. Check that the permutation contains each node in V exactly
once, and that there is an edge between each pair of adjacent nodes
in the permutation.

Conclusion. HAM-CYCLE is in NP.

[Figure: an example graph (instance s) and a Hamiltonian cycle in it (certificate t).]
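
A minimal certifier sketch under an assumed encoding (edges as a set of frozensets, certificate as a list of nodes):

```python
# A certifier for HAM-CYCLE: assumed encoding is a list of nodes, a set
# of frozenset edges, and a certificate perm (a permutation of the nodes).
def ham_cycle_certifier(nodes, edges, perm):
    """Return yes (True) iff perm contains each node in V exactly once
    and every pair of adjacent nodes (wrapping around) is an edge."""
    if sorted(perm) != sorted(nodes):
        return False                      # some node missing or repeated
    n = len(perm)
    return all(frozenset((perm[i], perm[(i + 1) % n])) in edges
               for i in range(n))

nodes = [1, 2, 3, 4]
edges = {frozenset(e) for e in [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]}
print(ham_cycle_certifier(nodes, edges, [1, 2, 3, 4]))  # True
```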

P, NP, EXP

P. Decision problems for which there is a poly-time algorithm.


EXP. Decision problems for which there is an exponential-time algorithm.
NP. Decision problems for which there is a poly-time certifier.

Claim. P ⊆ NP.
Pf. Consider any problem X in P.
◼ By definition, there exists a poly-time algorithm A(s) that solves X.
◼ Certificate: t = ε (the empty string), certifier C(s, t) = A(s). ▪

Claim. NP ⊆ EXP.
Pf. Consider any problem X in NP.
◼ By definition, there exists a poly-time certifier C(s, t) for X.
◼ To solve input s, run C(s, t) on all strings t with |t| ≤ p(|s|).
◼ Return yes if C(s, t) returns yes for any of these. ▪
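
The proof is constructive; a sketch of the exhaustive search it describes, assuming a binary certificate alphabet and a given certifier C and polynomial bound p:

```python
from itertools import product

# Sketch of the NP ⊆ EXP argument: decide s by running the poly-time
# certifier on every candidate certificate t with |t| <= p(|s|).
# There are exponentially many candidates, hence exponential time.
def exp_decider(s, certifier, p, alphabet="01"):
    for length in range(p(len(s)) + 1):
        for chars in product(alphabet, repeat=length):
            if certifier(s, "".join(chars)):
                return True   # some certificate worked: s is a yes-instance
    return False              # no certificate of bounded length exists
```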

The Main Question: P Versus NP

Does P = NP? [Cook 1971, Edmonds, Levin, Yablonski, Gödel]


◼ Is the decision problem as easy as the certification problem?
◼ Clay $1 million prize.

[Figure: two possible worlds. If P ≠ NP: P ⊊ NP ⊆ EXP. If P = NP: P = NP ⊆ EXP.]

If yes: Efficient algorithms for 3-COLOR, TSP, FACTOR, SAT, …
(an efficient algorithm for FACTOR would break RSA cryptography, and potentially collapse the economy)

If no: No efficient algorithms possible for 3-COLOR, TSP, SAT, …

Consensus opinion on P = NP? Probably no.

NP-Completeness

NP-complete. A problem Y in NP with the property that for every
problem X in NP, X ≤P Y.

Theorem. Suppose Y is an NP-complete problem. Then Y is solvable in
poly-time iff P = NP.
Pf. ⇐ If P = NP, then Y can be solved in poly-time since Y is in NP.
Pf. ⇒ Suppose Y can be solved in poly-time.
◼ Let X be any problem in NP. Since X ≤P Y, we can solve X in
poly-time. This implies NP ⊆ P.
◼ We already know P ⊆ NP. Thus P = NP. ▪

Fundamental question. Do there exist "natural" NP-complete problems?

NP-Completeness

Observation. All problems below are NP-complete and polynomial-time
reduce to one another!

CIRCUIT-SAT   (every problem in NP reduces to CIRCUIT-SAT, by definition of NP-completeness)
  CIRCUIT-SAT → 3-SAT
    3-SAT → INDEPENDENT-SET → VERTEX-COVER → SET-COVER
    3-SAT → DIR-HAM-CYCLE → HAM-CYCLE → TSP
    3-SAT → GRAPH 3-COLOR → PLANAR 3-COLOR
    3-SAT → SUBSET-SUM → SCHEDULING
REVIEW

It’s been a fun ride…
General approach
to algorithm design and analysis

Can I do better? To answer this question we need
both rigor and intuition:

Detail-oriented  |  Big-picture
Precise          |  Intuitive
Rigorous         |  Hand-wavey

[Cartoon: the algorithm designer asks "Does it work? Is it fast?" We needed more details: what does that mean??]

Worst-case analysis, big-Oh notation:

T(n) = O(f(n))  ⟺  ∃ c, n₀ > 0 s.t. ∀ n ≥ n₀, 0 ≤ T(n) ≤ c ⋅ f(n)
Algorithm design paradigm:
divide and conquer
• Like MergeSort!
• Or Karatsuba's algorithm!
• Or SELECT!
• How do we analyze these? By careful analysis!
  ("Useful shortcut, the master method is." – Jedi master Yoda)

[Figure: a big problem splits into smaller problems, which split into yet smaller problems.]
Recap: Recursion Tree

[Figure: recursion tree for T(n) = 2T(n/2) + n.]

Level 0:        T(n)                        work: n
Level 1:        2 copies of T(n/2)          work: 2(n/2) = n
Level 2:        4 copies of T(n/4)          work: 4(n/4) = n
...
Level k:        2^k copies of T(n/2^k)      work: 2^k (n/2^k) = n
...
Level log₂n:    n/2 copies of T(2)          work: (n/2)(2) = n

Total: n log₂n
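Summing the per-level work makes the total explicit: each of the log₂n + 1 levels contributes n, so

\[
T(n) = \sum_{k=0}^{\log_2 n} 2^k \cdot \frac{n}{2^k} = \sum_{k=0}^{\log_2 n} n = n(\log_2 n + 1) = \Theta(n \log n).
\]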
Recap: Why Recurrences?
• The complexity of many interesting algorithms is
easily expressed as a recurrence – especially divide
and conquer algorithms
• The form of the algorithm often yields the form of
the recurrence
• The complexity of recursive algorithms is readily
expressed as a recurrence.

Recap: Why solve recurrences?
• To make it easier to compare the complexity of two
algorithms
• To make it easier to compare the complexity of the
algorithm to standard reference functions.

T(n) = a ⋅ T(n/b) + O(n^d).

Recap: Master Theorem

• Needlessly recursive integer multiplication
  T(n) = 4 T(n/2) + O(n)    a = 4, b = 2, d = 1: a > b^d
  T(n) = O(n²)

• Karatsuba integer multiplication
  T(n) = 3 T(n/2) + O(n)    a = 3, b = 2, d = 1: a > b^d
  T(n) = O(n^(log₂ 3)) ≈ O(n^1.6)

• MergeSort
  T(n) = 2 T(n/2) + O(n)    a = 2, b = 2, d = 1: a = b^d
  T(n) = O(n log n)

• That other one
  T(n) = T(n/2) + O(n)      a = 1, b = 2, d = 1: a < b^d
  T(n) = O(n)
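
A tiny helper that mechanizes the three cases (the function name and output format are mine):

```python
import math

# A small helper for the master theorem's three cases for
# T(n) = a*T(n/b) + O(n^d); returns the bound as a string.
def master_theorem(a, b, d):
    if a > b ** d:
        return f"O(n^{math.log(a, b):.3f})"  # leaves dominate
    if a == b ** d:
        return f"O(n^{d} log n)"             # every level contributes equally
    return f"O(n^{d})"                       # root dominates

print(master_theorem(4, 2, 1))  # O(n^2.000): needlessly recursive mult.
print(master_theorem(3, 2, 1))  # O(n^1.585): Karatsuba
print(master_theorem(2, 2, 1))  # O(n^1 log n): MergeSort
print(master_theorem(1, 2, 1))  # O(n^1): that other one
```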
While we’re on the topic of sorting
Why not use randomness?
• We analyzed QuickSort!
• We still assume a worst-case input, but we use randomness after
the input is chosen.
• Always correct, usually fast.
• This is a Las Vegas algorithm
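
A minimal sketch of randomized QuickSort, the Las Vegas algorithm in question (simplified to the out-of-place version):

```python
import random

# Randomized QuickSort, a Las Vegas algorithm: the output is always
# correctly sorted; only the running time depends on the random pivots
# (expected O(n log n), even on a worst-case input).
def quicksort(a):
    if len(a) <= 1:
        return a
    pivot = random.choice(a)
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```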
All this sorting is making me wonder…
Can we do better?
• Depends on who you ask:

• RadixSort takes time O(n) if the objects are, for example, small integers!
• Can't do better in a comparison-based model.
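
A sketch of the non-comparison idea via counting sort, the building block of RadixSort (assuming keys are small non-negative integers):

```python
# Counting sort, the building block of RadixSort: no comparisons,
# O(n + k) time for n keys drawn from {0, ..., k-1}.
def counting_sort(a, k):
    counts = [0] * k
    for x in a:
        counts[x] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)   # emit each key count times
    return out

print(counting_sort([3, 1, 4, 1, 5], k=6))  # [1, 1, 3, 4, 5]
```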


beyond sorted arrays/linked lists:
Binary Search Trees!
• Useful data structure!
• Especially the self-balancing ones!

[Figure: a balanced BST with root 5, children 3 and 7, and leaves 2, 4, 6, 8.]

Maintain balance by stipulating that black nodes are balanced, and
that there aren't too many red nodes (red-black trees).
It's just good sense!
Another way to store things
Hash tables!

[Figure: a hash function h maps keys into some buckets.]

• Choose h randomly from a universal hash family.
• It's better if the hash family is small!
  Then it takes less space to store h.
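
One textbook universal family is h(x) = ((a·x + b) mod p) mod m with p prime; a sketch (this particular family is an illustration, not necessarily the one from lecture):

```python
import random

# One textbook universal family: h(x) = ((a*x + b) mod p) mod m with
# p prime and random a, b. For any fixed keys x != y,
# Pr[h(x) == h(y)] <= 1/m over the choice of (a, b).
def make_universal_hash(m, p=2_147_483_647):   # p = 2^31 - 1, a prime
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

h = make_universal_hash(m=10)   # h maps integer keys into 10 buckets
print(h(42), h(1337))
```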
OMG GRAPHS
• BFS, DFS, and applications!
• SCCs, Topological sorting, …
A fundamental graph problem:
shortest paths
• E.g., transit planning, packet
routing, …
• Dijkstra!
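
A compact Dijkstra sketch with a binary heap (graph encoding assumed: dict mapping a node to a list of (neighbor, weight) pairs):

```python
import heapq

# Dijkstra's algorithm: single-source shortest paths with non-negative
# edge weights, using a binary heap. Runs in O((n + m) log n).
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry, skip it
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w          # found a shorter path to v
                heapq.heappush(heap, (dist[v], v))
    return dist

g = {"s": [("a", 1), ("b", 4)], "a": [("b", 2), ("t", 6)], "b": [("t", 3)]}
print(dijkstra(g, "s"))  # {'s': 0, 'a': 1, 'b': 3, 't': 6}
```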
Bellman-Ford and Floyd-Warshall
were examples of…
• Not programming in an action movie.
• Instead, an algorithmic paradigm!
• We saw many other examples, including Longest
Common Subsequence and Knapsack problems.

• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation
for the value of the optimal solution.
• Steps 3-5: Use dynamic programming:
fill in a table to find the answer!

[Figure: a big problem decomposes into overlapping subproblems, which decompose into smaller subproblems.]
• Dynamic programming is an algorithm design
paradigm.
• Basic idea:
• Identify optimal sub-structure
• Optimum to the big problem is built out of optima of small
sub-problems
• Take advantage of overlapping sub-problems
• Only solve each sub-problem once, then use it again and again
• Keep track of the solutions to sub-problems in a table
as you build to the final solution.
Dynamic Programming: Recap
• We saw examples of how to come up with dynamic
programming algorithms.
• Longest Common Subsequence
• Knapsack two ways
• (If time) maximal independent set in trees.
• There is a recipe for dynamic programming
algorithms.
Recipe for applying Dynamic Programming
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation for the length
of the longest common subsequence.
• Step 3: Use dynamic programming to find the
length of the longest common subsequence.
• Step 4: If needed, keep track of some additional
info so that the algorithm from Step 3 can find the
actual LCS.
• Step 5: If needed, code this up like a reasonable
person.
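
A sketch of the recipe applied to LCS, Steps 2-3: the recursive formulation filled into a table bottom-up:

```python
# LCS via dynamic programming: c[i][j] = length of the LCS of X[:i], Y[:j].
# Recurrence: c[i][j] = c[i-1][j-1] + 1           if X[i-1] == Y[j-1]
#             c[i][j] = max(c[i-1][j], c[i][j-1]) otherwise
def lcs_length(X, Y):
    c = [[0] * (len(Y) + 1) for _ in range(len(X) + 1)]
    for i in range(1, len(X) + 1):
        for j in range(1, len(Y) + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])
    return c[len(X)][len(Y)]

print(lcs_length("niether", "neither"))  # 6 (e.g. "nither")
```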
Knapsack problem

Goal. Pack knapsack so as to maximize total value of items taken.
・There are n items: item i provides value vi > 0 and weighs wi > 0.
・Value of a subset of items = sum of values of individual items.
・Knapsack has weight limit of W.

Ex. The subset { 1, 2, 5 } has value $35 (and weight 10).
Ex. The subset { 3, 4 } has value $40 (and weight 11).

Assumption. All values and weights are integral.

i    vi     wi
1    $1     1 kg
2    $6     2 kg
3    $18    5 kg
4    $22    6 kg
5    $28    7 kg

(weights and values can be arbitrary positive integers)

[Image: knapsack instance (weight limit W = 11), by Dake, Creative Commons Attribution-Share Alike 2.5]
Dynamic programming: two variables

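A sketch of the two-variable table: OPT(i, w) = the best value achievable using items 1..i under capacity w (the standard formulation; details may differ from the lecture's):

```python
# Knapsack with a two-variable table: OPT(i, w) = max value achievable
# using items 1..i under capacity w.
# OPT(i, w) = max(OPT(i-1, w), v_i + OPT(i-1, w - w_i) if w_i <= w).
def knapsack(values, weights, W):
    n = len(values)
    opt = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(W + 1):
            opt[i][w] = opt[i - 1][w]             # skip item i
            if weights[i - 1] <= w:               # or take item i
                opt[i][w] = max(opt[i][w],
                                values[i - 1] + opt[i - 1][w - weights[i - 1]])
    return opt[n][W]

# The slide's instance: optimum for W = 11 is { 3, 4 }, value $40.
print(knapsack([1, 6, 18, 22, 28], [1, 2, 5, 6, 7], 11))  # 40
```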
Sometimes we can take even better advantage of
optimal substructure…with
Greedy algorithms
• Make a series of choices, and commit!

• Intuitively we want to show that our greedy choices never
rule out success.
• Rigorously, we usually analyzed these by induction.
• Examples!
• Activity Selection
• Job Scheduling
• Huffman Coding
• Minimum Spanning Trees
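
A sketch of the first example, Activity Selection: greedily commit to the compatible activity that finishes earliest (the example intervals are mine):

```python
# Activity selection: greedily commit to the compatible activity that
# finishes earliest; an exchange argument shows this choice never rules
# out an optimal solution.
def activity_selection(intervals):
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:      # compatible with all chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(activity_selection(acts))  # [(1, 4), (5, 7), (8, 11)]
```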
Cuts and flows
• Global minimum cut:
  • Karger's algorithm!
  • Karger's algorithm is a Monte-Carlo algorithm:
    it is always fast but might be wrong.
• Minimum s-t cut:
  • is the same as maximum s-t flow!
  • Ford-Fulkerson can find them!
  • useful for routing, and also assignment problems

[Figure: a flow network with source s, sink t, and labeled edge capacities.]
What have we learned?
• Max s-t flow is equal to min s-t cut!
• The USSR and the USA were trying to
solve the same problem…
• The Ford-Fulkerson algorithm can
find the min-cut/max-flow.
• Repeatedly improve your flow along
an augmenting path.
• How long does this take???
Example of Ford-Fulkerson

[Figure series: the same flow network is shown step by step, pairing the current flow with its residual graph. Each step augments along a path found in the residual graph. Twice, the chosen path goes back along a backward (residual) edge added earlier, removing flow from the same forward edge. In the final step there is no path from s to t in the residual graph, and the cut proving optimality is shown: max flow and min cut are both 11.]
Doing Ford-Fulkerson with BFS is called
the Edmonds-Karp algorithm.
Theorem
• If you use BFS, the Ford-Fulkerson algorithm runs in
time O(nm²). Doesn't have anything to do with the edge weights!

• We will skip the proof in class.


• Basic idea:
• The number of times you remove an edge from the residual
graph is O(n).
• This is the hard part
• There are at most m edges.
• Each time we remove an edge we run BFS, which takes time
O(n+m).
• Actually, O(m), since we don’t need to explore the whole graph, just
the stuff reachable from s.
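
A compact sketch of Edmonds-Karp on an adjacency matrix of capacities (a teaching sketch, not an optimized implementation; the small test network is mine):

```python
from collections import deque

# Edmonds-Karp: Ford-Fulkerson where each augmenting path is found by
# BFS in the residual graph. cap is an n x n capacity matrix. O(n m^2).
def max_flow(cap, s, t):
    n = len(cap)
    residual = [row[:] for row in cap]       # residual capacities
    flow = 0
    while True:
        parent = [None] * n                  # BFS for an augmenting path
        parent[s] = s
        queue = deque([s])
        while queue and parent[t] is None:
            u = queue.popleft()
            for v in range(n):
                if parent[v] is None and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[t] is None:                # no s-t path: flow is maximum
            return flow
        bottleneck, v = float("inf"), t      # find the path's bottleneck
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t                                # augment along the path
        while v != s:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck   # add backward edge
            v = parent[v]
        flow += bottleneck

cap = [[0, 10, 10, 0], [0, 0, 1, 10], [0, 0, 0, 10], [0, 0, 0, 0]]
print(max_flow(cap, 0, 3))  # 20
```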
Solution via max flow

[Figure: an assignment problem as a flow network, with students on one side and sorbet flavors on the other. Annotations: "No more than 3 scoops of sorbet can be assigned." "This student can have flow at most 10 going in, and so at most 10 going out, so at most 10 scoops assigned."]

As before, flows correspond to assignments, and
max flows correspond to max assignments.
Recap
• Today we talked about s-t cuts and s-t flows.
• The Min-Cut Max-Flow Theorem says that minimizing
the cost of cuts is the same as maximizing the value of
flows.
• The Ford-Fulkerson algorithm does this!
• Find an augmenting path
• Increase the flow along that path
• Repeat until you can’t find any more paths and then you’re
done!
• An important algorithmic primitive!
• E.g., assignment problems.
Complexity definitions
• Big-Oh
• Big-Theta
• Big - Omega
Big Oh (O)

f(n) = O(g(n)) iff there exist positive constants c and n₀ such that
f(n) ≤ c·g(n) for all n ≥ n₀.

O-notation gives an upper bound on a function.
Omega Notation

Big-Oh provides an asymptotic upper bound on a function.
Omega provides an asymptotic lower bound on a function.

Theta Notation

Theta notation is used when the function f can be bounded both
from above and below by the same function g.
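
The three definitions side by side, in the standard formulation:

\[
\begin{aligned}
f(n) = O(g(n)) &\iff \exists\, c, n_0 > 0 \text{ s.t. } \forall n \ge n_0,\ f(n) \le c \cdot g(n)\\
f(n) = \Omega(g(n)) &\iff \exists\, c, n_0 > 0 \text{ s.t. } \forall n \ge n_0,\ f(n) \ge c \cdot g(n)\\
f(n) = \Theta(g(n)) &\iff f(n) = O(g(n)) \text{ and } f(n) = \Omega(g(n))
\end{aligned}
\]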
How bad is exponential complexity
• Fibonacci example: the naive recursive fib takes exponential time, so it
cannot even compute fib(50) in any reasonable amount of time.
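
A sketch contrasting the exponential recursion with the linear memoized (dynamic programming) version:

```python
from functools import lru_cache

# Naive recursion: T(n) = T(n-1) + T(n-2) + O(1), which grows
# exponentially; fib_slow(50) makes on the order of ~2^35 calls.
def fib_slow(n):
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

# Memoization (dynamic programming) solves each subproblem once: O(n).
@lru_cache(maxsize=None)
def fib_fast(n):
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(50))  # 12586269025, instantly
```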
What have we learned?
We’ve filled out a toolbox
• Tons of examples give us intuition about what
algorithmic techniques might work when.
• The technical skills make sure our intuition
works out.
But there’s lots more out there
THANK YOU ALL!!!
