Lecture13_IO_BLG336E
Analysis of Algorithms II
Lecture 13:
NP-Completeness and Review
Basic reduction strategies
Dick Karp (1972), 1985 Turing Award
[Figure: Karp’s tree of reductions, including INDEPENDENT-SET, DIR-HAM-CYCLE, GRAPH-3-COLOR, and SUBSET-SUM.]
Definition of P
MULTIPLE: Is x a multiple of y? Algorithm: grade-school division. Yes instance: (51, 17). No instance: (51, 16).
RELPRIME: Are x and y relatively prime? Algorithm: Euclid (300 BCE). Yes instance: (34, 39). No instance: (34, 51).
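Both decision problems can be sketched in Python (a hedged illustration; the instances are the slide’s, and `math.gcd` implements Euclid’s algorithm):

```python
from math import gcd  # Euclid's algorithm, poly-time in the bit length


def is_multiple(x: int, y: int) -> bool:
    """MULTIPLE: is x a multiple of y? Grade-school division suffices."""
    return x % y == 0


def relprime(x: int, y: int) -> bool:
    """RELPRIME: are x and y relatively prime? Euclid (300 BCE)."""
    return gcd(x, y) == 1


# Instances from the slide:
# MULTIPLE: (51, 17) -> yes, (51, 16) -> no
# RELPRIME: (34, 39) -> yes, (34, 51) -> no
```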
NP
Def. Algorithm C(s, t) is a certifier for problem X if for every string s, s ∈ X iff there exists a string t such that C(s, t) = yes.
t is called a "certificate" or "witness".
Certifiers and Certificates: 3-Satisfiability
Certifier. Check that each clause in Φ has at least one true literal.
Ex.
Instance s: Φ = (x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x2 ∨ x3) ∧ (x1 ∨ x2 ∨ x4) ∧ (x1 ∨ x3 ∨ x4)
Certificate t: x1 = 1, x2 = 1, x3 = 0, x4 = 1
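A 3-SAT certifier can be sketched as follows. The clause encoding and the example formula `phi` are illustrative (the slide’s negation pattern did not survive extraction); the certificate is the slide’s assignment:

```python
def certify_3sat(clauses, assignment):
    """Certifier C(s, t): s is the formula (a list of clauses), t is the
    assignment. Accept iff every clause has at least one true literal."""
    return all(
        any(assignment[var] == positive for var, positive in clause)
        for clause in clauses
    )


# Hypothetical formula over x1..x4; each literal is (variable, is_positive).
phi = [[(1, True), (2, True), (3, True)],
       [(1, False), (2, True), (4, True)]]
# Certificate t from the slide: x1 = 1, x2 = 1, x3 = 0, x4 = 1.
t = {1: True, 2: True, 3: False, 4: True}
```

The certifier runs in time linear in the formula size, which is what makes 3-SAT a member of NP.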
Certifiers and Certificates: Hamiltonian Cycle
[Figure: instance s is a graph G, and certificate t is a Hamiltonian cycle in G.]
P, NP, EXP
Claim. P ⊆ NP.
Pf. Consider any problem X in P.
◼ By definition, there exists a poly-time algorithm A(s) that solves X.
◼ Certificate: t = ε (the empty string), certifier C(s, t) = A(s). ▪
Claim. NP ⊆ EXP.
Pf. Consider any problem X in NP.
◼ By definition, there exists a poly-time certifier C(s, t) for X.
◼ To solve input s, run C(s, t) on all strings t with |t| ≤ p(|s|).
◼ Return yes if C(s, t) returns yes for any of these. ▪
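The NP ⊆ EXP argument translates almost directly into code: enumerate every binary certificate up to the polynomial length bound and run the certifier on each. The toy certifier below (`contains_11_certifier`, deciding "does s contain two consecutive 1s?") is an invented example, not from the slides:

```python
from itertools import product


def exp_solve(s, certifier, p):
    """Decide s by brute force: try every binary string t with
    |t| <= p(|s|); accept iff the poly-time certifier accepts some t.
    Exponentially many certificates, hence the EXP bound."""
    bound = p(len(s))
    for length in range(bound + 1):
        for bits in product('01', repeat=length):
            if certifier(s, ''.join(bits)):
                return True
    return False


def contains_11_certifier(s, t):
    """Toy certifier: t is a witness, a run of >= 2 ones appearing in s."""
    return len(t) >= 2 and set(t) == {'1'} and t in s
```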
The Main Question: P Versus NP
[Diagram: if P ≠ NP, then P ⊊ NP ⊆ EXP; if P = NP, then P = NP ⊆ EXP.]
NP-Completeness
Def. A problem Y is NP-complete if Y is in NP and, for every problem X in NP, X ≤p Y.
NP-Completeness
Recipe. To establish that a problem Y is NP-complete: show Y is in NP, then reduce a known NP-complete problem to Y (this suffices by definition of NP-completeness).
Ex. CIRCUIT-SAT ≤p 3-SAT, so 3-SAT is NP-complete.
REVIEW
It’s been a fun ride…
General approach to algorithm design and analysis:
• Detail-oriented ↔ Big-picture
• Precise ↔ Intuitive
• Rigorous ↔ Hand-wavy
Algorithm designer: "Here is an input! Does it work? Is it fast? What does that mean??" We needed more details:

T(n) = O(f(n)) ⟺ ∃ c, n0 > 0 s.t. ∀ n ≥ n0, 0 ≤ T(n) ≤ c · f(n)
Algorithm design paradigm:
divide and conquer
• Like MergeSort!
• Or Karatsuba’s algorithm!
• Or SELECT!
• How do we analyze these? By careful analysis! Useful shortcut, the master method is.
[Figure: a big problem split into smaller problems; the recursion tree has log2 n levels, each doing O(n) total work, for n log2 n overall.]
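As a concrete divide-and-conquer example, here is a MergeSort sketch whose recurrence is T(n) = 2 T(n/2) + O(n):

```python
def merge_sort(a):
    """T(n) = 2 T(n/2) + O(n): split, recurse on halves, merge.
    The master method gives O(n log n)."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    # Merge the two sorted halves in O(n).
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    return out + left[i:] + right[j:]
```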
Recap-Why Recurrences?
• The complexity of many interesting algorithms is easily expressed as a recurrence, especially divide-and-conquer algorithms.
• The form of the algorithm often yields the form of the recurrence.
• The complexity of recursive algorithms is readily expressed as a recurrence.
Recap-Why solve recurrences?
• To make it easier to compare the complexity of two
algorithms
• To make it easier to compare the complexity of the
algorithm to standard reference functions.
Recap-Master Theorem

T(n) = a · T(n/b) + O(n^d)

• If a > b^d: T(n) = O(n^(log_b a))
• If a = b^d: T(n) = O(n^d log n)
• If a < b^d: T(n) = O(n^d)
✓ Needlessly recursive integer mult.: T(n) = 4 T(n/2) + O(n); a = 4, b = 2, d = 1, so a > b^d and T(n) = O(n^2).
✓ Karatsuba integer multiplication: T(n) = 3 T(n/2) + O(n); a = 3, b = 2, d = 1, so a > b^d and T(n) = O(n^(log_2 3)) ≈ O(n^1.6).
✓ MergeSort: T(n) = 2 T(n/2) + O(n); a = 2, b = 2, d = 1, so a = b^d and T(n) = O(n log n).
✓ That other one: T(n) = T(n/2) + O(n); a = 1, b = 2, d = 1, so a < b^d and T(n) = O(n).
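The three-way case analysis can be packaged into a small helper (a sketch; it returns the asymptotic bound as a string):

```python
import math


def master(a, b, d):
    """Master theorem for T(n) = a * T(n/b) + O(n^d)."""
    if a > b ** d:                       # leaves dominate
        return f"O(n^{math.log(a, b):.3g})"
    if a == b ** d:                      # every level costs the same
        return f"O(n^{d} log n)"
    return f"O(n^{d})"                   # root dominates
```

For the slide’s examples: `master(4, 2, 1)` gives the O(n^2) bound, `master(2, 2, 1)` the MergeSort bound, and `master(1, 2, 1)` the O(n) bound.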
While we’re on the topic of sorting
Why not use randomness?
• We analyzed QuickSort!
• There is still a worst-case input, but the randomness is used after the input is chosen.
• Always correct, usually fast.
• This is a Las Vegas algorithm
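A sketch of randomized (Las Vegas) QuickSort, with the random pivot chosen after the input is fixed:

```python
import random


def quicksort(a):
    """Las Vegas QuickSort: always correct, fast with high probability.
    No single input is bad on average, because the pivot choices are
    made after the input is fixed."""
    if len(a) <= 1:
        return a
    pivot = random.choice(a)             # the random choice
    less = [x for x in a if x < pivot]
    equal = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```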
All this sorting is making me wonder…
Can we do better?
• Depends on who you ask: if elements may only be compared (≤), sorting needs Ω(n log n) comparisons; in other models we can do better.
beyond sorted arrays/linked lists:
Binary Search Trees!
• Useful data structure!
• Especially the self-balancing ones!
[Figure: a BST with root 5, children 3 and 7, and leaves 2, 4, 6, 8.]
Red-black trees maintain balance by stipulating that black nodes are balanced, and that there aren’t too many red nodes. It’s just good sense!
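A minimal (unbalanced) BST sketch; self-balancing variants such as red-black trees add the color bookkeeping described above on top of this:

```python
class Node:
    """Plain BST node: smaller keys go left, larger keys go right."""
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None


def insert(root, key):
    """Insert key and return the (possibly new) subtree root."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root


def search(root, key):
    """Walk down the tree; O(height) comparisons."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None
```

Inserting 5, 3, 7, 2, 4, 6, 8 in that order reproduces the tree in the figure.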
Another way to store things
Hash tables!
[Figure: a hash function h maps keys into buckets.]
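A chaining hash table can be sketched in a few lines (illustrative; Python’s built-in `hash` stands in for the hash function h):

```python
class HashTable:
    """Chaining hash table sketch: h maps a key to one of m buckets,
    and each bucket holds a list of (key, value) pairs."""

    def __init__(self, m=8):
        self.m = m
        self.buckets = [[] for _ in range(m)]

    def _h(self, key):
        return hash(key) % self.m        # the hash function h

    def put(self, key, value):
        b = self.buckets[self._h(key)]
        for i, (k, _) in enumerate(b):
            if k == key:
                b[i] = (key, value)      # overwrite an existing key
                return
        b.append((key, value))

    def get(self, key):
        for k, v in self.buckets[self._h(key)]:
            if k == key:
                return v
        return None
```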
Dynamic programming: not a single algorithm, but an algorithmic paradigm!
• Step 1: Identify optimal substructure.
• Step 2: Find a recursive formulation in terms of subproblems.
[Figure: a big problem broken into overlapping subproblems.]
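The two-step recipe can be illustrated with a classic memoization example (Fibonacci is an illustrative choice, not from the slides):

```python
from functools import lru_cache


@lru_cache(maxsize=None)
def fib(n):
    """Step 1: optimal substructure, fib(n) is built from fib(n-1) and
    fib(n-2). Step 2: recursive formulation, memoized so each
    subproblem is solved only once (O(n) instead of exponential)."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```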
Sometimes we can take even better advantage of
optimal substructure…with
Greedy algorithms
• Make a series of choices, and commit!
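One classic instance of commit-as-you-go is interval scheduling (an illustrative example, not from the slide): sort by finish time, then greedily keep every interval compatible with the choices so far.

```python
def max_nonoverlapping(intervals):
    """Greedy interval scheduling: sort by finish time, then commit to
    each interval that starts after the last chosen one ends."""
    chosen, last_end = [], float('-inf')
    for start, end in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_end:
            chosen.append((start, end))  # commit, never reconsider
            last_end = end
    return chosen
```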
Example of Ford-Fulkerson
[Figure: a flow network from s to t with edge capacities.]
Example of Ford-Fulkerson
[Figure: the same network after augmenting.] Max flow and min cut are both 11.
[Figure: the residual graph.] There’s no path from s to t, and here’s the cut to prove it.
Doing Ford-Fulkerson with BFS is called the Edmonds-Karp algorithm.
Theorem
• If you use BFS, the Ford-Fulkerson algorithm runs in time O(nm²). Doesn’t have anything to do with the edge weights!
[Figure: an assignment problem modeled as a flow network.]
This student can have flow at most 10 going in, and so at most 10 going out, so at most 10 scoops assigned.
As before, flows correspond to assignments, and
max flows correspond to max assignments.
Recap
• Today we talked about s-t cuts and s-t flows.
• The Max-Flow Min-Cut Theorem says that minimizing the cost of cuts is the same as maximizing the value of flows.
• The Ford-Fulkerson algorithm does this!
• Find an augmenting path
• Increase the flow along that path
• Repeat until you can’t find any more paths and then you’re
done!
• An important algorithmic primitive!
• e.g., assignment problems.
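The recap’s loop (find an augmenting path with BFS, push flow along it, repeat) can be sketched as a minimal Edmonds-Karp; the nested-dict network below is illustrative:

```python
from collections import deque


def max_flow(cap, s, t):
    """Ford-Fulkerson with BFS (Edmonds-Karp), O(n m^2).
    cap[u][v] is the residual capacity of edge (u, v)."""
    flow = 0
    while True:
        # Find an augmenting path with BFS.
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap.get(u, {}).items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                  # no augmenting path: done
        # Bottleneck capacity along the path.
        path, v = [], t
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        b = min(cap[u][v] for u, v in path)
        # Increase the flow along the path; update residual capacities.
        for u, v in path:
            cap[u][v] -= b
            cap.setdefault(v, {})[u] = cap.get(v, {}).get(u, 0) + b
        flow += b
```

On the toy network `{'s': {'a': 3, 'b': 2}, 'a': {'t': 2}, 'b': {'t': 3}}` the max flow from s to t is 4.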
Complexity definitions
• Big-Oh
• Big-Theta
• Big-Omega
Big Oh (O)
f(n) = O(g(n)) iff there exist positive constants c and n0 such that
f(n) ≤ c·g(n) for all n ≥ n0
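The definition can be checked concretely for a hypothetical running time, say T(n) = 3n + 5 with g(n) = n, using witnesses c = 4 and n0 = 5:

```python
def T(n):
    """A hypothetical running time, T(n) = 3n + 5."""
    return 3 * n + 5


def g(n):
    return n


# Witnesses for T(n) = O(g(n)): c = 4 and n0 = 5, since
# 3n + 5 <= 4n exactly when n >= 5. Spot-check over a finite range:
c, n0 = 4, 5
assert all(0 <= T(n) <= c * g(n) for n in range(n0, 10_000))
```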