daa_unit-ii
Unit-II
Greedy and Dynamic Programming
Greedy Method
Greedy Method
Greedy Principle: Greedy algorithms are typically used to solve optimization problems. Most of these problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies these constraints is called a feasible solution. We are required to find a feasible solution that either minimizes or maximizes a given objective function. In the most common situation we have:
Greedy Method
Subset Paradigm: Devise an algorithm that works in stages.
• Consider the inputs in an order based on some selection procedure.
• Use some optimization measure for the selection procedure.
– At every stage, examine an input to see whether it leads to an optimal solution.
– If the inclusion of the input into the partial solution yields an infeasible solution, discard the input; otherwise, add it to the partial solution.
Control Abstraction for Greedy
function greedy(C: set): set;
begin
  S := Ø;
  while (not solution(S) and C ≠ Ø) do
  begin
    x := select(C);
    C := C - {x};
    if feasible(S ∪ {x})
      then S := S ∪ {x};
  end;
  if solution(S) then return(S) else return(Ø);
end;
Control Abstraction Explanation
• C: the set of candidate objects.
• S: the set of objects that have already been chosen.
• feasible(): a function that checks whether a set is a feasible solution.
• solution(): a function that checks whether a set provides a complete solution.
• select(): a function for choosing the most promising next object.
• An objective function that we are trying to optimize.
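The control abstraction above can be sketched directly in Python. The select/feasible/solution callbacks mirror the pseudocode; the coin-change instance used to exercise it is an illustrative assumption, not part of the slides.

```python
# Generic greedy scheme, assuming the caller supplies select(),
# feasible() and solution() as in the control abstraction.

def greedy(candidates, select, feasible, solution):
    C = set(candidates)
    S = set()
    while not solution(S) and C:
        x = select(C)          # choose the most promising object
        C.remove(x)            # each object is considered only once
        if feasible(S | {x}):  # keep x only if the partial solution stays feasible
            S = S | {x}
    return S if solution(S) else set()

# Hypothetical example: pay a target amount 8 from coins (1,1,1,2,2,5,5),
# always taking the largest coin that still fits.
target = 8
def solution(S):
    return sum(v for _, v in S) == target
def feasible(S):
    return sum(v for _, v in S) <= target
def select(C):
    return max(C, key=lambda c: c[1])

coins = {(i, v) for i, v in enumerate([1, 1, 1, 2, 2, 5, 5])}
S = greedy(coins, select, feasible, solution)
print(sum(v for _, v in S))  # 8
```

Note how an infeasible candidate (a coin that overshoots the target) is simply discarded, exactly as in the pseudocode.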
Greedy Vs Divide and Conquer

Greedy                               Divide and Conquer
Used when we need to find an         Not aimed at an optimal solution;
optimal solution                     used when the problem has only
                                     one solution
Does not work in parallel            Works in parallel by dividing the
                                     big problem into smaller
                                     subproblems and running them in
                                     parallel
Example: Knapsack,                   Example: Sorting, Searching
Activity selection
Knapsack Problem using Greedy Method
The Knapsack Problems
The knapsack problem is an optimization problem: given a set of n items, each with a weight w and profit p, determine the number of each item to include in a knapsack such that the total weight is less than or equal to a given knapsack capacity M and the total profit is maximum.
The Knapsack Problems
The Integer Knapsack Problem:

Maximize    Σ (i = 1 to n) pi xi
Subject to  Σ (i = 1 to n) wi xi ≤ M
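For the *fractional* variant of this problem (items may be taken partially), the greedy method by profit/weight ratio is provably optimal; a minimal sketch follows. The 3-item instance (p = 25, 24, 15; w = 18, 15, 10; M = 20) is an assumed illustration, not from the slides above.

```python
# Greedy fractional knapsack: take items in decreasing p/w ratio,
# taking a fraction of the last item that fits.

def fractional_knapsack(profits, weights, M):
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    total, remaining = 0.0, M
    x = [0.0] * len(profits)
    for i in order:
        if remaining == 0:
            break
        take = min(weights[i], remaining)   # whole item if it fits, else a fraction
        x[i] = take / weights[i]
        total += profits[i] * x[i]
        remaining -= take
    return total, x

value, x = fractional_knapsack([25, 24, 15], [18, 15, 10], 20)
print(value, x)  # 31.5 [0.0, 1.0, 0.5]
```

For the 0/1 (integer) version, greedy by ratio is only a heuristic; the exact dynamic-programming treatment appears later in this unit.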
JOB SEQUENCING WITH DEADLINES
The problem is stated as below.
• There are n jobs to be processed on a machine.
• Each job i has a deadline di ≥ 0 and a profit pi ≥ 0.
• Profit pi is earned iff the job is completed by its deadline.
• Only one machine is available for processing jobs.
• A job is completed if it is processed on the machine for one unit of time.
• Only one job is processed at a time on the machine.
JOB SEQUENCING WITH DEADLINES (Contd..)
• A feasible solution is a subset of jobs J such that each job in J is completed by its deadline.
• An optimal solution is a feasible solution that maximizes the total profit.
GREEDY ALGORITHM TO OBTAIN AN OPTIMAL SOLUTION for Job Scheduling
Consider the jobs in the decreasing order of profits, subject to the constraint that the resulting job sequence J is a feasible solution.
In the example considered before, the decreasing profit vector is
(p1, p4, p3, p2) = (100, 27, 15, 10) with deadlines (d1, d4, d3, d2) = (2, 1, 2, 1).
GREEDY ALGORITHM TO OBTAIN AN OPTIMAL SOLUTION (Contd..)
J = {1} is a feasible one
J = {1, 4} is a feasible one with processing sequence (4, 1)
J = {1, 3, 4} is not feasible
J = {1, 2, 4} is not feasible
J = {1, 4} is optimal
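The greedy steps above can be sketched in Python. The feasibility test (schedule the chosen jobs in deadline order and check that the t-th job has deadline ≥ t) is the standard check for this algorithm; the dict-based job representation is an assumption for readability.

```python
# Greedy job sequencing: consider jobs in decreasing profit order,
# keep a job only if the set J remains feasible.

def job_sequencing(profit, deadline):
    def feasible(J):
        # In deadline order, the t-th scheduled job (t = 1, 2, ...)
        # must have deadline >= t on the single machine.
        ordered = sorted(J, key=lambda j: deadline[j])
        return all(deadline[j] >= t for t, j in enumerate(ordered, 1))

    J = []
    for j in sorted(profit, key=profit.get, reverse=True):
        if feasible(J + [j]):
            J.append(j)
    return set(J), sum(profit[j] for j in J)

# The slides' example: (p1..p4) = (100, 10, 15, 27), (d1..d4) = (2, 1, 2, 1).
profit = {1: 100, 2: 10, 3: 15, 4: 27}
deadline = {1: 2, 2: 1, 3: 2, 4: 1}
J, total = job_sequencing(profit, deadline)
print(J, total)  # {1, 4} 127
```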
Activity Selection using Greedy Method
The activity selection problem
Problem: n activities, S = {1, 2, …, n}; each activity i has a start time si and a finish time fi, with si ≤ fi.
Activity i occupies the time interval [si, fi].
Activities i and j are compatible if si ≥ fj or sj ≥ fi.
The problem is to select a maximum-size set of mutually compatible activities.
Example:
i   1  2  3  4  5  6  7  8  9  10  11
si  1  3  0  5  3  5  6  8  8  2   12
fi  4  5  6  7  8  9  10 11 12 13  14
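On the example table above (the activities are already sorted by finish time), the greedy rule "pick the earliest finish, then the next compatible activity" can be sketched as:

```python
# Greedy activity selection: sort by finish time, then repeatedly take
# the next activity whose start is >= the finish of the last one taken.

def activity_selection(s, f):
    order = sorted(range(len(s)), key=lambda i: f[i])
    chosen, last_finish = [], float("-inf")
    for i in order:
        if s[i] >= last_finish:       # compatible with everything chosen so far
            chosen.append(i + 1)      # report 1-based activity numbers
            last_finish = f[i]
    return chosen

s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(activity_selection(s, f))  # [1, 4, 8, 11]
```

So a maximum-size compatible set for this instance has four activities.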
Dynamic Programming
Principle: Dynamic Programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and storing the results of the subproblems to avoid computing the same results again.
Following are the two main properties of a
problem that suggest that the given problem can be
solved using Dynamic programming.
1) Overlapping Subproblems
2) Optimal Substructure
Dynamic Programming
1) Overlapping Subproblems: Dynamic Programming is mainly used when solutions of the same subproblems are needed again and again. In dynamic programming, computed solutions to subproblems are stored in a table so that they don't have to be recomputed.
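The classic illustration of overlapping subproblems is Fibonacci: the naive recursion recomputes the same values exponentially often, while storing each result in a table makes every subproblem cost O(1) after its first computation. A minimal sketch:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each distinct subproblem is computed once; later calls
    # read the stored solution from the cache (the "table").
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # 832040
```

Without the cache, fib(30) makes over a million recursive calls; with it, only 31 distinct subproblems are solved.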
The principle of optimality
Dynamic programming is a technique for finding an optimal solution. It applies when the principle of optimality holds: in an optimal sequence of decisions, whatever the first decision is, the remaining decisions must be optimal with respect to the state resulting from that first decision.
Differences between Greedy, D&C and Dynamic
Greedy. Build up a solution incrementally, myopically optimizing some local criterion.
Divide-and-conquer. Break up a problem into two sub-problems, solve each sub-problem independently, and combine solutions to the sub-problems to form a solution to the original problem.
Dynamic programming. Break up a problem into a series of overlapping sub-problems, and build up solutions to larger and larger sub-problems.
Divide and Conquer Vs Dynamic Programming
Divide-and-Conquer: a top-down approach. Many smaller instances are computed more than once.
Dynamic programming: a bottom-up approach. Solutions for smaller instances are stored in a table for later use.
0/1 Knapsack using Dynamic Programming
0 - 1 Knapsack Problem
Knapsack problem.
Given n objects, item i weighs wi > 0 and has profit pi > 0. The knapsack has capacity M.
xi = 1 if object i is placed in the knapsack, otherwise xi = 0.

Maximize    Σ (i = 1 to n) pi xi
Subject to  Σ (i = 1 to n) wi xi ≤ M
0 - 1 Knapsack Problem
Si is a set of pairs (P, W), i.e. (profit, weight), obtainable from the first i objects.
Si1 = {(P + pi+1, W + wi+1) | (P, W) ∈ Si}
Si+1 is obtained by merging Si and Si1.
0/1 Knapsack - Example
Item  Profit  Weight
1     1       2
2     2       3
3     5       4
• No of objects n = 3
• Capacity of Knapsack M = 6
0/1 Knapsack – Example Solution
S0 = {(0, 0)}
S01 will be obtained by adding the profit and weight of the first object to S0:
S01 = {(1, 2)}
S1 will be obtained by merging S0 and S01:
S1 = {(0, 0), (1, 2)}
S11 will be obtained by adding the profit and weight of the second object to S1:
S11 = {(2, 3), (3, 5)}
S2 will be obtained by merging S1 and S11:
S2 = {(0, 0), (1, 2), (2, 3), (3, 5)}
S21 will be obtained by adding the profit and weight of the third object to S2:
S21 = {(5, 4), (6, 6), (7, 7), (8, 9)}
S3 will be obtained by merging S2 and S21:
S3 = {(0, 0), (1, 2), (2, 3), (3, 5), (5, 4), (6, 6), (7, 7), (8, 9)}
0/1 Knapsack – Example Solution
Pairs (7, 7) and (8, 9) will be deleted as they exceed the capacity of the knapsack (M = 6).
Pair (3, 5) will be deleted by the dominance rule, since (5, 4) gives more profit at less weight.
So S3 = {(0, 0), (1, 2), (2, 3), (5, 4), (6, 6)}
The last pair is (6, 6), which is in S3 but not in S2, so x3 = 1.
Now subtract the profit and weight of the third object from (6, 6): (6 - 5, 6 - 4) = (1, 2).
(1, 2) is already generated by S1, so x2 = 0.
(1, 2) is in S1 but not in S0, so x1 = 1.
Now subtract the profit and weight of the first object from (1, 2): (1 - 1, 2 - 2) = (0, 0).
Answer: total profit is 6; objects 1 and 3 are selected to put into the knapsack.
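The set-of-pairs construction worked above can be sketched as follows. One small deviation, for brevity: pairs that exceed the capacity M are filtered out when generated rather than deleted after merging, which yields the same final sets.

```python
# 0/1 knapsack via sets of non-dominated (profit, weight) pairs.

def knapsack_pairs(profits, weights, M):
    S = [{(0, 0)}]
    for p, w in zip(profits, weights):
        # Si1: add the next object's (p, w) to every pair that still fits.
        S1 = {(P + p, W + w) for (P, W) in S[-1] if W + w <= M}
        merged = S[-1] | S1
        # Dominance rule: drop (P, W) if some other pair has
        # profit >= P with weight <= W.
        pruned = {(P, W) for (P, W) in merged
                  if not any(P2 >= P and W2 <= W and (P2, W2) != (P, W)
                             for (P2, W2) in merged)}
        S.append(pruned)
    return max(P for (P, W) in S[-1])

# The slides' instance: profits (1, 2, 5), weights (2, 3, 4), M = 6.
print(knapsack_pairs([1, 2, 5], [2, 3, 4], 6))  # 6
```

The traceback step (testing membership of each pair in the previous Si to recover the xi values) could be added in the same style.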
Binomial Coefficient using Dynamic Programming
The Binomial Coefficient

C(n, k) = n! / (k! (n - k)!)          for 0 ≤ k ≤ n

C(n, k) = C(n-1, k-1) + C(n-1, k)     for 0 < k < n
C(n, k) = 1                           for k = 0 or k = n
The recursive algorithm
binomialCoef(n, k)
1. if k = 0 or k = n
2.   then return 1
3.   else return (binomialCoef(n-1, k-1) + binomialCoef(n-1, k))
Dynamic Solution for Binomial Coefficient
• Use a matrix B of n+1 rows and k+1 columns, where B[n, k] = C(n, k).
• Establish a recursive property:
  B[i, j] = B[i-1, j-1] + B[i-1, j]   if 0 < j < i
  B[i, j] = 1                         if j = 0 or j = i
• Solve all "smaller instances of the problem" in a bottom-up fashion by computing the rows in B in sequence, starting with the first row.
The B Matrix
     0  1  2  3  4  ...  j  ...  k
0    1
1    1  1
2    1  2  1
3    1  3  3  1
4    1  4  6  4  1
...
i    B[i, j] = B[i-1, j-1] + B[i-1, j]
...
n
Example: Compute B[4, 2] = C(4, 2)
• Row 0: B[0,0] = 1
• Row 1: B[1,0] = 1, B[1,1] = 1
• Row 2: B[2,0] = 1, B[2,1] = B[1,0] + B[1,1] = 2, B[2,2] = 1
• Row 3: B[3,0] = 1, B[3,1] = B[2,0] + B[2,1] = 3, B[3,2] = B[2,1] + B[2,2] = 3
• Row 4: B[4,0] = 1, B[4,1] = B[3,0] + B[3,1] = 4, B[4,2] = B[3,1] + B[3,2] = 6
Algorithm For Binomial coefficient
• Algo bin(n, k)
1. for i = 0 to n                        // every row
2.   for j = 0 to minimum(i, k)
3.     if j = 0 or j = i                 // column 0 or diagonal
4.       then B[i, j] = 1
5.       else B[i, j] = B[i-1, j-1] + B[i-1, j]
6. return B[n, k]
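The bottom-up algorithm above translates almost line for line into Python:

```python
# Bottom-up binomial coefficient: fill Pascal's triangle row by row,
# keeping only columns 0..k of each row.

def binomial(n, k):
    B = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:                     # column 0 or the diagonal
                B[i][j] = 1
            else:
                B[i][j] = B[i - 1][j - 1] + B[i - 1][j]
    return B[n][k]

print(binomial(4, 2))  # 6
```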
Number of iterations

Σ (i = 0 to n) Σ (j = 0 to min(i, k)) 1
  = Σ (i = 0 to k) (i + 1) + Σ (i = k+1 to n) (k + 1)
  = (k + 1)(k + 2)/2 + (n - k)(k + 1)
  = (2n - k + 2)(k + 1)/2
Optimal Binary Search Tree (OBST) using Dynamic Programming
Optimal binary search trees
(Figure: four different binary search trees (a)-(d) on the identifiers 3, 7, 9, 12.)
• A full binary tree may not be an optimal binary search tree if the identifiers are searched for with different frequencies.
• Consider these two search trees. If we search for each identifier with equal probability:
  – In the first tree, the average number of comparisons for a successful search is (1 + 2 + 2 + 3 + 4)/5 = 2.4.
  – In the second tree, it is (1 + 2 + 2 + 3 + 3)/5 = 2.2, a better average behavior.
Optimal binary search trees
• In evaluating binary search trees, it is useful to add a special square node at every place there is a null link.
  – We call these nodes external nodes.
  – We also refer to the external nodes as failure nodes.
  – The remaining nodes are internal nodes.
  – A binary tree with n internal nodes has n + 1 external nodes.
Optimal binary search trees
• External / internal path length
  – The sum of all external / internal nodes' levels (counted from 0 at the root).
• For example
  – Internal path length, I, is: I = 0 + 1 + 1 + 2 + 3 = 7
  – External path length, E, is: E = 2 + 2 + 4 + 4 + 3 + 2 = 17
• The path lengths of a binary tree with n internal nodes are related by the formula E = I + 2n.
Optimal binary search trees
• Pi: probability of a successful search for identifier ai; Qi: probability of an unsuccessful search ending in external node Ei, with

  Σ (i = 1 to n) Pi + Σ (i = 0 to n) Qi = 1

• Example identifiers: 4, 5, 8, 10, 11, 12, 14.
  (Figure: a BST with root 10; internal nodes represent successful searches Pi, external nodes E0..E7 represent unsuccessful searches Qi.)
• Taking the level of the root as 1, the cost of the tree is

  Σ (i = 1 to n) Pi · level(ai) + Σ (i = 0 to n) Qi · (level(Ei) - 1)
The dynamic programming approach for Optimal binary search trees
Optimal binary search trees
• Example
  – Let n = 4, (a1, a2, a3, a4) = (do, for, void, while).
    Let (p1, p2, p3, p4) = (3, 3, 1, 1)
    and (q0, q1, q2, q3, q4) = (2, 3, 1, 1, 1).
  – Initially wii = qi, cii = 0, and rii = 0 for 0 ≤ i ≤ 4.
    w01 = p1 + w00 + w11 = p1 + q1 + w00 = 8
    c01 = w01 + min{c00 + c11} = 8, r01 = 1
  – In general, rij records the root k that achieves the minimum cost cij.
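Carrying the recurrence through for the whole example can be sketched bottom-up in Python, using w(i, j) = w(i, j-1) + pj + qj and c(i, j) = w(i, j) + min over roots k of {c(i, k-1) + c(k, j)}. The list-based 0-indexed layout is an implementation assumption.

```python
# Bottom-up OBST: c[i][j] is the cost of an optimal tree over
# (a_{i+1}, ..., a_j); r[i][j] is the root achieving it.

def obst(p, q):
    n = len(p)                        # p[0..n-1] holds p_1..p_n
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                # w(i,i) = q_i, c(i,i) = 0
    for length in range(1, n + 1):
        for i in range(n - length + 1):
            j = i + length
            w[i][j] = w[i][j - 1] + p[j - 1] + q[j]
            best = min(range(i + 1, j + 1),
                       key=lambda k: c[i][k - 1] + c[k][j])
            c[i][j] = w[i][j] + c[i][best - 1] + c[best][j]
            r[i][j] = best
    return c[0][n], r[0][n]

cost, root = obst([3, 3, 1, 1], [2, 3, 1, 1, 1])
print(cost, root)  # 32 2
```

For the (do, for, void, while) instance this gives optimal cost 32 with a2 = "for" at the root.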
Matrix-Chain multiplication
• We are given a sequence of n matrices A1A2…An to be multiplied.
Matrix-Chain multiplication (cont.)
An example: A1A2A3
A1: 10 × 100
A2: 100 × 5
A3: 5 × 50
Matrix-Chain multiplication (cont.)
If we multiply ((A1A2)A3), we perform 10 · 100 · 5 = 5000 scalar multiplications to compute the 10 × 5 matrix product A1A2, plus another 10 · 5 · 50 = 2500 scalar multiplications to multiply this matrix by A3, for a total of 7500 scalar multiplications.
If we multiply (A1(A2A3)), we perform 100 · 5 · 50 = 25 000 scalar multiplications to compute the 100 × 50 matrix product A2A3, plus another 10 · 100 · 50 = 50 000 scalar multiplications to multiply A1 by this matrix, for a total of 75 000 scalar multiplications.
Matrix-Chain multiplication (cont.)
• The problem:
  Given a chain A1, A2, ..., An of n matrices, where matrix Ai has dimension pi-1 × pi, fully parenthesize the product A1A2…An in a way that minimizes the number of scalar multiplications.
Matrix-Chain multiplication (cont.)

m[i, j] = 0                                                    if i = j
m[i, j] = min (i ≤ k < j) { m[i, k] + m[k+1, j] + pi-1 pk pj } if i < j
Matrix-Chain multiplication (Contd.)
MATRIX-CHAIN-ORDER(p)
n ← length[p] - 1
for i ← 1 to n
  do m[i, i] ← 0
for l ← 2 to n
  do for i ← 1 to n - l + 1
       do j ← i + l - 1
          m[i, j] ← ∞
          for k ← i to j - 1
            do q ← m[i, k] + m[k+1, j] + pi-1 pk pj
               if q < m[i, j]
                 then m[i, j] ← q
                      s[i, j] ← k
return m and s
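The pseudocode above transcribes directly into Python; the dimension vector p = (30, 35, 15, 5, 10, 20, 25) used to exercise it is the six-matrix instance from the next slide.

```python
# MATRIX-CHAIN-ORDER in Python: A_i has dimension p[i-1] x p[i];
# m[i][j] is the minimum scalar-multiplication count for A_i..A_j
# and s[i][j] records the best split point k.

import sys

def matrix_chain_order(p):
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):              # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = sys.maxsize
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

m, s = matrix_chain_order([30, 35, 15, 5, 10, 20, 25])
print(m[1][6], m[2][5])  # 15125 7125
```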
Matrix-Chain multiplication (cont.)
An example: matrix dimensions
A1: 30 × 35
A2: 35 × 15
A3: 15 × 5
A4: 5 × 10
A5: 10 × 20
A6: 20 × 25

m[2, 5] = min {
  m[2, 2] + m[3, 5] + p1 p2 p5 = 0 + 2500 + 35 · 15 · 20 = 13000,
  m[2, 3] + m[4, 5] + p1 p3 p5 = 2625 + 1000 + 35 · 5 · 20 = 7125,
  m[2, 4] + m[5, 5] + p1 p4 p5 = 4375 + 0 + 35 · 10 · 20 = 11375
} = 7125
Matrix-Chain multiplication (cont.)
Step 4: Constructing an optimal solution
An optimal solution can be constructed from the computed information stored in the table s[1...n, 1...n]: starting from s[1, n], each entry s[i, j] gives the split point k at which the optimal parenthesization of Ai…Aj divides the product.
RUNNING TIME: O(n³), from the three nested loops of MATRIX-CHAIN-ORDER.