
DAA Unit-II
Greedy and Dynamic Programming

Greedy Method
Greedy Method
 Greedy Principle: Greedy algorithms are typically used to solve optimization problems. Most of these problems have n inputs and require us to obtain a subset that satisfies some constraints. Any subset that satisfies these constraints is called a feasible solution. We are required to find a feasible solution that either minimizes or maximizes a given objective function. In the most common situation we have:
Greedy Method
 Subset Paradigm: Devise an algorithm that works in stages.
 Consider the inputs in an order based on some selection procedure.
 Use some optimization measure for the selection procedure.
 At every stage, examine an input to see whether it leads to an optimal solution.
 If the inclusion of the input into the partial solution yields an infeasible solution, discard the input; otherwise, add it to the partial solution.
Control Abstraction for Greedy

function greedy(C: set): set;
begin
  S := Ø;
  while (not solution(S) and C ≠ Ø) do
  begin
    x := select(C);
    C := C - {x};
    if feasible(S ∪ {x})
      then S := S ∪ {x};
  end;
  if solution(S) then return(S) else return(Ø);
end;
Control Abstraction – Explanation
 C: the set of candidate objects.
 S: the set of objects that have already been selected.
 feasible(): a function that checks whether a set is a feasible solution.
 solution(): a function that checks whether a set provides a complete solution.
 select(): a function for choosing the most promising next object.
 An objective function that we are trying to optimize.
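The control abstraction above can be turned into runnable code. Below is a minimal Python sketch of the same loop; the parameters select, feasible, and solution are hypothetical stand-ins for the problem-specific procedures just described, not part of the original pseudocode.

def greedy(candidates, select, feasible, solution):
    # Generic greedy control abstraction (sketch).
    C = set(candidates)
    S = set()
    while not solution(S) and C:
        x = select(C)             # choose by the optimization measure
        C.remove(x)
        if feasible(S | {x}):     # keep x only if the partial solution stays feasible
            S = S | {x}
    return S if solution(S) else set()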
Greedy Vs Divide and Conquer
 Greedy: used when we need to find an optimal solution. Divide and Conquer: not aimed at optimization; used when the problem has only one solution.
 Greedy: does not work in parallel. Divide and Conquer: works in parallel by dividing the big problem into smaller subproblems and solving them in parallel.
 Greedy examples: Knapsack, Activity Selection. Divide and Conquer examples: Sorting, Searching.
Knapsack Problem using Greedy Method
The Knapsack Problems
 The knapsack problem is an optimization problem: given a set of n items, each with a weight w and profit p, determine the amount of each item to include in a knapsack such that the total weight is less than or equal to a given knapsack limit M and the total profit is maximum.
The Knapsack Problems
 The Integer Knapsack Problem:

Maximize $\sum_{i=1}^{n} p_i x_i$

Subject to $\sum_{i=1}^{n} w_i x_i \le M$

 The 0-1 Knapsack Problem: same as the integer knapsack except that the values of the $x_i$'s are restricted to 0 or 1.
 The Fractional Knapsack Problem: same as the integer knapsack except that the values of the $x_i$'s are between 0 and 1.
The knapsack algorithm
 The greedy algorithm:
Step 1: Sort the items by $p_i / w_i$ into nonincreasing order.
Step 2: Put the objects into the knapsack according to the sorted sequence, as far as the capacity allows.
 e.g. n = 3, M = 20, (p1, p2, p3) = (25, 24, 15), (w1, w2, w3) = (18, 15, 10)
Sol: p1/w1 = 25/18 ≈ 1.39, p2/w2 = 24/15 = 1.6, p3/w3 = 15/10 = 1.5
Optimal solution: x1 = 0, x2 = 1, x3 = 1/2, giving total profit 24 + 15/2 = 31.5.
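As a concrete illustration, here is a short Python sketch of this greedy procedure for the fractional variant (the function name and return format are my own choices, not from the slides):

def fractional_knapsack(profits, weights, capacity):
    # Greedy fractional knapsack: take items in nonincreasing p/w order.
    order = sorted(range(len(profits)),
                   key=lambda i: profits[i] / weights[i], reverse=True)
    x = [0.0] * len(profits)              # fraction of each item taken
    total = 0.0
    for i in order:
        if capacity <= 0:
            break
        take = min(weights[i], capacity)  # take as much of item i as fits
        x[i] = take / weights[i]
        total += profits[i] * x[i]
        capacity -= take
    return total, x

print(fractional_knapsack([25, 24, 15], [18, 15, 10], 20))
# -> (31.5, [0.0, 1.0, 0.5]), matching the example above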
Job Sequencing using Greedy Method
JOB SEQUENCING WITH DEADLINES
The problem is stated as below.
 There are n jobs to be processed on a machine.
 Each job i has a deadline di ≥ 0 and profit pi ≥ 0.
 The profit pi is earned iff the job is completed by its deadline.
 Only one machine is available for processing jobs.
 A job is completed if it is processed on the machine for unit time.
 Only one job is processed at a time on the machine.
JOB SEQUENCING WITH DEADLINES (Contd..)
 A feasible solution is a subset of jobs J such that each job is completed by its deadline.
 An optimal solution is a feasible solution with maximum profit value.
 Example: Let n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27) and (d1, d2, d3, d4) = (2, 1, 2, 1).
JOB SEQUENCING WITH DEADLINES (Contd..)

Sr.No.  Feasible    Processing        Profit value
        Solution    Sequence
(i)     (1, 2)      (2, 1)            110
(ii)    (1, 3)      (1, 3) or (3, 1)  115
(iii)   (1, 4)      (4, 1)            127 (the optimal one)
(iv)    (2, 3)      (2, 3)            25
(v)     (3, 4)      (4, 3)            42
(vi)    (1)         (1)               100
(vii)   (2)         (2)               10
(viii)  (3)         (3)               15
(ix)    (4)         (4)               27
GREEDY ALGORITHM TO OBTAIN AN OPTIMAL SOLUTION for Job Scheduling
 Consider the jobs in decreasing order of profits, subject to the constraint that the resulting job sequence J is a feasible solution.
 In the example considered before, the decreasing profit vector is (100, 27, 15, 10), i.e. (p1, p4, p3, p2), with corresponding deadlines (2, 1, 2, 1), i.e. (d1, d4, d3, d2).
GREEDY ALGORITHM TO OBTAIN AN OPTIMAL SOLUTION (Contd..)

J = {1} is a feasible one.
J = {1, 4} is a feasible one with processing sequence (4, 1).
J = {1, 3, 4} is not feasible.
J = {1, 2, 4} is not feasible.
J = {1, 4} is optimal.
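A Python sketch of this greedy strategy, using the standard latest-free-slot placement for unit-time jobs (names are mine; a sketch, not the slides' notation):

def job_sequencing(profits, deadlines):
    # Try jobs in nonincreasing profit order; place each in the latest
    # free time slot at or before its deadline, if any slot remains.
    n = len(profits)
    order = sorted(range(n), key=lambda i: profits[i], reverse=True)
    slot = [None] * (max(deadlines) + 1)   # slot[t] = job run in slot t, t >= 1
    for i in order:
        t = deadlines[i]
        while t > 0 and slot[t] is not None:
            t -= 1
        if t > 0:
            slot[t] = i
    chosen = [j for j in slot[1:] if j is not None]
    return chosen, sum(profits[j] for j in chosen)

print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))
# -> ([3, 0], 127): jobs 4 and 1 (0-based indices 3 and 0), profit 127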
Activity Selection using Greedy Method
The activity selection problem
 Problem: n activities, S = {1, 2, …, n}; each activity i has a start time si and a finish time fi, with si ≤ fi.
 Activity i occupies the time interval [si, fi).
 Activities i and j are compatible if si ≥ fj or sj ≥ fi.
 The problem is to select a maximum-size set of mutually compatible activities.
Example:
i   1  2  3  4  5  6  7   8   9   10  11
si  1  3  0  5  3  5  6   8   8   2   12
fi  4  5  6  7  8  9  10  11  12  13  14

 The solution set = {1, 4, 8, 11}
 Algorithm:
Step 1: Sort the fi into nondecreasing order. After sorting, f1 ≤ f2 ≤ f3 ≤ … ≤ fn.
Step 2: Add the next activity i to the solution set if i is compatible with every activity already in the solution set.
Step 3: Stop if all activities are examined. Otherwise, go to Step 2.
Solution of the example:
i   si  fi  accept
1   1   4   Yes
2   3   5   No
3   0   6   No
4   5   7   Yes
5   3   8   No
6   5   9   No
7   6   10  No
8   8   11  Yes
9   8   12  No
10  2   13  No
11  12  14  Yes

Solution = {1, 4, 8, 11}
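The same procedure in Python, assuming the activities are given as parallel start/finish lists (a sketch; the names are mine):

def activity_selection(starts, finishes):
    # Greedy activity selection: sort by finish time, keep compatible ones.
    order = sorted(range(len(starts)), key=lambda i: finishes[i])
    chosen, last_finish = [], float("-inf")
    for i in order:
        if starts[i] >= last_finish:   # compatible with everything chosen so far
            chosen.append(i + 1)       # 1-based activity numbers as on the slide
            last_finish = finishes[i]
    return chosen

s = [1, 3, 0, 5, 3, 5, 6, 8, 8, 2, 12]
f = [4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14]
print(activity_selection(s, f))   # -> [1, 4, 8, 11]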
Dynamic Programming
Dynamic Programming
 Principle: Dynamic Programming is an algorithmic paradigm that solves a given complex problem by breaking it into subproblems and storing the results of the subproblems to avoid computing the same results again.
 The following are the two main properties of a problem that suggest that it can be solved using Dynamic Programming:
1) Overlapping Subproblems
2) Optimal Substructure
Dynamic Programming
 1) Overlapping Subproblems: Dynamic Programming is mainly used when solutions of the same subproblems are needed again and again. In dynamic programming, computed solutions to subproblems are stored in a table so that they do not have to be recomputed.

 2) Optimal Substructure: A given problem has the Optimal Substructure Property if an optimal solution of the given problem can be obtained by using optimal solutions of its subproblems.
The principle of optimality
 Dynamic programming is a technique for finding an optimal solution.
 The principle of optimality applies if the optimal solution to a problem always contains optimal solutions to all subproblems.
Differences between Greedy, D&C and Dynamic Programming
 Greedy. Build up a solution incrementally, myopically optimizing some local criterion.
 Divide-and-conquer. Break up a problem into two sub-problems, solve each sub-problem independently, and combine the solutions to the sub-problems to form a solution to the original problem.
 Dynamic programming. Break up a problem into a series of overlapping sub-problems, and build up solutions to larger and larger sub-problems.
Divide and Conquer Vs Dynamic Programming
 Divide-and-Conquer: a top-down approach.
 Many smaller instances are computed more than once.
 Dynamic programming: a bottom-up approach.
 Solutions for smaller instances are stored in a table for later use.
0/1 Knapsack using Dynamic Programming
0 - 1 Knapsack Problem
Knapsack problem:
 Given n objects, where item i weighs wi > 0 and has profit pi > 0, and a knapsack of capacity M.
 xi = 1 if object i is placed in the knapsack, otherwise xi = 0.

Maximize $\sum_{i=1}^{n} p_i x_i$

Subject to $\sum_{i=1}^{n} w_i x_i \le M$, with $x_i \in \{0, 1\}$
0 - 1 Knapsack Problem
 Si = the set of pairs (P, W), i.e. the (profit, weight) combinations obtainable from the first i objects.
 Si1 = {(P + pi+1, W + wi+1) | (P, W) ∈ Si}, i.e. the set obtained by adding object i+1 to every pair of Si.
 Si+1 is then obtained by merging Si and Si1 and discarding dominated pairs.
0/1 Knapsack - Example

Item  Profit  Weight
1     1       2
2     2       3
3     5       4

• Number of objects n = 3
• Capacity of knapsack M = 6
0/1 Knapsack – Example Solution
 S0 = {(0,0)}
 S01 is obtained by adding the profit and weight of the first object to S0:
 S01 = {(1,2)}
 S1 is obtained by merging S0 and S01:
 S1 = {(0,0), (1,2)}
 S11 is obtained by adding the profit and weight of the second object to S1:
 S11 = {(2,3), (3,5)}
 S2 is obtained by merging S1 and S11:
 S2 = {(0,0), (1,2), (2,3), (3,5)}
 S21 is obtained by adding the profit and weight of the third object to S2:
 S21 = {(5,4), (6,6), (7,7), (8,9)}
 S3 is obtained by merging S2 and S21:
 S3 = {(0,0), (1,2), (2,3), (3,5), (5,4), (6,6), (7,7), (8,9)}
0/1 Knapsack – Example Solution
 Pairs (7,7) and (8,9) are deleted as they exceed the weight capacity of the knapsack (M = 6).
 Pair (3,5) is deleted by the dominance rule, since (5,4) gives more profit at less weight.
 So: S3 = {(0,0), (1,2), (2,3), (5,4), (6,6)}
 The last pair is (6,6), which is generated in S3 but not in S2, so x3 = 1.
 Now subtract the profit and weight of the third object from (6,6): (6-5, 6-4) = (1,2).
 (1,2) already appears in S1, so it was not generated by object 2, and x2 = 0.
 (1,2) does not appear in S0, so x1 = 1; subtracting the profit and weight of the first object from (1,2) gives (1-1, 2-2) = (0,0).
 Answer: the total profit is 6, and objects 1 and 3 are selected for the knapsack.
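The pair-set method traced above can be coded directly. Below is a Python sketch (the function name and the history list are my own devices for the traceback; the dominance pruning follows the rule stated on the slide):

def knapsack_01_pairs(profits, weights, M):
    # S holds nondominated (profit, weight) pairs for the objects seen so far.
    S = [(0, 0)]
    history = [S]                            # keep each S^i for the traceback
    for p, w in zip(profits, weights):
        S1 = [(P + p, W + w) for (P, W) in S if W + w <= M]
        merged = sorted(set(S) | set(S1), key=lambda t: (t[1], -t[0]))
        S = []
        for P, W in merged:                  # dominance rule: drop a pair if an
            if not S or P > S[-1][0]:        # earlier pair has >= profit, <= weight
                S.append((P, W))
        history.append(S)
    best = max(S)                            # pair with maximum profit
    P, W, x = best[0], best[1], [0] * len(profits)
    for i in range(len(profits), 0, -1):     # object i taken iff its pair is
        if (P, W) not in history[i - 1]:     # absent from S^{i-1}
            x[i - 1] = 1
            P, W = P - profits[i - 1], W - weights[i - 1]
    return best[0], x

print(knapsack_01_pairs([1, 2, 5], [2, 3, 4], 6))   # -> (6, [1, 0, 1])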
Binomial Coefficient using Dynamic Programming
The Binomial Coefficient

$\binom{n}{k} = \dfrac{n!}{k!\,(n-k)!}$ for $0 \le k \le n$

$\binom{n}{k} = \begin{cases} \binom{n-1}{k-1} + \binom{n-1}{k} & 0 < k < n \\ 1 & k = 0 \text{ or } k = n \end{cases}$
The recursive algorithm

binomialCoef(n, k)
1. if k = 0 or k = n
2.   then return 1
3.   else return binomialCoef(n-1, k-1) + binomialCoef(n-1, k)
Dynamic Solution for Binomial Coefficient
• Use a matrix B of n+1 rows and k+1 columns, where B[n, k] = $\binom{n}{k}$.
• Establish a recursive property, rewritten in terms of the matrix B:
  B[i, j] = B[i-1, j-1] + B[i-1, j]   for 0 < j < i
  B[i, j] = 1                         for j = 0 or j = i
• Solve all "smaller instances of the problem" in a bottom-up fashion by computing the rows of B in sequence, starting with the first row.
The B Matrix

     0  1  2  3  4  ...  j  ...  k
0    1
1    1  1
2    1  2  1
3    1  3  3  1
4    1  4  6  4  1
...
i    Row i is filled from row i-1: B[i, j] = B[i-1, j-1] + B[i-1, j]
...
n
Example: Compute B[4,2] = $\binom{4}{2}$
• Row 0: B[0,0] = 1
• Row 1: B[1,0] = 1, B[1,1] = 1
• Row 2: B[2,0] = 1, B[2,1] = B[1,0] + B[1,1] = 2, B[2,2] = 1
• Row 3: B[3,0] = 1, B[3,1] = B[2,0] + B[2,1] = 3, B[3,2] = B[2,1] + B[2,2] = 3
• Row 4: B[4,0] = 1, B[4,1] = B[3,0] + B[3,1] = 4, B[4,2] = B[3,1] + B[3,2] = 6
Algorithm for Binomial Coefficient
• Algo bin(n, k)
1. for i = 0 to n                        // every row
2.   for j = 0 to minimum(i, k)
3.     if j = 0 or j = i                 // column 0 or the diagonal
4.       then B[i, j] = 1
5.       else B[i, j] = B[i-1, j-1] + B[i-1, j]
6. return B[n, k]
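A direct Python translation of this bottom-up algorithm (a sketch; the function name is mine):

def binomial(n, k):
    # B[i][j] holds C(i, j), filled row by row.
    B = [[0] * (k + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        for j in range(min(i, k) + 1):
            if j == 0 or j == i:            # column 0 or the diagonal
                B[i][j] = 1
            else:
                B[i][j] = B[i - 1][j - 1] + B[i - 1][j]
    return B[n][k]

print(binomial(4, 2))   # -> 6, as in the worked example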
Number of iterations

$\sum_{i=0}^{n} \sum_{j=0}^{\min(i,k)} 1 = \sum_{i=0}^{k} (i+1) + \sum_{i=k+1}^{n} (k+1) = \frac{(k+1)(k+2)}{2} + (n-k)(k+1) = \frac{(2n-k+2)(k+1)}{2}$
Optimal Binary Search Tree (OBST) using Dynamic Programming
Optimal binary search trees
• Example: binary search trees for the keys 3, 7, 9, 12.
[Figure: four different binary search trees (a)-(d) over these keys, with roots 3, 7, 7, and 12 respectively.]
• A full binary tree may not be an optimal binary search tree if the identifiers are searched for with different frequencies.
• Consider two search trees. If we search for each identifier with equal probability:
 In the first tree, the average number of comparisons for a successful search is (1+2+2+3+4)/5 = 2.4.
 The second tree gives a better average behavior: (1+2+2+3+3)/5 = 2.2.
[Figure: the two search trees, labeled with the level of each node.]
Optimal binary search trees
• In evaluating binary search trees, it is useful to add a special square node at every place there is a null link.
 We call these nodes external nodes.
 We also refer to the external nodes as failure nodes.
 The remaining nodes are internal nodes.
 A binary tree with n internal nodes has n + 1 external nodes.
Optimal binary search trees
• External / internal path length
 The sum of all external / internal nodes' levels.
• For example:
 Internal path length: I = 0 + 1 + 1 + 2 + 3 = 7
 External path length: E = 2 + 2 + 4 + 4 + 3 + 2 = 17
• For a binary tree with n internal nodes, the two are related by the formula E = I + 2n.
[Figure: a binary tree with internal nodes at levels 0-3 and external nodes at levels 2-4.]
Optimal binary search trees
• n identifiers are given.
 Pi, 1 ≤ i ≤ n: probability of a successful search for the i-th identifier.
 Qi, 0 ≤ i ≤ n: probability of an unsuccessful search.
 where

$\sum_{i=1}^{n} P_i + \sum_{i=0}^{n} Q_i = 1$
• Identifiers: 4, 5, 8, 10, 11, 12, 14.
• Internal node: successful search, Pi.
• External node: unsuccessful search, Qi.
[Figure: a BST rooted at 10, with internal nodes 4, 5, 8, 10, 11, 12, 14 and external nodes E0 through E7.]

The expected cost of a binary search tree:

$\sum_{i=1}^{n} P_i \cdot \mathrm{level}(a_i) + \sum_{i=0}^{n} Q_i \cdot (\mathrm{level}(E_i) - 1)$

 The level of the root is 1.
The dynamic programming approach for Optimal binary search trees

To solve the OBST problem we must find the weight (w), cost (c), and root (r) tables, given by:

w(i, j) = p(j) + q(j) + w(i, j-1)

c(i, j) = min { c(i, k-1) + c(k, j) } + w(i, j), minimized over i < k ≤ j

r(i, j) = the value of k for which c(i, j) is minimal
Optimal binary search trees
• Example
 Let n = 4 and (a1, a2, a3, a4) = (do, for, void, while). Let (p1, p2, p3, p4) = (3, 3, 1, 1) and (q0, q1, q2, q3, q4) = (2, 3, 1, 1, 1).
 Initially wii = qi, cii = 0, and rii = 0 for 0 ≤ i ≤ 4.

w01 = p1 + w00 + w11 = p1 + q1 + w00 = 8
c01 = w01 + min{c00 + c11} = 8, r01 = 1

w12 = p2 + w11 + w22 = p2 + q2 + w11 = 7
c12 = w12 + min{c11 + c22} = 7, r12 = 2

w23 = p3 + w22 + w33 = p3 + q3 + w22 = 3
c23 = w23 + min{c22 + c33} = 3, r23 = 3

w34 = p4 + w33 + w44 = p4 + q4 + w33 = 3
c34 = w34 + min{c33 + c44} = 3, r34 = 4
Optimal binary search trees
• wii = qi                                   (a1, a2, a3, a4) = (do, for, void, while)
• wij = pk + wi,k-1 + wkj                    (p1, p2, p3, p4) = (3, 3, 1, 1)
• cij = wij + min{ci,k-1 + ckj}, i < k ≤ j   (q0, q1, q2, q3, q4) = (2, 3, 1, 1, 1)
• cii = 0
• rii = 0
• rij = k, the value that attains the minimum

Computation is carried out row-wise from row 0 to row 4, and the entry (w04, c04, r04) gives the optimal search tree as the result.
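A compact Python sketch of these w/c/r recurrences (the table names follow the slides; the driver name and return values are my own):

def obst(p, q):
    # p[1..n]: successful-search weights (p[0] unused); q[0..n]: failure weights.
    n = len(p) - 1
    w = [[0] * (n + 1) for _ in range(n + 1)]
    c = [[0] * (n + 1) for _ in range(n + 1)]
    r = [[0] * (n + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        w[i][i] = q[i]                       # base cases: w(i,i) = q_i, c(i,i) = 0
    for length in range(1, n + 1):           # fill diagonals j - i = length
        for i in range(n - length + 1):
            j = i + length
            w[i][j] = p[j] + q[j] + w[i][j - 1]
            best, best_k = None, None
            for k in range(i + 1, j + 1):    # try each a_k as the root
                cost = c[i][k - 1] + c[k][j]
                if best is None or cost < best:
                    best, best_k = cost, k
            c[i][j] = w[i][j] + best
            r[i][j] = best_k
    return c[0][n], r

cost, r = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])   # the (do, for, void, while) data
print(cost, r[0][4])   # -> 32 2: minimum cost 32, with a2 = "for" as the root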
Chain Matrix Multiplication using Dynamic Programming
Matrix-Chain multiplication
• We are given a sequence A1, A2, ..., An
• And we wish to compute A1 A2 ... An
Matrix-Chain multiplication (cont.)
• Matrix multiplication is associative, and so all parenthesizations yield the same product.
• For example, if the chain of matrices is A1 A2 A3 A4, then the product can be fully parenthesized in five distinct ways:
(A1 (A2 (A3 A4)))
(A1 ((A2 A3) A4))
((A1 A2) (A3 A4))
((A1 (A2 A3)) A4)
(((A1 A2) A3) A4)
Matrix-Chain multiplication
MATRIX-MULTIPLY(A, B)
if columns[A] ≠ rows[B]
  then error "incompatible dimensions"
  else for i ← 1 to rows[A]
         do for j ← 1 to columns[B]
              do C[i, j] ← 0
                 for k ← 1 to columns[A]
                   do C[i, j] ← C[i, j] + A[i, k] · B[k, j]
       return C
Matrix-Chain multiplication (cont.)
Cost of the matrix multiplication:

An example: A1 A2 A3
A1: 10 × 100
A2: 100 × 5
A3: 5 × 50
Matrix-Chain multiplication (cont.)
If we multiply ((A1 A2) A3), we perform 10 · 100 · 5 = 5000 scalar multiplications to compute the 10 × 5 matrix product A1 A2, plus another 10 · 5 · 50 = 2500 scalar multiplications to multiply this matrix by A3, for a total of 7500 scalar multiplications.
If we multiply (A1 (A2 A3)), we perform 100 · 5 · 50 = 25 000 scalar multiplications to compute the 100 × 50 matrix product A2 A3, plus another 10 · 100 · 50 = 50 000 scalar multiplications to multiply A1 by this matrix, for a total of 75 000 scalar multiplications.
Matrix-Chain multiplication (cont.)
• The problem:
Given a chain A1, A2, ..., An of n matrices, where matrix Ai has dimension pi-1 × pi, fully parenthesize the product A1 A2 ... An in a way that minimizes the number of scalar multiplications.
Matrix-Chain multiplication (cont.)
Step 1: The structure of an optimal parenthesization
• Find the optimal substructure and then use it to construct an optimal solution to the problem from optimal solutions to subproblems.
• Let Ai...j denote the matrix product Ai Ai+1 ... Aj.
• Any parenthesization of Ai Ai+1 ... Aj must split the product between Ak and Ak+1 for some i ≤ k < j.
Matrix-Chain multiplication (cont.)
Step 2: A recursive solution
• Let m[i, j] be the minimum number of scalar multiplications needed to compute the matrix Ai...j.
• Thus, the cost of a cheapest way to compute A1...n would be m[1, n].
Matrix-Chain multiplication (cont.)
Recursive definition of the minimum cost of a parenthesization:

$m[i, j] = \begin{cases} 0 & \text{if } i = j \\ \min_{i \le k < j} \{ m[i, k] + m[k+1, j] + p_{i-1} p_k p_j \} & \text{if } i < j \end{cases}$
Matrix-Chain multiplication (cont.)
That is, s[i, j] equals a value k such that

$m[i, j] = m[i, k] + m[k+1, j] + p_{i-1} p_k p_j$, i.e. $s[i, j] = k$
Matrix-Chain multiplication (cont.)
Step 3: Computing the optimal costs
 It is easy to write a recursive algorithm based on the recurrence for computing m[i, j].
 We compute the optimal cost instead by using a tabular, bottom-up approach.
Matrix-Chain multiplication (Contd.)
MATRIX-CHAIN-ORDER(p)
n ← length[p] - 1
for i ← 1 to n
  do m[i, i] ← 0
for l ← 2 to n
  do for i ← 1 to n - l + 1
       do j ← i + l - 1
          m[i, j] ← ∞
          for k ← i to j - 1
            do q ← m[i, k] + m[k+1, j] + p(i-1) · p(k) · p(j)
               if q < m[i, j]
                 then m[i, j] ← q
                      s[i, j] ← k
return m and s
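The same algorithm in Python, together with a small helper that rebuilds the parenthesization from s (that recovery step is described as Step 4 below; the names here are mine):

import math

def matrix_chain_order(p):
    # p[i-1] x p[i] is the dimension of A_i; m[i][j] = min scalar multiplications.
    n = len(p) - 1
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]
    for l in range(2, n + 1):                # l = chain length
        for i in range(1, n - l + 2):
            j = i + l - 1
            m[i][j] = math.inf
            for k in range(i, j):
                q = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if q < m[i][j]:
                    m[i][j], s[i][j] = q, k
    return m, s

def parens(s, i, j):
    # Rebuild the optimal parenthesization from the split table s.
    if i == j:
        return "A%d" % i
    k = s[i][j]
    return "(" + parens(s, i, k) + parens(s, k + 1, j) + ")"

m, s = matrix_chain_order([10, 100, 5, 50])  # the A1 A2 A3 example above
print(m[1][3], parens(s, 1, 3))              # -> 7500 ((A1A2)A3)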
Matrix-Chain multiplication (cont.)
An example:

matrix  dimension
A1      30 × 35
A2      35 × 15
A3      15 × 5
A4      5 × 10
A5      10 × 20
A6      20 × 25

$m[2,5] = \min \begin{cases} m[2,2] + m[3,5] + p_1 p_2 p_5 = 0 + 2500 + 35 \cdot 15 \cdot 20 = 13000 \\ m[2,3] + m[4,5] + p_1 p_3 p_5 = 2625 + 1000 + 35 \cdot 5 \cdot 20 = 7125 \\ m[2,4] + m[5,5] + p_1 p_4 p_5 = 4375 + 0 + 35 \cdot 10 \cdot 20 = 11375 \end{cases} = 7125$
Matrix-Chain multiplication (cont.)
Step 4: Constructing an optimal solution
An optimal solution can be constructed from the computed information stored in the table s[1...n, 1...n]. Each node m(i, j) splits at k = s[i, j] into m(i, k) and m(k+1, j):

m(1,6), k=3
  m(1,3), k=1
    m(1,1)
    m(2,3), k=2
      m(2,2)
      m(3,3)
  m(4,6), k=5
    m(4,5), k=4
      m(4,4)
      m(5,5)
    m(6,6)

Final Answer: ((A1 (A2 A3)) ((A4 A5) A6))
Matrix-Chain multiplication (Contd.)

RUNNING TIME:

Matrix-chain order yields a running time of O(n³).