DAA - Unit II (2020)

Greedy Algorithmic Strategies

Greed is good.
(Some of the time)
Reference: Horowitz & Sahni



Introduction
• Concepts
– Choose the best possible option at each step.
– The hope is that this sequence of decisions leads to the best overall solution.
• Greedy algorithms do not always yield optimal solutions.



Greedy strategy :
The greedy method is perhaps the most straightforward design technique for solving problems in which an optimum (minimum or maximum) solution is desired for the given input.
Characteristics :
• There are ‘n’ inputs
• The objective is to find a subset that satisfies some constraints
• Any subset that satisfies these constraints is called a feasible solution
• We need to find a feasible solution that either minimizes or maximizes a given objective function
• There is usually an obvious way to determine a feasible solution, but not necessarily an optimal solution
Examples : Knapsack problem, Job sequencing with deadlines, Optimal
Merge Patterns, Minimum Spanning Trees (Prim and Kruskal) and Shortest
path problem (Dijkstra’s algorithm).
Greedy Algorithms
• Find the best choice for the step at hand
• The hope is that this choice leads to the best overall solution
• Examples
– Dijkstra’s algorithm for shortest paths with only non-negative edge weights
– Fractional Knapsack Problem
– Job Sequencing with Deadlines
– Optimal Merge Patterns
– Minimal Spanning Trees
Note: A greedy algorithm picks the locally best value without worrying about future decisions and never revisits its choice. The algorithm sometimes fails, e.g., the coin problem Rs. 5 = 2 + 2 + 1.
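
The slide's Rs. 5 coin example is terse; the failure mode is easiest to see on a denomination set where the largest-coin-first rule provably loses. A minimal Python sketch (our illustration; the denominations {1, 3, 4} and amount 6 are assumptions chosen to exhibit the failure, not taken from the slides):

    # Greedy coin selection: repeatedly take the largest coin that still fits.
    def greedy_coins(denoms, amount):
        coins = []
        for d in sorted(denoms, reverse=True):
            while amount >= d:
                coins.append(d)
                amount -= d
        return coins

    # With denominations {1, 3, 4} and amount 6, greedy returns [4, 1, 1]
    # (three coins), although [3, 3] (two coins) is optimal.
    print(greedy_coins([1, 3, 4], 6))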



Greedy strategy : Algorithm Design
• A greedy algorithm works in stages, considering one input at a time.
• At each stage, a decision is made as to whether a particular input belongs in the partially constructed optimal solution.
• A selection procedure picks the next input; if including the selected input keeps the partial solution feasible, it is added, otherwise it is discarded.
• The selection procedure itself is based on some optimization measure, which may be the objective function.
Outline
• Elements of greedy algorithm
– Greedy Strategy
– Principle of Optimality
– Knapsack Problem
– Job Scheduling with Deadlines
– Optimal Merge Patterns
– Huffman coding
• Dynamic Programming
– 0/1 Knapsack Problem
– OBST
– Multistage Graph



Greedy-choice property

• A globally optimal solution is derived from locally optimal (greedy) choices.

• When choices are considered, the choice that looks best in the current problem is chosen, without considering results from subproblems.



Optimal substructures

• A problem has optimal substructure if an optimal solution to the problem is composed of optimal solutions to subproblems.

• This property is important for both greedy algorithms and dynamic programming.



Steps in Design of Greedy Algorithms

• Determine the optimal substructure of the problem.
• Develop a recursive solution.
• Prove that at any stage of the recursion, one of the optimal choices is the greedy choice; thus it is always safe to make the greedy choice.
• Show that all but one of the subproblems induced by having made the greedy choice are empty.
• Develop a recursive algorithm that implements the greedy strategy.
• Convert the recursive algorithm to an iterative algorithm.



Design Shortcuts
• Formulate the optimization problem so that, after a choice is made, only one subproblem is left to be solved.
• Prove that there is always an optimal solution to the original problem that makes the greedy choice, so that the greedy choice is always safe.
• Demonstrate that, having made the greedy choice, what remains is a subproblem with the property that combining an optimal solution to the subproblem with the greedy choice we have made yields an optimal solution to the original problem.



Greedy Procedure

Procedure Greedy(solution, a, n)
// a is the array of inputs; n is the size of the array
{
    solution := Ø;
    for i := 1 to n do
    {
        x := Select(a);
        if Feasible(x, solution) then
            Add(x, solution);
    }
    return solution;
}

Note: For any problem instance, this procedure constructs only one candidate solution.



Knapsack Problem (Fractional)

• Problem definition: given n objects and a knapsack, where object i has a weight wi and the knapsack has a capacity m.
• If a fraction xi of object i is placed into the knapsack, a profit pi·xi is earned.
• The objective is to obtain a filling of the knapsack that maximizes the total profit.
• Problem formulation (formulas 4.1-4.3):
• A feasible solution is any set satisfying (4.2) and (4.3).
• An optimal solution is a feasible solution for which (4.1) is maximized.

maximize Σ (1 ≤ i ≤ n) pi·xi ........ (4.1)
subject to Σ (1 ≤ i ≤ n) wi·xi ≤ m ........ (4.2)
and 0 ≤ xi ≤ 1, 1 ≤ i ≤ n ........ (4.3)



(Board illustration: with a bag of capacity m, say 10 kg, and objects such as 15 kg of gold, 15 kg of silver, and 2 kg of iron, wi is the weight of the i-th object and pi the profit it earns; taking a fraction xi of object i contributes weight wi·xi and profit pi·xi, so the 10 kg capacity might be filled, e.g., as 9 kg of one object plus 1 kg of another.)
Knapsack Problem
• Example
• n = 3, m = 20, (p1, p2, p3) = (25, 24, 15), (w1, w2, w3) = (18, 15, 10)

   (x1, x2, x3)        Σ wi·xi    Σ pi·xi
1. (1/2, 1/3, 1/4)     16.5       24.25
2. (1, 2/15, 0)        20         28.2
3. (0, 2/3, 1)         20         31
4. (0, 1, 1/2)         20         31.5
• Lemma 4.1
• In case the sum of all the weights is ≤ m, then xi = 1, 1 ≤ i ≤ n is an optimal solution.
• Lemma 4.2
• All optimal solutions will fill the knapsack exactly. The knapsack problem fits the subset paradigm.

Reference: Horowitz & Sahni


Verification of the candidate solutions for n = 3, m = 20, (p1, p2, p3) = (25, 24, 15), (w1, w2, w3) = (18, 15, 10):

Solution 2, x = (1, 2/15, 0): p1·x1 = 25, p2·x2 = 24·(2/15) = 3.2, total 28.2.
Solution 3, x = (0, 2/3, 1): p2·x2 = 24·(2/3) = 16, p3·x3 = 15, total 31.
Solution 4, x = (0, 1, 1/2): p2·x2 = 24, p3·x3 = 15·(1/2) = 7.5, total 31.5.
Knapsack Problem

• E.g. We are given n objects and a knapsack. Object i has a weight wi and the knapsack has a capacity m. If a fraction xi, 0 ≤ xi ≤ 1, of object i is placed into the knapsack, then a profit pi·xi is earned. Devise an algorithm to fill the knapsack so that the profit earned is maximum.

Sol: The greedy knapsack method using profit as its measure will, at each step, choose the object that increases the profit most.

Let p[1:n] and w[1:n] contain the profits and weights respectively of the n objects, ordered so that p(i)/w(i) ≥ p(i+1)/w(i+1). m is the capacity of the bag and x[1:n] is the solution vector.



Knapsack Algorithm

Algorithm GreedyKnapsack(m, n)
// p[1:n] and w[1:n] hold the profits and weights of the n objects,
// ordered such that p[i]/w[i] >= p[i+1]/w[i+1]; m is the capacity.
{
    for i := 1 to n do x[i] := 0.0;   // initialize x
    U := m;
    for i := 1 to n do
    {
        if (w[i] > U) then break;
        x[i] := 1.0;
        U := U - w[i];
    }
    if (i <= n) then x[i] := U / w[i];
}



Greedy strategy : Knapsack Problem

Algorithm Knapsack(w, p, n, m)
// w and p are arrays of size n containing the weights and profits of
// each object respectively; m is the capacity of the knapsack.
// Objects are sorted in non-increasing order of pi / wi, 1 <= i <= n.
{
    for i := 1 to n do x[i] := 0;
    weight := 0;
    while (weight < m) do              // greedy loop
    {
        i := the best remaining object;    // selection criterion
        if ((weight + w[i]) <= m) then
        {
            x[i] := 1;
            weight := weight + w[i];
        }
        else
        {
            x[i] := (m - weight) / w[i];
            weight := m;
        }
    }
    return x;   // x is an array of size n
}
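
The pseudocode above assumes the objects are pre-sorted by profit density. A runnable Python sketch of the same greedy loop (our rendering, not from the text; names are ours, and the sort is done inside the function so unsorted input is accepted):

    def greedy_knapsack(p, w, m):
        n = len(p)
        # consider objects in non-increasing order of profit/weight
        order = sorted(range(n), key=lambda i: p[i] / w[i], reverse=True)
        x = [0.0] * n        # solution vector
        u = m                # remaining capacity
        for i in order:
            if w[i] > u:
                x[i] = u / w[i]   # take a fraction of the first object
                break             # that no longer fits whole
            x[i] = 1.0
            u -= w[i]
        return x, sum(p[i] * x[i] for i in range(n))

    # n = 3, m = 20, p = (25, 24, 15), w = (18, 15, 10) from the earlier
    # example: prints x = [0, 1.0, 0.5] with profit 31.5 (solution 4).
    print(greedy_knapsack([25, 24, 15], [18, 15, 10], 20))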
Greedy strategy : Knapsack Problem
Selection criteria for picking the best of the remaining objects:
i. Most profitable object among the remaining objects
ii. Lightest object among the remaining objects - slow filling of the knapsack so that the maximum number of objects get selected
iii. Highest profit per unit weight among the remaining objects
For example, let n = 5 and m = 100 (the GREEDY BY columns give the fraction xi chosen under each criterion):

   OBJECTS                       GREEDY BY
i    wi    pi    pi/wi    profit   weight   pi/wi
1    10    20    2.0       0        1        1
2    20    30    1.5       0        1        1
3    30    66    2.2       1        1        1
4    40    40    1.0       0.5      1        0
5    50    60    1.2       1        0        0.8
TOTAL WEIGHT               100      100      100
TOTAL PROFIT EARNED        146      156      164
Greedy strategy : Knapsack Problem
Selection criteria for the best remaining objects ... contd.
"If objects are selected in non-increasing order of pi / wi, then algorithm Knapsack finds an optimal solution."

Analysis of algorithm Knapsack:
1. Sorting the objects in non-increasing order of pi / wi takes O(n log n) time.
2. The while loop executes n times in the worst case, so it takes O(n) time.
3. Time complexity = O(n log n)
4. Space complexity = c + space required for x[1:n] = O(n)
Knapsack Algorithm
Real (fractional) knapsack:
• O(n log n)
• The solution space is infinite.

Digital or 0/1 knapsack:
• O(2^n) by exhaustive search
• The solution space has 2^n subsets.
• Note: The time complexity of the real knapsack is linearithmic while that of the digital (0/1) knapsack is exponential; hence the real knapsack is preferred where fractions are allowed.
• The digital (0/1) knapsack problem is NP-hard.



Knapsack Algorithm
E.g. n = 7, (w1, ..., w7) = (2, 3, 5, 7, 1, 4, 1)
(p1, ..., p7) = (10, 5, 15, 7, 6, 18, 3), capacity M = 15

Note: Give priority to objects with the greater p/w ratio.

p1/w1 = 10/2 = 5,    p2/w2 = 5/3 = 1.66
p3/w3 = 15/5 = 3,    p4/w4 = 7/7 = 1
p5/w5 = 6/1 = 6,     p6/w6 = 18/4 = 4.5
p7/w7 = 3/1 = 3

Fill: w5 + w1 + w6 + w3 + w7 = 1 + 2 + 4 + 5 + 1 = 13, leaving capacity 2 for a 2/3 fraction of object 2.

P = 6 + 10 + 18 + 15 + 3 + (2/3)·5 = 55.33
Optimal solution: x = (1, 2/3, 1, 0, 1, 1, 1)

Note:
If p1/w1 ≥ p2/w2 ≥ ... ≥ pn/wn, i.e., the inputs are already sorted, then no sorting is needed and the best-case time complexity is O(n).
If the inputs are not sorted as stated above, the time complexity is O(n log n).
Therefore the complexity of greedy knapsack is O(n log n).
Job Sequencing with Deadlines (page 228)

• Example
• n = 4, (p1, p2, p3, p4) = (100, 10, 15, 27), (d1, d2, d3, d4) = (2, 1, 2, 1)

   Feasible     Processing
   solution     sequence       Value
1. (1, 2)       2, 1           110
2. (1, 3)       1, 3 or 3, 1   115
3. (1, 4)       4, 1           127
4. (2, 3)       2, 3           25
5. (3, 4)       4, 3           42
6. (1)          1              100
7. (2)          2              10
8. (3)          3              15
9. (4)          4              27
Job Sequencing

1) Consider n = 4, jobs J1, J2, J3, J4 with profits P1, P2, P3, P4 = 100, 15, 10, 27 respectively and deadlines d1, d2, d3, d4 = 2, 1, 2, 1.

Maximum deadline = 2, so there are two time slots:

   | J4 | J1 |
   0    1    2

J1 and J3 have the same deadline, 2, so J = {1, 3, 4} is not feasible; select the one with the greater profit and place it as far right (as late) as possible. Since P1 = 100 and P3 = 10, choose job J1. Then select the next job with the highest profit that still fits, i.e., job J4. Maximum profit = 100 + 27 = 127.

2) Consider n = 5, d1, ..., d5 = 3, 3, 3, 4, 4 and p1, ..., p5 = 10, 20, 15, 5, 80.
Maximum profit: 80 + 20 + 15 + 10 = 125

   | J1 | J3 | J2 | J5 |
   0    1    2    3    4
Job Sequencing with Deadlines

• Greedy strategy using total profit as the optimization function
• Applying to Example 4.2:
• Begin with J = Φ
• Job 1 considered and added to J; J = {1}
• Job 4 considered and added to J; J = {1, 4}
• Job 3 considered but discarded because not feasible; J = {1, 4}
• Job 2 considered but discarded because not feasible; J = {1, 4}
• Final solution is J = {1, 4} with total profit 127; it is optimal.
• How to determine the feasibility of J?
• Trying out all permutations causes a computational explosion, since there are n! permutations.
• It is possible to check only one permutation:

• Theorem: Let J be a set of k jobs and σ = i1, i2, ..., ik a permutation of the jobs in J such that di1 ≤ di2 ≤ ... ≤ dik. Then J is a feasible solution iff the jobs in J can be processed in the order σ without violating any deadline.

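The theorem above says feasibility can be checked against the single deadline-ordered permutation; in practice one schedules each job as late as its deadline allows. A Python sketch of that greedy rule (our code, assuming unit-time jobs and integer deadlines as in the examples; jobs are 0-indexed):

    def job_sequencing(profits, deadlines):
        slot = [None] * (max(deadlines) + 1)   # slot[t] = job run in (t-1, t]
        # consider jobs in non-increasing order of profit
        for j in sorted(range(len(profits)), key=lambda j: profits[j], reverse=True):
            # place job j as late as possible, but no later than its deadline
            for t in range(deadlines[j], 0, -1):
                if slot[t] is None:
                    slot[t] = j
                    break
        chosen = [j for j in slot if j is not None]
        return chosen, sum(profits[j] for j in chosen)

    # Example 4.2: p = (100, 10, 15, 27), d = (2, 1, 2, 1)
    # prints jobs [3, 0] (i.e., J4 then J1) with profit 127.
    print(job_sequencing([100, 10, 15, 27], [2, 1, 2, 1]))
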


Optimal Merge Patterns

• Two sorted files containing n and m records respectively can be merged together to obtain one sorted file in time O(n + m).
• When more than two sorted files are to be merged together, the merge can be accomplished by repeatedly merging sorted files in pairs.
• Thus if x1, x2, x3, x4 are to be merged, we could first merge x1 and x2 to get file y1, then y1 and x3 to get y2, and finally y2 and x4 to get the desired sorted file.
• We could instead merge x1, x2 into y1 and x3, x4 into y2, and then merge y1 and y2 to get the desired sorted file.
• Thus for n sorted files, different pairings require differing amounts of computing time.
• The problem we address here is that of determining an optimal way (one requiring the fewest record moves) to pairwise merge n sorted files.



Optimal Merge Patterns

E.g. x1, x2, x3 are three sorted files of lengths 30, 20, and 10 records respectively.
• Merging x1 and x2 requires 50 record moves.
• Merging the result with x3 requires another 60 moves, so this pattern takes 110 record moves in total.
• If instead we first merge x2 and x3 (taking 30 moves) and then merge the result with x1 (taking 60 moves), the total is only 90 moves. Hence the second merge pattern is faster than the first.
• Since merging an n-record file and an m-record file requires possibly n + m record moves, the obvious selection criterion is: at each step, merge the two smallest files together.



Optimal Merge Patterns

Merge tree for files of sizes x = 10, y = 30, z = 15 (leaves; internal nodes are merge results):

        55
       /  \
      25   30
     /  \
    10   15

Sum of internal nodes: OMP cost = 25 + 55 = 80




Optimal Merge Patterns
• Time
• If the list is kept in nondecreasing order: O(n²)
• If the list is represented as a min-heap: O(n log n)
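
The O(n log n) min-heap bound comes from extracting the two smallest files and pushing back their merge, n − 1 times. A Python sketch with heapq (our code; it consumes the input list in place):

    import heapq

    def optimal_merge_cost(sizes):
        heapq.heapify(sizes)              # O(n)
        moves = 0
        while len(sizes) > 1:
            a = heapq.heappop(sizes)      # two smallest files
            b = heapq.heappop(sizes)
            moves += a + b                # one internal node of the merge tree
            heapq.heappush(sizes, a + b)
        return moves

    # Files of lengths 30, 20, 10: merges cost 30 then 60, total 90,
    # matching the record-move count computed above.
    print(optimal_merge_cost([30, 20, 10]))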



Unit - II
Dynamic Programming
Dynamic Programming

• Principle of Dynamic Programming


• Control abstraction and its time analysis
• Binomial Coefficients
• General Strategy – Principle of Optimality
• Implementation of Optimal Binary Search Tree (OBST)
• Implementation of 0/1 Knapsack Problem
• Chain Matrix Multiplication
• Implementation of Traveling Salesperson Problem (TSP)
Dynamic programming
• It is an algorithm design method that can be used when the solution to a problem can be viewed as the result of a sequence of decisions and the principle of optimality holds.
• Divide a big problem instance into sub-problems of smaller instances.
• Find the best solution for each sub-problem.
• Find the best way to combine the sub-problem solutions to solve the whole problem.
• Examples
• Multistage Graph
• 0/1 knapsack
• Implementation of OBST
• Traveling Salesperson Problem
Dynamic Programming
• Dynamic Programming (DP) is quite simple.

• It avoids calculating the same thing more than once by saving known results in a table, which gets populated as sub-instances are solved.

• Divide & Conquer is a top-down method: it divides the instance into sub-instances and solves them independently.

• DP, on the other hand, is a bottom-up technique, which usually starts with the smallest, and hence the simplest, sub-instances.

• Combining their solutions (usually solutions of previous sub-instances), we obtain solutions to sub-instances of increasing size. This process continues until we arrive at the solution of the original instance.

• In DP we get optimal solutions to sub-instances (Optimal Substructure), which are used in sub-instances of increasing size. Optimal substructure is one of the elements of DP.

• In the greedy approach, one element from the input at a time is included in the solution and always keeps its place in the final solution, whereas in DP, solutions to earlier sub-instances may not find a place in the final solution.
Dynamic programming
Control Abstraction:
DP is usually applied to optimization problems to obtain an optimal solution to the given problem from the number of solutions available.

Steps involved in dynamic programming:
• Develop a mathematical function that gives the solution or sub-solutions for the problem.
• Test the principle of optimality.
• Design the recurrence relation in such a way that the solution and sub-solutions are co-related with each other.
• Write down the algorithm to compute the recurrence relation.
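
The outline lists binomial coefficients as a DP example; the steps above map onto it directly, with C(n, k) = C(n−1, k−1) + C(n−1, k) as the recurrence and Pascal's triangle as the table of saved sub-results. A minimal bottom-up sketch (our code):

    def binomial(n, k):
        # row i holds C(i, 0), ..., C(i, i) - one row of Pascal's triangle
        row = [1]
        for i in range(1, n + 1):
            row = [1] + [row[j - 1] + row[j] for j in range(1, i)] + [1]
        return row[k]

    print(binomial(5, 2))   # 10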
Greedy Algorithm vs. Dynamic Programming

Greedy algorithm:
• The best choice is made at each step, and after that the subproblem is solved.
• The choice made by a greedy algorithm may depend on choices so far, but it cannot depend on any future choices or on the solutions to subproblems.
• A greedy strategy usually progresses in a top-down fashion, making one greedy choice after another, reducing each given problem instance to a smaller one.
• Provides a set of feasible solutions (not necessarily optimal).

Dynamic programming:
• A choice is made at each step.
• The choice made at each step usually depends on the solutions to subproblems.
• Dynamic-programming problems are often solved in a bottom-up manner.
• Always provides an optimal solution, using the principle of optimality.
Principle of Optimality
• DP obtains the solution using the principle of optimality.
• It states that in an optimal sequence of decisions, each subsequence must also be optimal.
• It does not apply to every problem we encounter; when it does not hold, the problem cannot be solved by dynamic programming.
• The optimal solution to any instance of the problem is a combination of optimal solutions to some of its subinstances.
• E.g. 1. The shortest route from Mumbai to Pune via Kalyan (subpaths of a shortest path are themselves shortest, so the principle holds).
2. The longest simple route between two cities using a given set of roads, e.g., Mumbai to Pune via Nasik (subpaths of a longest simple path need not be longest, so the principle fails).
Elements of Dynamic Programming
• Optimal substructure
• An optimal solution to a problem is created
from optimal solutions of subproblems.

• Overlapping subproblems
• The optimal solution to a subproblem is reused in solving more than one larger problem.
• The number of subproblems is usually polynomial in the input size.
Applications of Dynamic Programming:

• Optimal Binary Search Trees


• 0/1 Knapsack Problem
• Matrix Chain Multiplication
• Traveling Salesperson Problem
• Binomial coefficient
Optimal Binary Search Tree

Dynamic Programming
Hierarchical data structures: a one-to-many relationship between elements.
Tree: single parent, multiple children.
Binary tree: a tree with 0-2 children per node.

[Figures: a general tree and a binary tree.]

Binary tree properties:
Degenerate: height = O(n) for n nodes; similar to a linear list.
Balanced: height = O(log n) for n nodes; useful for searches.

[Figures: a degenerate binary tree and a balanced binary tree.]
Binary Search Trees
Key property:
For the value at a node,
• smaller values are in the left subtree
• larger values are in the right subtree
Example: with root X, left child Y, and right child Z: X > Y and X < Z.
Binary Search Trees
Examples

[Figure: two binary search trees on the keys {2, 5, 10, 25, 30, 45}, and one non-binary-search tree in which the ordering property is violated.]
Optimal binary search trees
e.g. binary search trees for the keys 3, 7, 9, 12:

[Figure: four of the possible BSTs on the keys 3, 7, 9, 12, labeled (a)-(d).]
Optimal binary search trees

n identifiers: a1 < a2 < a3 < ... < an

Pi, 1 ≤ i ≤ n : the probability that ai is searched.
Qi, 0 ≤ i ≤ n : the probability that x is searched, where ai < x < ai+1.

Σ (i=1 to n) Pi + Σ (i=0 to n) Qi = 1
Identifiers: 4, 5, 8, 10, 11, 12, 14.
Internal node: successful search, Pi.
External node: unsuccessful search, Qi.

[Figure: BST with root 10, left subtree on {4, 5, 8}, right subtree on {11, 12, 14}, and external nodes E0, ..., E7 hanging below the leaves.]

The expected cost of a binary search tree:

Σ (i=1 to n) Pi · level(ai) + Σ (i=0 to n) Qi · (level(Ei) − 1)

The first sum accounts for successful searches, which terminate at internal node ai at level level(ai), giving the stated cost contribution. The second sum accounts for unsuccessful searches, which terminate at an external node Ei, giving the stated cost contribution. The level of the root is 1.
The dynamic programming approach

Let C(i, j) denote the cost of an optimal binary search tree containing ai, ..., aj.
The cost of the optimal binary search tree with ak as its root:

Pk + [Q0 + Σ (i=1 to k−1) (Pi + Qi) + C(1, k−1)] + [Qk + Σ (i=k+1 to n) (Pi + Qi) + C(k+1, n)]

[Figure: tree with root ak; the left subtree holds a1, ..., ak−1 (weights P1, ..., Pk−1 and Q0, ..., Qk−1) with cost C(1, k−1); the right subtree holds ak+1, ..., an (weights Pk+1, ..., Pn and Qk, ..., Qn) with cost C(k+1, n).]


Optimal Binary Search Tree (OBST) : Introduction

• Binary Search Trees (BSTs) are used to search for an identifier or a symbol in a table. There are many practical applications of BSTs. Sometimes we need to construct an Optimal BST (OBST), with weights based on frequency of occurrence, so that searching for an identifier can be done in optimal time. For example, any language compiler or interpreter maintains symbol tables, which are searched to know whether a given identifier is present in the symbol table.

• Example: {a1, a2, a3} = {do, if, while}, assuming equal weights. Find all possible BSTs and their costs.

Find the OBST if the weights are not equal, with (p1, p2, p3) = (0.5, 0.1, 0.05) and (q0, q1, q2, q3) = (0.15, 0.1, 0.05, 0.05).

• For ‘n’ given identifiers and their corresponding weights, there are C(2n, n)/(n+1) different possible trees. We could work out the total cost of each tree and then select the optimal BST, but this is quite exhaustive even for a moderate value of ‘n’.
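
The count C(2n, n)/(n+1) is the n-th Catalan number; a one-line check (our code) confirms that n = 3 keys admit exactly the five trees sketched below:

    from math import comb

    def num_bsts(n):
        return comb(2 * n, n) // (n + 1)

    print(num_bsts(3))   # 5 possible BSTs on {do, if, while}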
[Figure: the five possible BSTs on {do, if, while}.]

[Figure: a tree rooted at ak with left subtree on a1, ..., ak−1 and right subtree on ak+1, ..., aj; its cost is c(i, j) = cost(l) + cost(r) + w(i, j).]
Optimal Binary Search Tree (OBST) : Problem Definition
Problem specifications:
• Let the given set of identifiers be {a1, a2, ..., an} with a1 < a2 < ... < an.
• Let p(i) be the probability with which we search for ai, and let q(i) be the probability that the identifier x being searched for is such that ai < x < ai+1, 0 ≤ i ≤ n.
• Further assume that a0 = −∞ and an+1 = +∞.
• Probability of unsuccessful searches: Σ q[i], 0 ≤ i ≤ n
• Probability of successful searches: Σ p[i], 1 ≤ i ≤ n
• Σ p[i] (1 ≤ i ≤ n) + Σ q[i] (0 ≤ i ≤ n) = 1
• With this data available, the problem is to construct a BST with minimum cost, i.e., an OBST.
Dynamic Programming approach to construct OBST :
• To apply the DP approach for obtaining an OBST, we need to view the construction of such a tree as the result of a sequence of decisions and observe that the principle of optimality holds.

• A possible approach is to decide which of the ai's (with weights pi) should be selected as the root node of the tree.

• If we choose ak as the root node from a1, a2, ..., an (sorted in non-decreasing order), then the internal nodes a1, a2, ..., ak−1 and the external nodes for classes E0, E1, ..., Ek−1 will be in the left subtree l, and the internal nodes ak+1, ak+2, ..., an and the external nodes for classes Ek, Ek+1, ..., En will be in the right subtree r. Let root ak be at level 1.

• Cost(l) = Σ (1 ≤ i < k) pi · level(ai) + Σ (0 ≤ i < k) qi · (level(Ei) − 1)

• Cost(r) = Σ (k < i ≤ n) pi · level(ai) + Σ (k ≤ i ≤ n) qi · (level(Ei) − 1)


Dynamic Programming approach to construct OBST :
• Let w(i, j) = total weight of the BST containing identifiers ai+1, ai+2, ..., aj and qi, qi+1, ..., qj, i.e., the sum of the probabilities of all nodes (actual + fictitious):

w(i, j) = qi + Σ (i < l ≤ j) (q(l) + p(l))
w(i, j) = p(j) + q(j) + w(i, j−1) ... (1)

Cost of the expected BST with root ak:
= p(k) + cost(l) + cost(r) + w(0, k−1) + w(k, n) ... (2)

For the OBST, equation (2) must be minimum, i.e., cost(l) and cost(r) must also be minimum:
cost(l) = c(0, k−1)
cost(r) = c(k, n)

Cost of the OBST:
c(i, j) = min (i < k ≤ j) { c(i, k−1) + c(k, j) + w(i, j) }

with root ak, left subtree on ai+1, ..., ak−1 and right subtree on ak+1, ..., aj.

Dynamic Programming approach to construct OBST :

• k means root must be chosen such that,

p(k)+c(0,k-1)+c(k,n)+w(0,k-1)+w(k,n) is minimum Cost of OBST

c(0,n) = min{c(0,k-1) + c(k,n) + p(k) + w(0,k-1) + w(k,n)}… (3)

1<= k <= n

• We can generalize equation (3) above as,


c(i,j) = min{c(i, k-1) + c(k,j)} + p(k) + w(i, k-1) + w(k, j)}

i< k <= j

c(i, j) = min{c(i, k-1) + c(k, j) + w(i, j)} … (4)

i< k <= j

with the knowledge that,

c(i, i) = 0, w(i, i) = qi & r(i, i) = 0 … (5)


Optimal Binary Search Tree
• Let Tij be a minimum-cost tree for the subset {ai+1, ai+2, ..., aj}.
• Let wij be the weight of Tij, given by the P's and Q's.
• Let cij be the cost of Tij and rij its root; calculate rij and cij in order of increasing j − i, for 0 ≤ i ≤ j ≤ n.
OBST
E.g. Find an OBST using dynamic programming for n = 4 and keys (k1, k2, k3, k4) = (do, if, int, while), given that p(1:4) = (3, 3, 1, 1) and q(0:4) = (2, 3, 1, 1, 1).

Sol:
Step 1: Initialize c(i, i) = 0, r(i, i) = 0 and w(i, i) = q(i), where 0 ≤ i ≤ 4.
Hence w00 = 2, w11 = 3, w22 = 1, w33 = 1, w44 = 1.

Step 2: Compute c(i, j) for j − i = 1, using
w(i, j) = p(j) + q(j) + w(i, j−1)
c(i, j) = min [c(i, a−1) + c(a, j)] + w(i, j), where i < a ≤ j
(it suffices to consider a in the range r(i, j−1) ≤ a ≤ r(i+1, j))
r(i, j) = the value of ‘a’ which minimizes c(i, j)

Step 3: Compute c(i, j) for j − i = 2.
Step 4: Compute c(i, j) for j − i = 3.
Step 5: Compute c(i, j) for j − i = 4.

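A Python sketch of these steps (our code; it fills w, c, r diagonal by diagonal using recurrences (1), (4), and (5), scanning all i < k ≤ j rather than the narrowed range r(i, j−1) ≤ k ≤ r(i+1, j)):

    def obst(p, q):
        # p[1..n] are success probabilities (p[0] is a placeholder),
        # q[0..n] are failure probabilities
        n = len(p) - 1
        w = [[0] * (n + 1) for _ in range(n + 1)]
        c = [[0] * (n + 1) for _ in range(n + 1)]
        r = [[0] * (n + 1) for _ in range(n + 1)]
        for i in range(n + 1):
            w[i][i] = q[i]                 # c(i,i) = 0, r(i,i) = 0
        for d in range(1, n + 1):          # d = j - i
            for i in range(n + 1 - d):
                j = i + d
                w[i][j] = p[j] + q[j] + w[i][j - 1]
                best_k = min(range(i + 1, j + 1),
                             key=lambda k: c[i][k - 1] + c[k][j])
                c[i][j] = c[i][best_k - 1] + c[best_k][j] + w[i][j]
                r[i][j] = best_k
        return w, c, r

    # The instance above: p = (3, 3, 1, 1), q = (2, 3, 1, 1, 1)
    w, c, r = obst([0, 3, 3, 1, 1], [2, 3, 1, 1, 1])
    print(c[0][4], r[0][4])   # 32 2 -> cost 32, root a2 ('if')
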

Dynamic Programming approach to construct OBST :
Example: Construct the OBST for the following instance:
n = 4, (a1, a2, a3, a4) = (do, if, int, while)
(p1, p2, p3, p4) = (3, 3, 1, 1)
(q0, q1, q2, q3, q4) = (2, 3, 1, 1, 1)

Resulting OBST: root a2 (if, p = 3); left child a1 (do, p = 3); right child a3 (int, p = 1), whose right child is a4 (while, p = 1).
Ecost = 18, Icost = 14, Total Cost = 32, Total Weight = 16.
Dynamic Programming approach to construct OBST :

(a1, a2, a3, a4) = (do, if, int, while), (p1, p2, p3, p4) = (3, 3, 1, 1), (q0, ..., q4) = (2, 3, 1, 1, 1)

j−i = 0:  w00=2,  c00=0,  r00=0 | w11=3,  c11=0,  r11=0 | w22=1, c22=0, r22=0 | w33=1, c33=0, r33=0 | w44=1, c44=0, r44=0
j−i = 1:  w01=8,  c01=8,  r01=1 | w12=7,  c12=7,  r12=2 | w23=3, c23=3, r23=3 | w34=3, c34=3, r34=4
j−i = 2:  w02=12, c02=19, r02=1 | w13=9,  c13=12, r13=2 | w24=5, c24=8, r24=3
j−i = 3:  w03=14, c03=25, r03=2 | w14=11, c14=19, r14=2
j−i = 4:  w04=16, c04=32, r04=2
Dynamic Programming approach to construct OBST :

c(i, j) = w(i, j) + min (i < k ≤ j) {c(i, k−1) + c(k, j)}         Intermediate values    Min    k

c(0, 1) = w(0, 1) + [c(0, 0) + c(1, 1)]                            8 + 0                  8      1
c(1, 2) = w(1, 2) + [c(1, 1) + c(2, 2)]                            7 + 0                  7      2
c(2, 3) = w(2, 3) + [c(2, 2) + c(3, 3)]                            3 + 0                  3      3
c(3, 4) = w(3, 4) + [c(3, 3) + c(4, 4)]                            3 + 0                  3      4
c(0, 2) = w(0, 2) + min {c(0,0)+c(1,2), c(0,1)+c(2,2)}             12 + min{7, 8}         19     1
c(1, 3) = w(1, 3) + min {c(1,1)+c(2,3), c(1,2)+c(3,3)}             9 + min{3, 7}          12     2
c(2, 4) = w(2, 4) + min {c(2,2)+c(3,4), c(2,3)+c(4,4)}             5 + min{3, 3}          8      3
Dynamic Programming approach to construct OBST

c(i, j) = w(i, j) + min (i < k ≤ j) {c(i, k−1) + c(k, j)}                         Intermediate values       Min    k

c(0, 3) = w(0, 3) + min {c(0,0)+c(1,3), c(0,1)+c(2,3), c(0,2)+c(3,3)}              14 + min{12, 11, 19}      25     2
c(1, 4) = w(1, 4) + min {c(1,1)+c(2,4), c(1,2)+c(3,4), c(1,3)+c(4,4)}              11 + min{8, 10, 12}       19     2
c(0, 4) = w(0, 4) + min {c(0,0)+c(1,4), c(0,1)+c(2,4), c(0,2)+c(3,4), c(0,3)+c(4,4)}   16 + min{19, 16, 22, 25}  32     2
OBST
To construct the OBST from the r table: the root of the tree for (ai+1, ..., aj) is a_r(i,j); with a = r(i, j), its left subtree is built from r(i, a−1) and its right subtree from r(a, j).

r(0, 4) = 2 → root a2 (if)
r(0, 1) = 1 → left child a1 (do)
r(2, 4) = 3 → right child a3 (int)
r(3, 4) = 4 → right child of a3 is a4 (while)
(r00 = r11 = r22 = r33 = r44 = 0 terminate the recursion.)

Resulting tree: if at the root, do to its left, int to its right, while below int.
OBST
E.g. Find an OBST using dynamic programming for n = 3 and keys {a1, a2, a3} = {do, if, while}, given that p(1:3) = (0.5, 0.1, 0.05) and q(0:3) = (0.15, 0.1, 0.05, 0.05).

j−i = 0:  w00=0.15, c00=0,    r00=0 | w11=0.1,  c11=0,    r11=0 | w22=0.05, c22=0,    r22=0 | w33=0.05, c33=0, r33=0
j−i = 1:  w01=0.75, c01=0.75, r01=1 | w12=0.25, c12=0.25, r12=2 | w23=0.15, c23=0.15, r23=3
j−i = 2:  w02=0.9,  c02=1.15, r02=1 | w13=0.35, c13=0.5,  r13=2
j−i = 3:  w03=1,    c03=1.5,  r03=1
0/1 Knapsack
(Dynamic Programming)
Knapsack Methods
Case 1: Put the object with the maximum profit into the knapsack first.
Case 2: Fill the knapsack slowly, from the minimum weight to the maximum.
Case 3: Keep objects of the same type, but repetition of elements in the knapsack is not allowed.
Case 4: Take a random fraction of each object.
Case 5: Use the profit-to-weight ratio to decide which object to keep in the knapsack.
0/1 Knapsack Problem : Problem Definition
Problem Statement: We are given ‘n’ objects and a knapsack or a bag. Object ‘i’ has a weight wi, 1 ≤ i ≤ n, and the knapsack has a capacity ‘m’ (the maximum weight the knapsack can hold). If object i is placed into the knapsack (xi ∈ {0, 1}), a profit pi·xi is earned. The objective is to obtain a filling of the knapsack that maximizes the total profit earned. Since the knapsack capacity is ‘m’, the total weight of the chosen objects should not exceed ‘m’. Profits and weights are positive numbers. This is a subset-selection problem.

Mathematical model of the 0/1 knapsack problem:

maximize Σ (1 ≤ i ≤ n) pi·xi ........ (1)

subject to Σ (1 ≤ i ≤ n) wi·xi ≤ m ........ (2)

xi ∈ {0, 1}, 1 ≤ i ≤ n ........ (3)

pi ≥ 0, wi ≥ 0 ........ (4)


Worked example: weights = {3, 4, 6, 5}, values (profits) = {2, 3, 1, 4}, total capacity M = 8, number of items = 4. In the table below the items are taken in increasing order of weight, i.e., (pi, wi) = (2, 3), (3, 4), (4, 5), (1, 6); i is the item count and j the capacity.

P(i, j) = max [ P(i−1, j), P(i−1, j − wi) + pi ]

i   pi   wi | j = 0   1   2   3   4   5   6   7   8
1   2    3  |     0   0   0   2   2   2   2   2   2
2   3    4  |     0   0   0   2   3   3   3   5   5
3   4    5  |     0   0   0   2   3   4   4   5   6
4   1    6  |     0   0   0   2   3   4   4   5   6

Traceback: for capacity 7, the best filling uses the objects of weights 4 and 3 (profit 3 + 2 = 5); for capacity 8, it uses the objects of weights 5 (i = 3) and 3 (i = 1), giving profit 4 + 2 = 6.
0/1 Knapsack Problem … contd. :
Consider the following instances of the 0/1 knapsack problem:
1) n = 6, {w1, ..., w6} = {1, 2, 4, 9, 10, 20}, m = 20, {p1, ..., p6} = {4, 20, 8, 36, 70, 80}
2) n = 5, weights {1, 2, 5, 6, 7}, m = 11, profits {1, 6, 18, 22, 28} (Brassard, page 267)
0/1 Knapsack Problem : Solution by DP strategy
0/1 Knapsack Problem : Solution by DP strategy (Recursion)
• Let gi(y) denote the value of an optimal solution to knap(i+1, n, y), where objects i+1 to n are available and y is the remaining capacity of the knapsack.
• Clearly g0(m) is the value of an optimal solution to knap(1, n, m).
• The possible decisions for x1 are 0 or 1. From the principle of optimality it follows that
g0(m) = max { g1(m), g1(m − w1) + p1 }
(the first term for x1 = 0, the second for x1 = 1).
• The above equation can be generalized as
gi(y) = max { gi+1(y), gi+1(y − wi+1) + pi+1 }
(the first term for xi+1 = 0, the second for xi+1 = 1).
• This equation can be used to obtain gn−1(y) from gn(y), and further recursively to obtain the optimal solution g0(m), with the knowledge that gn(y) = 0 for all y ≥ 0 and gn(y) = −∞ for all y < 0.
0/1 Knapsack Problem : Solution by DP strategy (Recursion)
Now we can write a recursive algorithm to solve the 0/1 knapsack problem using the above equations.

Algorithm 01-Knapsack-Rec(i, j, m)
// global arrays w[1:n] and p[1:n] contain the weights and corresponding
// profits for objects 1 to n; j = n throughout, and the initial call is
// 01-Knapsack-Rec(0, n, m), which computes g0(m).
{
    if (i = j) then
    {
        if (m >= 0) then return 0;
        else return -∞;
    }
    else return max(01-Knapsack-Rec(i+1, j, m),
                    01-Knapsack-Rec(i+1, j, m - w[i+1]) + p[i+1]);
}

Example: Solve the following instance using algorithm 01-Knapsack-Rec:
n = 3, (w1, w2, w3) = (2, 3, 4) and m = 6, (p1, p2, p3) = (1, 2, 5)
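
A memoized Python sketch of this recursion (our code); the cache plays the role of the gi(y) table, so each (i, y) state is solved only once:

    from functools import lru_cache

    def knapsack_rec(w, p, m):
        n = len(w)

        @lru_cache(maxsize=None)
        def g(i, y):            # best profit using objects i+1..n with capacity y
            if y < 0:
                return float('-inf')   # infeasible branch
            if i == n:
                return 0
            # either leave object i+1 out or put it in
            return max(g(i + 1, y), g(i + 1, y - w[i]) + p[i])

        return g(0, m)

    # The instance above: n = 3, w = (2, 3, 4), p = (1, 2, 5), m = 6
    print(knapsack_rec((2, 3, 4), (1, 2, 5), 6))   # 6 (objects 1 and 3)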
0/1 Knapsack Problem : Solution by DP strategy (DP)
Algorithm 0-1-knapsack-dp: In this method a table of size [0:n, 0:m] is populated either row-wise or column-wise. In the row-wise method, objects are considered one at a time, from the 1st object to the nth object. The following rule is used:

P[i, j] = max ( P[i−1, j], P[i−1, j − w[i]] + p[i] )
P[0, j] = 0 when j ≥ 0, and P[i, j] = −∞ when j < 0

Example: Solve the following instance using dynamic programming:
n = 3, (w1, w2, w3) = (2, 3, 4) and m = 6, (p1, p2, p3) = (1, 2, 5)

i ↓ (objects 1 to n), j → (weight limit)

                    j = 0   1   2   3   4   5   6
w1 = 2, p1 = 1          0   0   1   1   1   1   1
w2 = 3, p2 = 2          0   0   1   2   2   3   3
w3 = 4, p3 = 5          0   0   1   2   5   5   6
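
The same table can be produced by a short bottom-up routine (our code), filling P row by row exactly as the rule above prescribes:

    def knapsack_table(w, p, m):
        n = len(w)
        P = [[0] * (m + 1) for _ in range(n + 1)]   # P[0][j] = 0 for j >= 0
        for i in range(1, n + 1):
            for j in range(m + 1):
                P[i][j] = P[i - 1][j]               # leave object i out
                if j >= w[i - 1]:                   # or put it in, if it fits
                    P[i][j] = max(P[i][j], P[i - 1][j - w[i - 1]] + p[i - 1])
        return P[n][m]

    print(knapsack_table([2, 3, 4], [1, 2, 5], 6))   # 6, the last cell above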
Knapsack problem
Given some items, pack the knapsack to get the maximum total value. Each item has some weight and some value. The total weight that we can carry is no more than some fixed number W. So we must consider the weights of items as well as their values.

Item #   Weight   Value
1        1        8
2        3        6
3        5        5
Knapsack problem
There are two versions of the problem:
1. “0-1 knapsack problem”: items are indivisible; you either take an item or not. It can be solved with dynamic programming (in pseudo-polynomial time).
2. “Fractional knapsack problem”: items are divisible; you can take any fraction of an item. It is solved with a greedy algorithm.
0-1 Knapsack problem

• Given a knapsack with maximum capacity W, and a set S consisting of n items
• Each item i has some weight wi and benefit value bi (all wi and W are integer values)
• Problem: How to pack the knapsack to achieve the maximum total value of packed items?

0-1 Knapsack problem: a picture

This is a knapsack with max weight W = 20. Items (weight wi, benefit bi): (2, 3), (3, 4), (4, 5), (5, 8), (9, 10).
0-1 Knapsack problem

• The problem, in other words, is to find

max Σ (i ∈ T) bi subject to Σ (i ∈ T) wi ≤ W

• The problem is called a “0-1” problem because each item must be entirely accepted or rejected.
Defining a Subproblem
Item table: (w1, ..., w5) = (2, 4, 5, 3, 9), (b1, ..., b5) = (3, 5, 8, 4, 10); max weight W = 20.

For S4 (items 1-4): total weight 2 + 4 + 5 + 3 = 14, maximum benefit 3 + 5 + 8 + 4 = 20.
For S5 (items 1-5): total weight 20, maximum benefit 26 (items 1, 2, 3, 5).

The solution for S4 is not part of the solution for S5!
Defining a Subproblem (continued)
• As we have seen, the solution for S4 is not part of the solution for S5.
• So our definition of a subproblem is flawed and we need another one!
• Let's add another parameter: w, which will represent the maximum weight for each subset of items.
• The subproblem then will be to compute V[k, w], i.e., to find an optimal solution for Sk = {items labeled 1, 2, ..., k} in a knapsack of size w.
Problem
• Given a knapsack with capacity W, and a set of objects i, 1 ≤ i ≤ n, such that object i has value vi and weight wi.
• Find the maximum value of objects which can be fitted in the knapsack.

Objects (i: wi, vi): 1: (4, 4), 2: (5, 3), 3: (5, 7), 4: (2, 3), 5: (7, 8); knapsack capacity: 13.

Two candidate fillings: objects {1, 3, 4} with weight 4 + 5 + 2 = 11 and value 4 + 7 + 3 = 14; objects {1, 4, 5} with weight 4 + 2 + 7 = 13 and value 4 + 3 + 8 = 15.
0/1 Knapsack: Optimal Substructure
• Let m[k, s] be the maximum value of objects i, for 1 ≤ i ≤ k, that can be fitted in a knapsack with capacity s.

m[k, 0] = 0.
m[0, s] = 0.

If k, s > 0:
m[k, s] = max( m[k−1, s], m[k−1, s − wk] + vk )   if s ≥ wk
m[k, s] = m[k−1, s]                               if s < wk
Multistage Graph
Principle of optimality:
Whatever the initial decision and initial state, the remaining decisions must constitute an optimal decision sequence with respect to the state resulting from the first decision.

• A multistage graph is a directed graph G = (V, E) in which the vertices are partitioned into k ≥ 2 disjoint sets Vi, 1 ≤ i ≤ k.
• If (u, v) is an edge in E, then u ∈ Vi and v ∈ Vi+1 for some i, 1 ≤ i < k.
• The vertex s is the source, and t the sink.
• Let c(i, j) be the cost of edge (i, j).
• The cost of a path from s to t is the sum of the costs of the edges on the path.
• The multistage graph problem is to find a minimum-cost path from s to t.
• Each set Vi defines a stage in the graph, and due to the constraints on E, every path from s to t starts in stage 1, goes to stage 2, then to stage 3, then to stage 4, and so on, and terminates in stage k.
• E.g. a five-stage graph with a minimum-cost s-to-t path indicated by broken edges.
Multistage Graphs

[Figure: sample five-stage graph and a simulation of the algorithm.]

Note: the goal is the cheapest journey from the source to the target in a graph organized in stages.
G = (V, E), with V partitioned into stages V1, ..., Vk.

Forward recurrence: cost(i, j) = min { c(j, l) + cost(i+1, l) }, taken over all vertices l in Vi+1 with (j, l) ∈ E, where cost(i, j) is the cost of a minimum-cost path from vertex j in stage i to the sink t, and cost(k, t) = 0.

[Figures: cost matrix; solution using backward costs.]
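
A Python sketch of the forward recurrence (our code). The instance is a hypothetical 4-vertex, 3-stage graph, since the slide's figure is not reproduced here; because all edges go from one stage to the next, processing vertices in decreasing index order solves the recurrence:

    def multistage_shortest_path(n, edges, s, t):
        # edges: dict u -> list of (v, cost), u always in an earlier stage than v
        INF = float('inf')
        cost = [INF] * (n + 1)
        succ = [None] * (n + 1)
        cost[t] = 0
        for u in range(t - 1, s - 1, -1):      # solve backwards from the sink
            for v, c in edges.get(u, []):
                if c + cost[v] < cost[u]:
                    cost[u] = c + cost[v]      # cost(u) = min over edges (u, v)
                    succ[u] = v
        path, u = [s], s                       # recover the s -> t path
        while u != t:
            u = succ[u]
            path.append(u)
        return cost[s], path

    # Hypothetical instance: 1 -> {2 (cost 2), 3 (cost 5)}, 2 -> 4 (4), 3 -> 4 (1)
    print(multistage_shortest_path(4, {1: [(2, 2), (3, 5)], 2: [(4, 4)], 3: [(4, 1)]}, 1, 4))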
The traveling salesperson
(TSP) problem
e.g. a directed graph on 4 vertices:

[Figure: directed graph on vertices 1-4 with the edge costs given by the matrix below.]

Cost matrix (∞ where no edge exists):

        1    2    3    4
 1      ∞    2   10    5
 2      2    ∞    9    ∞
 3      4    3    ∞    4
 4      6    8    7    ∞
The multistage graph solution
[Figure: the state-space multistage graph of partial tours starting at vertex 1. Stage 2 holds (1,2), (1,3), (1,4); stage 3 holds (1,2,3), (1,2,4), (1,3,2), (1,3,4), (1,4,2), (1,4,3); stage 4 completes each tour; edges carry the corresponding costs from the matrix, with ∞ for missing edges.]

• A multistage graph can describe all possible tours of a directed graph.
• Find the shortest path: (1, 4, 3, 2, 1), with cost 5 + 7 + 3 + 2 = 17.
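
The multistage graph above is exactly the state space of the subset DP for TSP: a state is (set of visited cities, last city). A memoized Python sketch (our code) over the cost matrix from the example:

    from functools import lru_cache

    INF = float('inf')
    C = [[INF, 2, 10, 5],      # the example's cost matrix,
         [2, INF, 9, INF],     # INF where no edge exists
         [4, 3, INF, 4],
         [6, 8, 7, INF]]

    def tsp(C):
        n = len(C)

        @lru_cache(maxsize=None)
        def g(last, visited):               # cheapest completion of the tour
            if visited == (1 << n) - 1:     # all cities seen: return to start
                return C[last][0]
            return min((C[last][j] + g(j, visited | (1 << j))
                        for j in range(n) if not visited & (1 << j)),
                       default=INF)

        return g(0, 1)                      # start at city 1 (index 0)

    print(tsp(C))   # 17, achieved by the tour (1, 4, 3, 2, 1)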
Multistage Graphs
Forward approach and backward approach:
Note that if the recurrence relations are formulated using the forward approach, the relations are solved backwards, i.e., beginning with the last decision.
On the other hand, if the relations are formulated using the backward approach, they are solved forwards.
To solve a problem using dynamic programming:
• Find the recurrence relations.
• Represent the problem by a multistage graph.