DAA Unit 5
Algorithms
(CSC-314)
B.Sc. CSIT
Unit-5: Dynamic Programming
Introduction:
●
Dynamic programming is an optimization method developed by Richard
Bellman in the 1950s.
●
Dynamic programming is used to solve multistage optimization problems;
here "dynamic" refers to time and "programming" means planning or tabulation.
●
The dynamic programming approach consists of three steps for solving a
problem:
– The given problem is divided into subproblems, as in the divide-and-conquer
approach. However, dynamic programming is used when the subproblems are not
independent of each other but are interrelated; such subproblems are called
overlapping subproblems.
– To avoid recomputing overlapping subproblems, a table is created: whenever a
subproblem is solved, its solution is stored in the table so that it can be
reused in the future.
– The solutions of the subproblems are combined in a bottom-up manner to obtain
the optimal solution of the given problem.
●
Let’s take the example of the Fibonacci numbers. As we
all know, Fibonacci numbers are a series of numbers in
which each number is the sum of the two preceding
numbers. The first few Fibonacci numbers are 0, 1, 1, 2,
3, 5, and 8, and they continue on from there.
●
As we can clearly see here, to solve the overall problem (i.e.
Fib(n)), we broke it down into two smaller subproblems (which
are Fib(n-1) and Fib(n-2)). This shows that we can use DP to
solve this problem.
●
If we write a simple recursive solution for Fibonacci numbers, we get
exponential time complexity; if we optimize it by storing the solutions
of subproblems, the time complexity reduces to linear.
●
We can clearly see the overlapping-subproblem pattern here: in the
recursion tree of fib(4), fib(2) is evaluated twice and fib(1) three
times.
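The overlap described above can be reproduced with a short Python sketch: a naive recursive Fibonacci, plus a call counter that makes the repeated evaluations visible (function names are illustrative).

```python
# Naive recursive Fibonacci: re-solves the same subproblems over and
# over, so the running time grows exponentially in n.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Counting calls makes the overlapping subproblems visible.
calls = {}

def fib_counted(n):
    calls[n] = calls.get(n, 0) + 1
    if n < 2:
        return n
    return fib_counted(n - 1) + fib_counted(n - 2)

fib_counted(4)
# calls[2] → 2 and calls[1] → 3: fib(2) is evaluated twice and
# fib(1) three times, exactly the overlap described above.
```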
●
The optimal cost is 1344, and the optimal substructure (parenthesization) is:
(M1M2M3M4M5)
(M1M2) (M3M4M5)
(M3M4) (M5)
i.e., ((M1M2)((M3M4)M5)).
●
Example: We are given the dimension sequence {3, 4, 5, 2, 3}. The matrices
M1, M2, M3, M4 have sizes 3×4, 4×5, 5×2, and 2×3 respectively. Compute the
optimal sequence for the multiplication.
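A minimal tabulation sketch of the matrix-chain recurrence m[i][j] = min over k of m[i][k] + m[k+1][j] + p[i−1]·p[k]·p[j], applied to the dimension sequence from this example (function and variable names are illustrative):

```python
def matrix_chain_order(p):
    # Matrix M_i has dimensions p[i-1] x p[i]; there are n matrices.
    n = len(p) - 1
    # m[i][j]: minimum scalar multiplications to compute M_i..M_j (1-indexed).
    m = [[0] * (n + 1) for _ in range(n + 1)]
    s = [[0] * (n + 1) for _ in range(n + 1)]   # best split points
    for length in range(2, n + 1):              # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = float('inf')
            for k in range(i, j):               # split into M_i..M_k and M_{k+1}..M_j
                cost = m[i][k] + m[k + 1][j] + p[i - 1] * p[k] * p[j]
                if cost < m[i][j]:
                    m[i][j] = cost
                    s[i][j] = k
    return m, s

def parenthesize(s, i, j):
    # Recover the optimal parenthesization from the split table.
    if i == j:
        return f"M{i}"
    k = s[i][j]
    return "(" + parenthesize(s, i, k) + parenthesize(s, k + 1, j) + ")"

m, s = matrix_chain_order([3, 4, 5, 2, 3])
# m[1][4] → 82 scalar multiplications, via ((M1(M2M3))M4)
```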
In the 0/1 Knapsack Problem,
Statement: A thief has a bag or knapsack that can hold a maximum weight W of his
loot. There are n items; the weight of the ith item is wi and its value is vi. Each
item is either taken whole or left behind, i.e. xi is 0 or 1. The objective is to
collect the items that maximize the total profit earned.
●
Consider-
– Knapsack weight capacity = w
– Number of items each having some weight and value = n
– 0/1 knapsack problem is solved using dynamic programming in the following steps-
●
Find the optimal solution for the 0/1 knapsack problem making use of the dynamic programming approach.
Consider-
– n = 4
– w = 5 kg
Solution-
●
The remaining table entries T(3, 1), …, T(3, 5) and T(4, 1), …, T(4, 5) are filled in the same way.
●
Analysis:
– Examining the above algorithm, the overall running time is O(nW), where n is the number of items and W is the knapsack capacity.
●
Example
●
Consider a problem instance with 7 items, where v[ ] = {2, 3, 3, 4, 4, 5, 7}, w[ ] = {3, 5, 7, 4, 3, 9, 2}, and W = 9.
Find the maximum profit earned using the 0/1 knapsack approach.
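The instance above can be checked with a tabulation sketch of the standard 0/1 knapsack recurrence T[i][j] = max(T[i−1][j], T[i−1][j−wi] + vi); names here are illustrative:

```python
def knapsack_01(weights, values, W):
    # T[i][j]: best value achievable using the first i items
    # with remaining capacity j.
    n = len(weights)
    T = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            T[i][j] = T[i - 1][j]                 # item i left out (x_i = 0)
            if weights[i - 1] <= j:               # item i taken (x_i = 1)
                T[i][j] = max(T[i][j],
                              T[i - 1][j - weights[i - 1]] + values[i - 1])
    return T[n][W]

profit = knapsack_01([3, 5, 7, 4, 3, 9, 2], [2, 3, 3, 4, 4, 5, 7], 9)
# profit → 15 (items of weight 2, 3, and 4 with values 7, 4, and 4)
```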
●
Consider a problem with the following weights and profits:
Weights: {3, 4, 6, 5}
Profits: {2, 3, 1, 4}
The weight capacity of the knapsack is 8 kg and the number of items is 4. Find the
maximum profit earned using the 0/1 knapsack approach.
●
The longest common subsequence (LCS) is defined as the longest subsequence
that is common to all the given sequences, provided that the elements of the
subsequence are not required to occupy consecutive positions within the original
sequences.
●
If S1 and S2 are the two given sequences, then
– Z is a common subsequence of S1 and S2
– if Z is a subsequence of both S1 and S2.
– Furthermore, the elements of Z must be taken at strictly increasing index
positions in both S1 and S2.
●
In a strictly increasing sequence, the indices of the elements chosen from the
original sequences must be in ascending order in Z.
●
If
– S1 = {B, C, D, A, A, C, D}
●
Then, {A, D, B} cannot be a subsequence of S1, as the order of the elements is
not preserved (i.e., the indices are not strictly increasing).
●
Let us understand LCS with an example.
●
If
– S1 = {B, C, D, A, A, C, D}
– S2 = {A, C, D, B, A, C}
●
Then, common subsequences are {B, C}, {C, D, A, C}, {D, A, C}, {A, A, C}, {A, C},
{C, D}, ...
●
Among these subsequences, {C, D, A, C} is the longest common subsequence.
We are going to find this longest common subsequence using dynamic
programming.
●
Let us take two sequences:
●
The following steps are followed for finding the longest common subsequence.
●
Step 1: Create a table of dimension (n+1) × (m+1), where n and m are the lengths of X
and Y respectively. The first row and the first column are filled with zeros.
●
Thus, the longest common subsequence is (C,A).
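The table-filling step, together with the usual traceback that recovers one LCS, can be sketched as follows, applied to the S1/S2 example above (the function name is illustrative):

```python
def lcs(X, Y):
    n, m = len(X), len(Y)
    # C[i][j]: length of an LCS of X[:i] and Y[:j].
    # Row 0 and column 0 stay zero (empty prefix), as in Step 1.
    C = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if X[i - 1] == Y[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1
            else:
                C[i][j] = max(C[i - 1][j], C[i][j - 1])
    # Walk back through the table to recover one LCS.
    out = []
    i, j = n, m
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif C[i - 1][j] >= C[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

result = lcs("BCDAACD", "ACDBAC")
# result → "CDAC", the longest common subsequence {C, D, A, C}
```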
The Floyd Warshall Algorithm is for solving the All Pairs Shortest
Path problem.
●
The problem is to find shortest distances between every pair of
vertices in a given edge weighted directed Graph.
●
– Each cell A[i][j] is filled with the distance from the ith vertex to the
jth vertex. If there is no path from ith vertex to jth vertex, the cell is
left as infinity.
For example, for A1[2, 4]: the direct distance from vertex 2 to 4 is 4, and the
distance from vertex 2 to 4 through vertex 1 (i.e., from vertex 2 to 1 plus from
vertex 1 to 4) is 7. Since 4 < 7, A1[2, 4] is filled with 4.
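The update rule above — keep the smaller of the current distance and the distance through the newly allowed intermediate vertex — can be sketched as follows. The example graph is a hypothetical 4-vertex digraph, not the figure from the slides:

```python
INF = float('inf')

def floyd_warshall(A):
    # A is the adjacency matrix: A[i][j] is the weight of edge i -> j,
    # INF if there is no edge, 0 on the diagonal.
    n = len(A)
    dist = [row[:] for row in A]
    # After iteration k, dist[i][j] is the shortest i -> j distance using
    # only intermediate vertices from {0, ..., k}.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

# Hypothetical example graph (illustrative, not the slide's figure):
A = [
    [0,   5,   INF, 10],
    [INF, 0,   3,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]
dist = floyd_warshall(A)
# dist[0][3] → 9, via the path 0 -> 1 -> 2 -> 3 (5 + 3 + 1 < 10)
```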
Concepts of Memoization:
●
In computing, memoization or memoisation is an optimization
technique used primarily to speed up computer programs by
storing the results of expensive function calls and returning the
cached result when the same inputs occur again.
●
Most Dynamic Programming problems are solved in one of two ways:
● Tabulation: Bottom Up — iteratively fill the table from the smallest subproblems upward.
● Memoization: Top Down — recurse as in the plain definition, caching each result.
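Both styles can be illustrated on Fibonacci: the memoized version recurses top-down and caches results (here via the standard library's `functools.lru_cache`), while the tabulated version fills its table bottom-up.

```python
from functools import lru_cache

# Top-down (memoization): recurse as in the plain definition, but cache
# each result so every subproblem is solved only once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill a table from the smallest subproblems up.
def fib_tab(n):
    table = [0, 1] + [0] * max(0, n - 1)
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Either way, each of the n subproblems is solved once, giving the linear time mentioned earlier instead of the exponential time of the naive recursion.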