Unit 4
Dynamic programming
The dynamic programming approach is similar to divide and conquer in that it breaks the problem down into smaller and smaller sub-problems. But unlike divide and conquer, these sub-problems are not solved independently. Instead, the results of these smaller sub-problems are remembered and reused for similar or overlapping sub-problems.
Dynamic programming algorithms are mostly used for solving optimization problems. Before solving the sub-problem at hand, a dynamic programming algorithm examines the results of previously solved sub-problems. The solutions of the sub-problems are combined to achieve the best (optimal) final solution. This paradigm is therefore said to use a bottom-up approach.
So we can conclude that −
The problem should be divisible into smaller, overlapping sub-problems.
The final optimal solution can be achieved by using the optimal solutions of the smaller sub-problems.
Dynamic algorithms use memoization.
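As a minimal illustration of memoization (a sketch; the function names below are illustrative, not from these notes), consider the Fibonacci series: plain recursion re-solves the same overlapping sub-problems many times, while a memoized version remembers each result and reuses it.

# A sketch of memoization on the Fibonacci series; fib_naive and fib_memo are
# illustrative names, not part of the original notes.
from functools import lru_cache

def fib_naive(n):
    # Plain recursion: overlapping sub-problems are recomputed, roughly O(2^n) calls.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Memoized recursion: each sub-problem is solved once, O(n) distinct calls.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))   # 832040, using only 31 distinct sub-problems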
Steps of Dynamic Programming Approach
A dynamic programming algorithm is designed using the following four steps −
Characterize the structure of an optimal solution.
Recursively define the value of an optimal solution.
Compute the value of an optimal solution, typically in a bottom-up fashion.
Construct an optimal solution from the computed information.
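A minimal bottom-up sketch of these four steps on the Fibonacci series (the function name is illustrative): the value is characterized and defined recursively as F(n) = F(n-1) + F(n-2), the table is filled from the base cases upward, and the answer is read from the last entry.

# Bottom-up (tabulation) sketch of the four steps, using the Fibonacci series.
def fib_bottom_up(n):
    if n < 2:
        return n
    table = [0] * (n + 1)         # table[i] will hold F(i)
    table[1] = 1                  # base cases: F(0) = 0, F(1) = 1
    for i in range(2, n + 1):     # compute values bottom-up from smaller sub-problems
        table[i] = table[i - 1] + table[i - 2]
    return table[n]               # the final value is constructed from the table

print(fib_bottom_up(30))   # 832040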
Examples
Fibonacci number series
Knapsack problem
Tower of Hanoi
All-pairs shortest paths by Floyd-Warshall and single-source shortest paths by Bellman-Ford
Shortest path by Dijkstra
Project scheduling
Matrix Chain Multiplication
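As a hedged sketch of the bottom-up approach on one of the problems listed above, the 0/1 knapsack: the weights, values, and capacity used here are made-up example data, and dp[i][w] denotes the best value achievable using the first i items within capacity w.

# Bottom-up 0/1 knapsack; the input data below is illustrative only.
def knapsack(weights, values, capacity):
    n = len(weights)
    # dp[i][w] = best total value using the first i items with capacity w
    dp = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(capacity + 1):
            dp[i][w] = dp[i - 1][w]                  # option 1: skip item i
            if weights[i - 1] <= w:                  # option 2: take item i if it fits
                dp[i][w] = max(dp[i][w],
                               dp[i - 1][w - weights[i - 1]] + values[i - 1])
    return dp[n][capacity]

print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))   # 9 (take the items of weight 3 and 4)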
Greedy Algorithms
A greedy algorithm, as the name suggests, always makes the choice that seems to be the best
at that moment. This means that it makes a locally-optimal choice in the hope that this choice
will lead to a globally-optimal solution.
Assume that you have an objective function that needs to be optimized (either maximized or minimized) at a given point. A greedy algorithm makes greedy choices at each step to ensure that the objective function is optimized. A greedy algorithm has only one shot to compute the optimal solution: it never goes back and reverses a decision.
Some points to note about greedy algorithms:
1. It is quite easy to come up with a greedy algorithm (or even multiple greedy algorithms) for a problem.
2. Analyzing the running time of greedy algorithms is generally much easier than for other techniques (like divide and conquer). With the divide and conquer technique, it is not immediately clear whether the algorithm is fast or slow, because at each level of recursion the size of the problem gets smaller while the number of sub-problems increases.
3. The difficult part is that for greedy algorithms you have to work much harder to understand correctness issues. Even with a correct algorithm, it is hard to prove why it is correct. Proving that a greedy algorithm is correct is more of an art than a science; it involves a lot of creativity.
Consider a simple greedy problem: you are given the time each of several things takes to complete and a total amount of time T, and you want to complete as many things as possible. In each iteration, you greedily select the thing that takes the minimum amount of time to complete, while maintaining two variables, currentTime and numberOfThings. Sorting the completion times in increasing order (the smallest here are 1, 2, 3 and 4), the calculation proceeds as follows:
After the 1st iteration, currentTime = 1 and numberOfThings = 1.
After the 2nd iteration, currentTime = 1 + 2 = 3 and numberOfThings = 2.
After the 3rd iteration, currentTime = 3 + 3 = 6 and numberOfThings = 3.
After the 4th iteration, currentTime is 6 + 4 = 10, which is greater than T. Therefore, the
answer is 3.
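A hedged sketch of this greedy selection in Python: the function name, the list of completion times, and the value T = 9 are assumptions chosen to match the walkthrough above (any T from 6 to 9 gives the same answer of 3).

# Greedy selection of as many things as possible within time T.
# The times [4, 2, 1, 3] and T = 9 are assumed to match the walkthrough.
def max_things(times, T):
    current_time, number_of_things = 0, 0
    for t in sorted(times):              # always pick the quickest remaining thing
        if current_time + t > T:         # the next thing would exceed the time budget
            break
        current_time += t
        number_of_things += 1
    return number_of_things

print(max_things([4, 2, 1, 3], T=9))   # 3, as in the walkthrough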
Huffman Coding
The method used to construct an optimal prefix code is called Huffman coding. This greedy algorithm builds a tree in a bottom-up manner; we denote this tree by T.
Let |C| be the number of leaves (one leaf per character). Then |C| - 1 merge operations are required to build the tree. Let Q be the min-priority queue, implemented as a binary heap, used while constructing the tree.
Algorithm:
Algorithm Huffman(C)
{
    n = |C|
    Q = C
    for i <- 1 to n - 1 do
    {
        z = new node
        z.left = x = Extract-Min(Q)
        z.right = y = Extract-Min(Q)
        z.freq = x.freq + y.freq
        Insert(Q, z)
    }
    return Extract-Min(Q)
}
Equivalently, the steps to build the Huffman tree are:
1. Create a leaf node for each character and build a min heap of all leaf nodes, keyed on frequency.
2. Extract the two nodes with the minimum frequency from the min heap.
3. Create a new internal node with a frequency equal to the sum of the two nodes' frequencies. Make the first extracted node its left child and the other extracted node its right child. Add this node to the min heap.
4. Repeat steps #2 and #3 until the heap contains only one node. The remaining node is the root node and the tree is complete.
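A runnable sketch of the same procedure in Python, where the heapq module plays the role of the min-priority queue Q; the function and variable names are illustrative, not from these notes, and ties may be broken differently than in the worked example below.

# Huffman tree construction with heapq as the min-priority queue Q.
import heapq
import itertools

def build_huffman_tree(freq):
    # freq: dict mapping character -> frequency. Returns (root frequency, tree).
    tie = itertools.count()              # tie-breaker so heapq never compares trees
    heap = [(f, next(tie), ch) for ch, f in freq.items()]
    heapq.heapify(heap)                  # step 1: min heap of leaf nodes
    while len(heap) > 1:                 # |C| - 1 merges in total
        f1, _, left = heapq.heappop(heap)    # step 2: extract the two minimum nodes
        f2, _, right = heapq.heappop(heap)
        # step 3: new internal node whose frequency is the sum of its children
        heapq.heappush(heap, (f1 + f2, next(tie), (left, right)))
    return heap[0][0], heap[0][2]        # step 4: the remaining node is the root

def assign_codes(tree, prefix=""):
    # Walk the tree, appending '0' for left edges and '1' for right edges.
    if isinstance(tree, str):            # a leaf is just its character
        return {tree: prefix or "0"}
    left, right = tree
    codes = assign_codes(left, prefix + "0")
    codes.update(assign_codes(right, prefix + "1"))
    return codes

freq = {"a": 5, "b": 9, "c": 12, "d": 13, "e": 16, "f": 45}   # frequencies from the example below
root_freq, tree = build_huffman_tree(freq)
print(root_freq)             # 100, the sum of all frequencies
print(assign_codes(tree))    # one optimal prefix code for these frequencies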
Let us understand the algorithm with an example:
Character       Frequency
a               5
b               9
c               12
d               13
e               16
f               45
Step 1: Build a min heap that contains 6 nodes, where each node represents the root of a tree with a single node.
Step 2: Extract the two minimum-frequency nodes from the min heap. Add a new internal node with frequency 5 + 9 = 14.
Illustration of step 2
Now the min heap contains 5 nodes, where 4 nodes are roots of trees with a single element each, and one heap node is the root of a tree with 3 elements.
Character       Frequency
c               12
d               13
Internal Node   14
e               16
f               45
Step 3: Extract the two minimum-frequency nodes from the heap. Add a new internal node with frequency 12 + 13 = 25.
Illustration of step 3
Now the min heap contains 4 nodes, where 2 nodes are roots of trees with a single element each, and two heap nodes are roots of trees with more than one node.
Character       Frequency
Internal Node   14
e               16
Internal Node   25
f               45
Step 4: Extract the two minimum-frequency nodes. Add a new internal node with frequency 14 + 16 = 30.
Illustration of step 4
Now the min heap contains 3 nodes:
Character       Frequency
Internal Node   25
Internal Node   30
f               45
Step 5: Extract the two minimum-frequency nodes. Add a new internal node with frequency 25 + 30 = 55.
Illustration of step 5
Now the min heap contains 2 nodes:
Character       Frequency
f               45
Internal Node   55
Step 6: Extract the two remaining nodes. Add a new internal node with frequency 45 + 55 = 100. The heap now contains only one node, so this node is the root and the Huffman tree is complete.
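As a small check of this walkthrough (a sketch; the frequencies are the ones from step 1), the loop below reproduces the frequency of each internal node in the order it is created.

# Reproduce the merge order from the example: 14, 25, 30, 55, 100.
import heapq

freqs = [5, 9, 12, 13, 16, 45]          # a, b, c, d, e, f
heapq.heapify(freqs)
while len(freqs) > 1:
    merged = heapq.heappop(freqs) + heapq.heappop(freqs)
    print(merged)                        # matches steps 2 through 6
    heapq.heappush(freqs, merged)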