
DYNAMIC PROGRAMMING PROBLEM

Dr G Infant Gabriel
Assistant Professor of Mathematics,
PG & Research Department of Mathematics,
Sri Ramakrishna College of Arts & Science, Coimbatore
Introduction
• Dynamic Programming is a mathematical technique of optimization using a multistage decision process.
• That is, it deals with processes in which a sequence of interrelated decisions has to be made.
• It provides a systematic procedure for determining the combination of decisions which maximizes overall effectiveness.
• The dynamic programming technique decomposes the original problem in n variables into n sub-problems (stages), each in one variable.
• The solution is obtained in an orderly manner by moving from one stage to the next and is completed after the final stage is reached.
• The dynamic programming technique was developed by Richard Bellman in the early 1950s.
Need of dynamic programming
• In many situations, we observe that the decision-making process consists of selecting a combination of plans from a large number of alternative combinations.
• Before making a decision, it is required that
(a) all the decisions of a combination are specified, and
(b) the optimal policy is selected only after all the combinations are evaluated.

Drawbacks
• A lot of computational work and time is involved.
• All combinations may not satisfy the limitations and thus may be infeasible.
• The number of combinations can be very large.
Bellman's principle of optimality
• It states that: "An optimal policy (set of decisions) has the property that, whatever the initial state and initial decisions are, the remaining decisions must constitute an optimal policy for the state resulting from the first decision."
• It implies that, given the initial state of a system, an optimal policy for the subsequent stages does not depend upon the policy adopted at the preceding stages.
• A problem which does not satisfy the principle of optimality cannot be solved by dynamic programming.
Characteristics of dynamic programming problems
• The problem can be divided into stages, with a policy decision required at each stage.
• Each stage has a number of states associated with it. The states are the various possible conditions in which the system may find itself at that stage of the problem. The number of states may be finite or infinite.
• The effect of the policy decision at each stage is to transform the current state into a state associated with the next stage.
• The current situation (state) of the system at a stage is described by a set of variables, called state variables. They are defined to reflect the status of the constraints that bind all stages together.
• Given the current state, an optimal policy for the remaining stages is independent of the policy adopted in previous stages.
Applications of dynamic programming
• In the production area, this technique has been used for production scheduling and employment smoothing problems.
• It has been used to determine inventory levels and to formulate inventory policies.
• It can be applied for allocating scarce resources to different alternative uses, such as allocating salesmen to different sales districts.
• It is used to determine the optimal combination of advertising media (TV, radio, newspapers) and the frequency of advertising.
• It can be applied in replacement theory to determine the age at which equipment should be replaced for optimal return from the facilities.
• It is used for spare-part level determination to guarantee high-efficiency utilisation of expensive equipment.
• Other areas: scheduling methods, Markovian decision models, infinite-stage systems, probabilistic decision problems, etc.
Dynamic programming algorithm
• The solution of a multi-stage problem by dynamic programming involves the following steps (a code sketch of the recursion is given after the steps):
Step 1: Identify the decision variables and specify the objective function to be optimized.
Step 2: Decompose the given problem into a number of smaller sub-problems. Identify the state variables at each stage.
Step 3: Write down a general recursive relationship for computing the optimal policy. Decide whether the forward or the backward method is to be followed to solve the problem.
Step 4: Write the relation giving the optimal decision function for the one-stage sub-problem and solve it.
Step 5: Solve the optimal decision function for the 2-stage, 3-stage, ..., (n-1)-stage and n-stage problems.
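The five steps can be illustrated with a short Python sketch (not part of the slides). It assumes a simple resource-allocation setting in which the state at each stage is a discrete amount of remaining resource and the stage returns are additive; solve_allocation, stage_return and the example return functions are hypothetical placeholders.

from functools import lru_cache

def solve_allocation(n_stages, total, stage_return):
    """Backward recursion f(stage, remaining) =
    max over x in {0, ..., remaining} of stage_return(stage, x) + f(stage + 1, remaining - x).
    The state and the decisions are discretised to integer units for this sketch."""

    @lru_cache(maxsize=None)
    def f(stage, remaining):
        if stage == n_stages - 1:            # last stage: allocate everything that remains
            return stage_return(stage, remaining), (remaining,)
        best_value, best_plan = float("-inf"), None
        for x in range(remaining + 1):       # enumerate the feasible decisions at this stage
            tail_value, tail_plan = f(stage + 1, remaining - x)
            value = stage_return(stage, x) + tail_value
            if value > best_value:
                best_value, best_plan = value, (x,) + tail_plan
        return best_value, best_plan

    return f(0, total)

# Hypothetical usage: 3 stages (say, sales districts) and 10 units of a resource.
returns = [lambda x: 5 * x - x * x, lambda x: 3 * x, lambda x: 4 * x - 0.5 * x * x]
print(solve_allocation(3, 10, lambda stage, x: returns[stage](x)))

The sketch returns the optimal value together with the allocation per stage, mirroring Steps 4 and 5: the one-stage problem is solved first and the multi-stage problems are built from it.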
Problem
Divide a positive quantity c into n parts in such a way that their product is maximum.
(or)
Maximize Z = y1 * y2 * ... * yn
subject to the constraints y1 + y2 + ... + yn = c and yi >= 0, i = 1, 2, ..., n.
Solution
To develop the recursive equation
First we shall develop a recursive equation connecting the optimal decision function for the n-stage
problem with the optimal decision function for the (n - 1) stage sub-problem.
Let yi be the value of the i-th part of c, which satisfies y1 + y2 + ... + yn = c, and each yi may be regarded as a stage. Since yi may assume any non-negative value, the alternatives at each stage are infinite; that is, yi is continuous.
Hence the optimal decisions at each stage are obtained by the usual classical method (differentiation).
Let fn(c) be the maximum attainable product y1 * y2 * ... * yn when c is divided into n parts y1, y2, ..., yn. Thus fn(c) is a function of both n and c.

For n = 1 (one-stage problem)

If c is divided into one part only, then y1 = c, so

f1(c) = c (trivial case) ...(1)

For n = 2 (two-stage problem)

Here c is divided into two parts y1 = x and y2 = c - x such that y1 + y2 = c. Then

f2(c) = Max 0<=x<=c {y1 * y2} = Max 0<=x<=c {x (c - x)} = Max 0<=x<=c {x f1(c - x)} (since f1(c) = c) ...(2)

For n = 3 (three-stage problem)

Similarly, f3(c) = Max 0<=x<=c {x f2(c - x)}.

In general, the recursive equation for the n-stage problem is

fn(c) = Max 0<=x<=c {x fn-1(c - x)} ...(3)
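As a numerical illustration (not part of the slides), recursion (3) can be evaluated by searching x over a uniform grid; with a reasonably fine grid the computed f2(c) and f3(c) agree with the closed forms (c/2)^2 and (c/3)^3 obtained below. The grid size of 400 steps is an arbitrary choice for this sketch.

def f(n, c, steps=400):
    """f_n(c) = max over 0 <= x <= c of x * f_{n-1}(c - x), with f_1(c) = c,
    evaluated approximately by searching x over a uniform grid."""
    if n == 1:
        return c
    grid = [c * i / steps for i in range(steps + 1)]
    return max(x * f(n - 1, c - x, steps) for x in grid)

c = 6.0
print(f(2, c), (c / 2) ** 2)   # approximately 9.0 vs 9.0
print(f(3, c), (c / 3) ** 3)   # approximately 8.0 vs 8.0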

To solve the recursive equation

For n = 2, equation (3) becomes
f2(c) = Max 0<=x<=c {x f1(c - x)} = Max 0<=x<=c {x (c - x)}

The function x(c - x) will be maximum if
d/dx (x(c - x)) = 0
=> x(-1) + (c - x) = 0
=> -x + c - x = 0
=> c - 2x = 0
=> x = c/2

Hence f2(c) = (c/2)(c - c/2) = (c/2)^2, and the optimal policy for n = 2 is (c/2, c/2).
For n = 3, equation (3) becomes
f3(c) = Max 0<=x<=c {x f2(c - x)} = Max 0<=x<=c {x ((c - x)/2)^2}

The function x ((c - x)/2)^2 attains its maximum at x = c/3, so
f3(c) = (c/3) ((c - c/3)/2)^2 = (c/3)(c/3)^2 = (c/3)^3

The optimal policy for n = 3 is (c/3, c/3, c/3), with f3(c) = (c/3)^3.
Let us assume that the optimal policy for n = m is (c/m, c/m, ..., c/m), with fm(c) = (c/m)^m.

For n = m + 1, equation (3) becomes
fm+1(c) = Max 0<=x<=c {x fm(c - x)} = Max 0<=x<=c {x ((c - x)/m)^m}

Differentiating and equating to zero gives the maximum at x = c/(m+1), so
fm+1(c) = (c/(m+1)) ((c - c/(m+1))/m)^m = (c/(m+1))^(m+1)

Hence, by mathematical induction, the optimal policy is (c/n, c/n, ..., c/n) with the maximum product fn(c) = (c/n)^n.
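The induction step can also be checked symbolically. The sketch below (not part of the slides) assumes the SymPy library and verifies one instance, m = 4: differentiating x ((c - x)/m)^m gives the interior critical point x = c/5, at which the value is (c/5)^5.

import sympy as sp

x, c = sp.symbols("x c", positive=True)
m = 4                                    # one instance of the induction step
expr = x * ((c - x) / m) ** m            # x * fm(c - x) with fm(c) = (c/m)**m

critical = sp.solve(sp.diff(expr, x), x)         # critical points; expect c/5 and c
for p in critical:
    print(p, sp.simplify(expr.subs(x, p)))       # x = c/5 gives c**5/3125 = (c/5)**5; x = c gives 0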


Thank You
