Operations Research PDF
Course Description:
Operations Research is an important area of study that traces its roots to business
applications. It combines the three broad disciplines of Mathematics, Computer Science, and
Business Applications. This course formally develops the ideas of building, analyzing, and
validating mathematical models for decision problems, and of solving them systematically. The
course involves both programming and mathematical analysis.
Course Objectives:
Upon completion of this course, the students will be able to:
- Solve business problems by applying computer programming and mathematical analysis.
- Develop, analyze, and validate mathematical models for decision problems and solve them
systematically.
- Understand the main concepts of OR.
Unit-I
Introduction
The term operations research was first coined in 1940 by McClosky and Trefthen in the small
town of Bowdsey in the United Kingdom. This new science came into existence in a military context.
During World War II, military management called on scientists from various disciplines and
organized them into teams to assist in solving strategic and tactical problems, relating to air and
land defense of the country. Their mission was to formulate specific proposals and plans for
aiding the Military commands to arrive at decisions on optimal utilization of scarce military
resources and efforts and also to implement the decisions effectively. This new approach to the
systematic and scientific study of the operations of the system was called Operations Research or
operational research. Hence OR can be associated with "an art of winning the war without
actually fighting it."
Advantages of a Model
There are certain significant advantages gained when using a model; these are:
(i) Problems under consideration become controllable through a model.
(ii) It provides a logical and systematic approach to the problem.
(iii) It provides the limitations and scope of an activity.
(iv) It helps in finding useful tools that eliminate duplication of methods applied to solve
problems.
(v) It helps in finding solutions for research and improvements in a system.
(vi) It provides an economic description and explanation of either the operation, or the systems
they represent.
Models by Structure
Mathematical models are most abstract in nature. They employ a set of mathematical symbols to
represent the components of the real system. These variables are related together by means of
mathematical equations to describe the behavior of the system. The solution of the problem is
then obtained by applying well-developed mathematical techniques to the model.
BASIC OR CONCEPTS
"OR is the representation of real-world systems by mathematical models together with the use of
quantitative methods (algorithms) for solving such models, with a view to optimizing."
We can also define a mathematical model as consisting of:
Decision variables, which are the unknowns to be determined by the solution to the
model.
Constraints to represent the physical limitations of the system
An objective function
An optimal solution to the model is the identification of a set of variable values which are
feasible (satisfy all the constraints) and which lead to the optimal value of the objective
function.
An optimization model seeks to find values of the decision variables that optimize (maximize or
minimize) an objective function among the set of all values for the decision variables that satisfy
the given constraints.
TERMINOLOGY
Solution: The set of values of the decision variables xj (j = 1, 2, ..., n) which satisfies the constraints
is said to constitute a solution to the problem.
Feasible Solution: The set of values of the decision variables xj (j = 1, 2, ..., n) which satisfies all
the constraints and the non-negativity conditions of a linear programming problem simultaneously
is said to constitute a feasible solution to that problem.
Infeasible Solution: The set of values of the decision variables xj (j = 1, 2, ..., n) which does not
satisfy all the constraints and non-negativity conditions of the problem is said to constitute an
infeasible solution to that linear programming problem.
Basic Solution: For a set of m simultaneous equations in n variables (n > m), a solution obtained
by setting (n – m) variables equal to zero and solving for the remaining m variables is called a
'basic solution'.
The variables which are set to zero are known as non-basic variables and the remaining m
variables which appear in this solution are known as basic variables.
Basic Feasible Solution: A feasible solution to an LP problem which is also a basic solution is
called a 'basic feasible solution'. Basic feasible solutions are of two types:
(a) Degenerate: A basic feasible solution is called degenerate if the value of at least one basic
variable is zero.
(b) Non-degenerate: A basic feasible solution is called 'non-degenerate' if all m basic
variables are non-zero and positive.
Optimum Basic Feasible Solution: A basic feasible solution which optimizes (maximizes or
minimizes) the objective function value of the given LP problem is called an 'optimum basic
feasible solution'.
Unbounded Solution: If the value of the objective function of the LP problem can be increased
(or decreased) indefinitely without violating the constraints, the solution is called an 'unbounded
solution'.
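As a small illustration of these definitions (not from the original notes), the sketch below enumerates all basic solutions of a hypothetical system of m = 2 equations in n = 4 variables by setting n – m variables to zero and solving for the remaining ones, and then flags which of them are feasible and which are degenerate:

```python
# Enumerate basic solutions of an assumed system A x = b (m = 2, n = 4).
from itertools import combinations
import numpy as np

A = np.array([[1.0, 1.0, 1.0, 0.0],     # illustrative coefficients (assumed data)
              [2.0, 1.0, 0.0, 1.0]])
b = np.array([450.0, 600.0])
m, n = A.shape

for basis in combinations(range(n), m):            # choose the m basic variables
    B = A[:, basis]
    if abs(np.linalg.det(B)) < 1e-9:               # singular basis, no basic solution
        continue
    x = np.zeros(n)
    x[list(basis)] = np.linalg.solve(B, b)         # non-basic variables stay at zero
    feasible = bool(np.all(x >= -1e-9))
    degenerate = feasible and bool(np.any(np.isclose(x[list(basis)], 0.0)))
    print(basis, np.round(x, 1),
          "feasible" if feasible else "infeasible",
          "(degenerate)" if degenerate else "")
```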
Introduction of Linear Programming
Formulation of LP Problems
The procedure for mathematical formulation of a LPP consists of the following steps:
Step 1 To write down the decision variables of the problem.
Step 2 To formulate the objective function to be optimized (Maximized or Minimized) as a linear
function of the decision variables.
Step 3 To formulate the other conditions of the problem, such as resource limitations, market
constraints, and interrelations between variables, as linear inequations or equations in terms of
the decision variables.
Step 4 To add the non-negativity constraints, since negative values of the decision variables do
not have any valid physical interpretation.
The objective function, the set of constraints and the non-negativity constraints together form a
linear programming problem. In general form:
Optimize (Maximize or Minimize) Z = c1x1 + c2x2 + ... + cnxn
subject to
a11x1 + a12x2 + ... + a1nxn (≤, = or ≥) b1
a21x1 + a22x2 + ... + a2nxn (≤, = or ≥) b2
...
am1x1 + am2x2 + ... + amnxn (≤, = or ≥) bm
where each constraint may be in the form of an inequality (≤ or ≥) or an equation (=), and the
variables finally satisfy the non-negativity restrictions
x1 ≥ 0, x2 ≥ 0, x3 ≥ 0, ..., xn ≥ 0.
Ex: A manufacturer produces two types of models M1 and M2. Each model of the type M1
requires 4 hrs of grinding and 2 hours of polishing; whereas each model of the type M2 requires
2 hours of grinding and 5 hours of polishing. The manufacturer has 2 grinders and 3 polishers.
Each grinder works for 40 hours a week and each polisher works for 60 hours a week. The profit on
model M1 is Rs. 3.00 and on model M2 it is Rs. 4.00. Whatever is produced in a week is sold in the
market. How should the manufacturer allocate his production capacity to the two types of
models, so that he may make the maximum profit in a week?
Solution:
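The worked solution is not reproduced in these notes. As a sketch (not part of the original material), let x1 and x2 be the numbers of models M1 and M2 produced per week; the profit Z = 3x1 + 4x2 is to be maximized subject to the grinding capacity 4x1 + 2x2 ≤ 2 × 40 = 80 hours and the polishing capacity 2x1 + 5x2 ≤ 3 × 60 = 180 hours, with x1, x2 ≥ 0. Assuming SciPy is available, the model can be solved numerically:

```python
# Sketch of the formulation above (illustrative, not from the notes).
from scipy.optimize import linprog

# Decision variables: x1, x2 = weekly production of models M1 and M2.
c = [-3.0, -4.0]              # linprog minimizes, so negate the profit 3x1 + 4x2
A_ub = [[4.0, 2.0],           # grinding hours:  4x1 + 2x2 <= 2 grinders * 40 h = 80
        [2.0, 5.0]]           # polishing hours: 2x1 + 5x2 <= 3 polishers * 60 h = 180
b_ub = [80.0, 180.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)        # LP optimum: x1 = 2.5, x2 = 35, maximum profit = 147.5
```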
GRAPHICAL METHOD:
If there are two variables in an LP problem, it can be solved by graphical method. Let the
two variables be x1 and x2. The variable x1 is represented on x-axis and x2 on y-axis. Due to non-
negativity condition, the variables x1 and x2 can assume only non-negative values; hence, if a
solution exists for the problem at all, it lies in the first quadrant. The steps used in the graphical method
are summarized as follows:
Step 1: Replace the inequality sign in each constraint by an equal to sign.
Step 2: Represent the first constraint equation by a straight line on the graph. Any point on this
line satisfies the first constraint equation. If the constraint inequality is of '≤' type, then the area
(region) below this line satisfies this constraint. If the constraint inequality is of '≥' type, then the
area (region) above the line satisfies this constraint.
Step 3: Repeat step 2 for all given constraints.
Step 4: Shade the common portion of the graph that satisfies all the constraints simultaneously
drawn so far. This shaded area is called the 'feasible region' (or solution space) of the given LP
problem. Any point inside this region is called a feasible solution and provides values of x1 and x2
that satisfy all constraints.
Step 5: Find the optimal solution for the given LP problem. The optimal solution may be
determined using the following methods.
Extreme Point Method: In this method, the coordinates of each extreme point are substituted
into the objective function equation; the extreme point that gives the optimum value of Z
(maximum or minimum) is the optimal solution.
Iso-Profit (Cost) Function Line Method: In this method the objective function is represented
by a straight line for an arbitrary value of Z. Generally the objective function line is represented by a
dotted line to distinguish it from the constraint lines. If the objective is to maximize the value of Z,
then move the objective function line parallel to itself until it touches the farthest extreme point of
the feasible region. This farthest extreme point gives the optimum solution. If the objective is to
minimize the value of Z, then move the objective function line parallel to itself until it touches the
nearest extreme point. This nearest extreme point gives the optimal solution.
Solution:
Represent x1, on x-axis and x2 on y-axis. Represent the constraints on x and y axis to an
appropriate scale as follows:
Replace the inequality sign of the first constraint by an equality sign, the first constraint
becomes
x1 + x2 = 450
This can be represented by a straight line.
If x1 = 0, then x2 = 450
If x2 = 0, then x1 = 450
The straight line of the first constraint equation passes through coordinates (0, 450) and
(450, 0) as shown in Fig.
Any point lying on this line satisfies the constraint equation. Since the constraint is an inequality
of '≤' type, to satisfy this constraint inequality the solution must lie towards the left of the line.
Hence mark arrows at the ends of this line to indicate the side on which the solution lies.
Replace the inequality sign of the second constraint by an equality sign, then the constraint
equation is
2 x1 + x2 = 600
This constraint equation can be represented by straight line
If x1 = 0 then x2 = 600
If x2 = 0 then x1 = 300
The line passes through the co-ordinates (0, 600) and (300, 0) as shown in Fig. Any point lying on
this straight line satisfies the constraint equation. Since the constraint is an inequality of '≤' type,
the solution should lie towards the left of the line to satisfy this constraint inequality. The shaded
area shown in the figure satisfies both the constraints as well as the non-negativity condition. This
shaded area is called the 'solution space' or the region of feasible solutions.
To Find Optimal Solution
Extreme Point Method: Name the corners or extreme points of the solution space. The
solution space is OABCO. Find the coordinates of each extreme point, substitute them into the
objective function equation and compute the value of Z at each point.
Extreme point 'C' gives the maximum value of Z. Hence, the solution is x1 = 0, x2 = 450 and
maximum Z = 1800.
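The table of Z-values referred to above is not reproduced in these notes. As a sketch, the extreme points can be evaluated directly; the constraints x1 + x2 ≤ 450 and 2x1 + x2 ≤ 600 are taken from the lines plotted above, and the objective Z = 3x1 + 4x2 is assumed here (it is consistent with the reported optimum of 1800 at (0, 450)):

```python
# Extreme point method for the example above (objective function assumed, see note).
corners = {
    "O": (0, 0),
    "A": (300, 0),     # 2x1 + x2 = 600 meets the x1-axis
    "B": (150, 300),   # intersection of x1 + x2 = 450 and 2x1 + x2 = 600
    "C": (0, 450),     # x1 + x2 = 450 meets the x2-axis
}

def Z(x1, x2):
    return 3 * x1 + 4 * x2   # assumed objective function

for name, (x1, x2) in corners.items():
    print(name, (x1, x2), Z(x1, x2))
# C gives the largest value, Z = 1800, matching the stated optimum.
```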
Unbounded Solution:
Some LP problems may not have a finite solution. If the values of one or more decision variables
and the value of the objective function are permitted to increase indefinitely without violating the
feasibility condition, then the solution is said to be unbounded. It is to be noted that the solution can
be unbounded for a maximization type of objective function; in a minimization type of objective
function, the lower boundary is formed by the non-negativity condition on the decision variables.
Solution: x1 – x2 = 1
If x1 = 0, x2 = –1
If x2 = 0, x1 = 1
x1 – x2 = 1 passes through (0, –1) (1, 0)
x1 + x2 = 3
If x1 = 0, x2 = 3
If x2 = 0, x1 = 3
x1 + x2 = 3 passes through (0, 3) (3, 0)
4.5x1 + 3 x2≥ 3
If x1 = 0, x2 = 3
If x2 = 0, x1 = 2
4.5x1 + 3 x2 ≥ 3 passes through (0, 3) (2, 0)
The solution space is in Fig.
Assume Z = 6 then 6 = 3 x1 + 2 x2 passes through (0, 3) (2, 0).
Move the objective function line till it touches the farthest extreme point of solution space. Since
there is no closing boundary for the solution space, the dotted line can be moved to infinity. That
is Z will be maximum at infinite values of x1 and x2. Hence the solution is unbounded.
Note: In an unbounded solution, it is not necessary that all the variables can be made arbitrarily
large as Z approaches infinity. In the above problem, if the second constraint is replaced by x1 ≤ 2,
then only x2 can approach infinity while x1 cannot be more than two.
Infeasible Solution:
Infeasibility is a condition in which the constraints are inconsistent (mutually exclusive), i.e., no
values of the variables satisfy all of the constraints simultaneously, so there is no common feasible
region. It should be noted that infeasibility depends solely on the constraints and has nothing to do
with the objective function.
Ex:
Maximize Z = 3 x1 – 2 x2
Subject to x1 + x2 ≤ 1
2 x1 + 2 x2 ≥ 4
x1, x2 ≥ 0
Solution: x1 + x2 = 1 passes through (0, 1) (1, 0)
2 x1 + 2 x2 = 4 passes through (0, 2) (2, 0)
To satisfy the first constraint the solution must lie to the left of line AB. To satisfy the second
constraint the solution must lie to the right of line CD. There is no point (x1, x2) which satisfies
both the constraints simultaneously. Hence, the problem has no solution because the constraints
are inconsistent.
Note: The geometric (graphical) method of solving linear programming problems was presented
above. The graphical method is useful only for problems involving two decision variables and
relatively few problem constraints.
What happens when we need more decision variables and more problem constraints?
We use an algebraic method called the simplex method, which was developed by George B.
Dantzig (1914-2005) in 1947 while on assignment with the U.S. Department of the Air Force.
Simplex Method
Most real-world linear programming problems have more than two variables and thus are too
complex for graphical solution. A procedure called the simplex method may be used to find the
optimal solution to multivariable problems. The simplex method is actually an algorithm (or a set
of instructions) with which we examine corner points in a methodical fashion until we arrive at
the best solution: highest profit or lowest cost. Computer programs and spreadsheets are
available to handle the simplex calculations for you. But you need to know what is involved
behind the scenes in order to best understand their valuable outputs.
To find the pivot column, locate the most negative entry to the left of the vertical line (here this is −4).
To find the pivot row, divide each entry in the constant column by the corresponding entry
in the pivot column. In this case, we get 4/1 as the ratio for the first row and 5/1 as the ratio in
the second row. The pivot row is the row corresponding to the smallest ratio, in this case 4. So
our pivot element is the entry in the second column, first row, which is 1.
Now we perform the following row operations to convert the pivot column into a unit column.
To improve this solution, we determine that x2 is the entering variable, because −202 is
the smallest (most negative) entry in the bottom row.
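The tableaus referred to in this passage are not reproduced in these notes. As a generic sketch (not the notes' own example), one simplex pivot on a tableau stored as a NumPy array works as follows: pick the most negative entry in the objective row, apply the minimum ratio test to choose the pivot row, and then use row operations to turn the pivot column into a unit column.

```python
# A minimal sketch of a single simplex pivot (assumed tableau layout:
# last row = objective row, last column = right-hand-side constants).
import numpy as np

def simplex_pivot(T):
    obj = T[-1, :-1]
    col = int(np.argmin(obj))                 # entering variable: most negative entry
    if obj[col] >= 0:
        return T, False                       # already optimal, nothing to do
    pivot_col = T[:-1, col]
    ratios = np.full(pivot_col.shape, np.inf)
    mask = pivot_col > 0
    ratios[mask] = T[:-1, -1][mask] / pivot_col[mask]
    row = int(np.argmin(ratios))              # minimum ratio test picks the pivot row
    T = T.astype(float).copy()
    T[row] /= T[row, col]                     # scale pivot row so the pivot element is 1
    for r in range(T.shape[0]):               # eliminate the pivot column elsewhere
        if r != row:
            T[r] -= T[r, col] * T[row]
    return T, True
```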
If the optimality condition is satisfied for all Zj – Cj values but an artificial variable still appears
in the basis at a positive level, the given LPP does not possess any feasible solution.
PHASE I
Three cases arise at the end of phase I:
(i) If the minimum value of r > 0 and at least one artificial variable appears in the basis at a
positive level, then the given problem has no feasible solution and the procedure terminates.
(ii) If the minimum value of r = 0 and no artificial variable appears in the basis, then a basic
feasible solution to the given problem is obtained. The artificial column(s) are deleted for phase II
computations.
(iii) If the minimum value of r = 0 and one or more artificial variables appear in the basis at zero
level, then a feasible solution to the original problem is obtained. However, we must take care of
such an artificial variable and see that it never becomes positive during phase II computations. A
zero cost coefficient is assigned to this artificial variable and it is retained in the initial table of
phase II. If this variable remains in the basis at zero level in all phase II computations, there is no
problem. However, a problem arises if it becomes positive in some iteration. In such a case, a
slightly different approach is adopted in selecting the outgoing variable: the lowest non-negative
replacement ratio criterion is not used; instead, the artificial variable (or one of the artificial
variables, if there are more than one) is selected as the outgoing variable. The simplex method can
then be applied as usual to obtain the optimal basic feasible solution to the given L.P. problem.
PHASE II
Use the optimum basic feasible solution of phase I as a starting solution for the original LPP.
Assign the actual costs to the variables in the objective function and a zero cost to every artificial
variable remaining in the basis at zero level. Delete from the table the columns of the artificial
variables that were eliminated from the basis in phase I. Apply the simplex method to the modified
simplex table obtained at the end of phase I till an optimum basic feasible solution is obtained or
till there is an indication of an unbounded solution.
REMARKS
1. In phase I, the iterations are stopped as soon as the value of the new (artificial) objective
function becomes zero because this is its minimum value. There is no need to continue
till the optimality is reached if this value becomes zero earlier than that.
2. Note that the new objective function is always of minimization type regardless of
whether the original problem is of maximization or minimization type.
EX:
Consider the following linear programming model and solve it using the two-phase method.
Phase 1
The model for phase 1 with its revised objective function is shown below. The corresponding
initial table is presented in Table.
Objective function:
Min r = A1 + A2
= 128 - 7x1 - 14x2 - 18x3 + S1 + S2
subject to
4x1 + 8x2 + 6x3 - S1 + A1 = 64
3x1 + 6x2 + 12x3 - S2 + A2 = 64
x1, x2, x3, S1, S2, A1, A2 ≥ 0
Initial Table of Phase 1
The optimal results are x1 = 0, x2 = 3.2 = 16/5, x3 = 6.4 = 32/5 and min z = 153.6.
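As a quick numerical check (not part of the original notes; the surplus terms −S1 and −S2 suggest the original constraints were of '≥' type), the reported point x = (0, 3.2, 6.4) satisfies both constraints, so phase 1 can indeed drive r = A1 + A2 down to zero:

```python
# Verify that the reported optimum is feasible for the original constraints,
# i.e. the artificial variables can be driven to zero in phase 1.
x1, x2, x3 = 0.0, 3.2, 6.4
print(4 * x1 + 8 * x2 + 6 * x3)    # 64.0 -> binding, surplus S1 = 0
print(3 * x1 + 6 * x2 + 12 * x3)   # 96.0 >= 64, surplus S2 = 32
```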
UNIT – II TRANSPORTATION PROBLEM :
Introduction:
The transportation problem is one of the subclasses of LPPs in which the objective is to
transport various quantities of a single homogeneous commodity, initially stored at various
origins, to different destinations in such a way that the total transportation cost is minimum.
To achieve this objective we must know the amounts available at the various origins and the
quantities demanded at the various destinations. In addition, we must know the costs that result
from transporting one unit of the commodity from the various origins to the various destinations.
The problem is then to determine a transportation schedule that minimizes the total
transportation cost while satisfying the supply and demand constraints.
Mathematically, the problem may be stated as an LPP as follows:
Minimize Z = Σi Σj cij xij (i = 1, 2, ..., m; j = 1, 2, ..., n)
subject to
Σj xij = ai for i = 1, 2, ..., m (supply available at origin i)
Σi xij = bj for j = 1, 2, ..., n (demand required at destination j)
xij ≥ 0 for all i and j.
A necessary and sufficient condition for the existence of a feasible solution to the general
transportation problem is
Σi ai = Σj bj (total supply = total demand).
The number of basic variables of the general transportation problem at any stage of a feasible
solution must be m + n - 1.
NOTE:
1. When the total demand is equal to total supply then the transportation table is said to be
balanced, otherwise unbalanced.
2. The allocated cells in the transportation table will be called occupied cells and empty
cells are called non-occupied cells.
In the transportation table, the rows represent the origins with their supplies, the columns
represent the destinations with their demands, and the cells contain the unit transportation costs.
Least Cost (Matrix Minima) Method:
Step1: Select the cell having lowest unit cost in the entire table and allocate the minimum of
supply or demand values in that cell.
Step2: Then eliminate the row or column in which the supply or demand is exhausted. If both the
supply and demand values are the same, either the row or the column can be eliminated.
In case the smallest unit cost is not unique, select the cell where the maximum allocation
can be made.
Step3: Repeat the process with next lowest unit cost and continue until the entire available
supply at various sources and demand at various destinations is satisfied.
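A compact sketch of the least cost method on a small, made-up cost matrix (the data below is illustrative and not from the notes):

```python
# Least cost method for an initial basic feasible solution (illustrative data).
import numpy as np

cost   = np.array([[4.0, 6.0, 8.0],
                   [3.0, 2.0, 5.0],
                   [7.0, 4.0, 1.0]])
supply = np.array([60.0, 70.0, 80.0])
demand = np.array([50.0, 90.0, 70.0])   # balanced: total supply = total demand = 210

alloc = np.zeros_like(cost)
c = cost.copy()
while supply.sum() > 0 and demand.sum() > 0:
    i, j = np.unravel_index(np.argmin(c), c.shape)   # cell with the lowest unit cost
    q = min(supply[i], demand[j])                    # allocate as much as possible
    alloc[i, j] = q
    supply[i] -= q
    demand[j] -= q
    if supply[i] == 0:
        c[i, :] = np.inf                             # row exhausted, eliminate it
    if demand[j] == 0:
        c[:, j] = np.inf                             # column exhausted, eliminate it

print(alloc)
print("total cost =", (alloc * cost).sum())
```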
Vogel's Approximation Method (VAM):
This method also takes costs into account in allocation. Five steps are involved in applying this
heuristic:
Step 1: Determine the difference between the lowest two cells in all rows and columns,
including dummies.
Step 2: Identify the row or column with the largest difference. Ties may be broken arbitrarily.
Step 3: Allocate as much as possible to the lowest-cost cell in the row or column with the
highest difference. If two or more differences are equal, allocate as much as possible to the
lowest-cost cell in these rows or columns.
Step 4: Stop the process if all row and column requirements are met. If not, go to the next step.
Step 5: Recalculate the differences between the two lowest cells remaining in all rows and
columns. Any row and column with zero supply or demand should not be used in calculating
further differences. Then go to Step 2.
Vogel's approximation method (VAM) usually produces an optimal or near-optimal starting
solution. One study found that VAM yields an optimum solution in 80 percent of the sample
problems tested.
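A matching sketch of VAM on the same illustrative data used above (again, not from the notes); here the penalty of a row or column is the difference between its two lowest remaining costs:

```python
# Vogel's approximation method for an initial solution (illustrative data).
import numpy as np

cost   = np.array([[4.0, 6.0, 8.0],
                   [3.0, 2.0, 5.0],
                   [7.0, 4.0, 1.0]])
supply = np.array([60.0, 70.0, 80.0])
demand = np.array([50.0, 90.0, 70.0])

alloc = np.zeros_like(cost)
c = cost.copy()

def penalty(costs):
    """Difference between the two lowest remaining (finite) costs."""
    f = np.sort(costs[np.isfinite(costs)])
    if f.size == 0:
        return -np.inf
    return f[1] - f[0] if f.size > 1 else f[0]

while supply.sum() > 0 and demand.sum() > 0:
    row_pen = [penalty(c[i, :]) if supply[i] > 0 else -np.inf for i in range(len(supply))]
    col_pen = [penalty(c[:, j]) if demand[j] > 0 else -np.inf for j in range(len(demand))]
    if max(row_pen) >= max(col_pen):                 # line with the largest penalty
        i = int(np.argmax(row_pen)); j = int(np.argmin(c[i, :]))
    else:
        j = int(np.argmax(col_pen)); i = int(np.argmin(c[:, j]))
    q = min(supply[i], demand[j])                    # allocate to its lowest-cost cell
    alloc[i, j] = q
    supply[i] -= q
    demand[j] -= q
    if supply[i] == 0:
        c[i, :] = np.inf
    if demand[j] == 0:
        c[:, j] = np.inf

print(alloc, "total cost =", (alloc * cost).sum())
```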
Assignment problem
Assignment problem is a particular class of transportation linear programming problems
Supplies and demands will be integers (often 1)
Traveling salesman problem is a special type of assignment problem
Objectives
To structure and formulate a basic assignment problem
To demonstrate the formulation and solution with a numerical example
To formulate and solve traveling salesman problem as an assignment problem
Structure of Assignment Problem
Assignment problem is a special type of transportation problem in which
Number of supply and demand nodes are equal.
Supply from every supply node is one.
Every demand node has a demand for one.
Solution is required to be all integers.
Goal of a general assignment problem: find an optimal assignment of machines (laborers) to
jobs without assigning any agent more than once and ensuring that all jobs are completed.
The objective might be to minimize the total time to complete a set of jobs, or to
maximize skill ratings, maximize the total satisfaction of the group or to minimize
the cost of the assignments
This is subjected to the following requirements:
Each machine is assigned no more than one job.
Each job is assigned to exactly one machine.
Formulation of Assignment Problem
Consider m laborers to whom n tasks are assigned
No laborer can either sit idle or do more than one task
Every pair of person and assigned work has a rating
Rating may be cost, satisfaction, penalty involved or time taken to finish the job
There are n × n such possible combinations of persons and jobs.
Optimization problem: Find such job-man combinations that optimize the sum of ratings
among all.
Let xij be the decision variable, where xij = 1 if laborer i is assigned task j and xij = 0 otherwise.
The objective function is
Minimize Z = Σi Σj cij xij (i = 1, 2, ..., m; j = 1, 2, ..., n)
Since each task is assigned to exactly one laborer and each laborer is assigned only one job, the
constraints are
Σi xij = 1 for j = 1, 2, ..., n
Σj xij = 1 for i = 1, 2, ..., m
xij = 0 or 1
Due to the special structure of the assignment problem, the solution can be found out
using a more convenient method called Hungarian method.
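The Hungarian method itself is not worked out in these notes. For illustration (assuming SciPy is available), its result can be obtained with scipy.optimize.linear_sum_assignment on a hypothetical cost matrix:

```python
# Assignment of 4 laborers to 4 jobs with a made-up cost (time) matrix.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[ 9, 11, 14, 11],
                 [ 6, 15, 13, 13],
                 [12, 13,  6,  8],
                 [11,  9, 10, 12]])

rows, cols = linear_sum_assignment(cost)        # minimizes total cost by default
for i, j in zip(rows, cols):
    print(f"laborer {i + 1} -> job {j + 1} (cost {cost[i, j]})")
print("total cost =", cost[rows, cols].sum())
```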
Unit – III Replacement
Unit –IV Game Theory
Introduction
Game theory is a formal methodology and a set of techniques to study the interaction of rational agents in
strategic settings. 'Rational' here means the standard thing in economics: maximizing over well-defined
objectives; 'strategic' means that agents care not only about their own actions, but also about the actions
taken by other agents. Note that decision theory (which you should have seen at least a bit of last term)
is the study of how an individual makes decisions in non-strategic settings; hence game theory is
sometimes also referred to as multi-person decision theory. The common terminology for the field
comes from its putative applications to games such as poker, chess, etc. However, the applications we are
usually interested in have little directly to do with such games. In particular, these are what we call
"zero-sum" games in the sense that one player's loss is another player's gain; they are games of pure conflict. In
economic applications, there is typically a mixture of conflict and cooperation motives.
What is a Game?
A game is a formal description of a strategic situation.
Game theory
Game theory is the formal study of decision-making where several players must make
choices that potentially affect the interests of the other players.
Elements of a Game
The players.
The strategies of each player.
The consequences (payoffs) for each player for every possible profile of strategy choices
of all players.
When analyzing any game, we make the following assumptions about both players: each player is
rational, and each player chooses whichever strategy best serves his own interests.
TYPES OF GAMES
1. Two-person games and n-person games.
In two person games, the players may have many possible choices open to them for each play of
the game, but the number of players remains only two. Hence it is called a two person game. In
the case of more than two persons, the game is generally called an n-person game.
2. Zero sum game.
A zero sum game is one in which the sum of the payments to all the competitors is zero for every
possible outcome of the game; that is, in such a game the sum of the points won equals the sum of
the points lost.
SADDLE POINT
A saddle point is a position in the payoff matrix where the maximum of the row minima coincides
with the minimum of the column maxima. The payoff at the saddle point is called the value of the game.
Strategy:
A Strategy is a set of best choices for a player for an entire game.
Pure Strategy:
A pure strategy defines a specific move or action that a player will follow in every possible
attainable situation in a game.
Mixed strategy:
A mixed strategy is an active randomization, with given probabilities, that determines the
player's decision. As a special case, a mixed strategy can be the deterministic choice of one of
the given pure strategies.
Payoff :
The payoff or outcome is the state of the game at its conclusion. In games like chess the payoff is
win or loss.
Payoff matrix
Suppose player A has m activities and player B has n activities. Then a payoff matrix
can be formed by adopting the following rules:
1. Row designations for the matrix are the activities available to player A.
2. Column designations for the matrix are the activities available to player B.
3. The cell entry Vij is the payment to player A in A's payoff matrix when A chooses the
activity i and B chooses the activity j.
4. In a zero-sum, two-person game, the cell entry in player B's payoff matrix will be the
negative of the corresponding cell entry Vij in player A's payoff matrix so that the sum of the
payoff matrices for player A and player B is ultimately zero.
Value of the game
Value of the game is the maximum guaranteed gain to player A (the maximizing player) if both
the players use their best strategies. It is generally denoted by 'V' and it is unique.
Maximin - Minimax Principle
This principle is used for the selection of optimal strategies by the two players. Consider two players
A and B. A is a player who wishes to maximize his gain while player B wishes to minimize his
losses. Since A would like to maximize his minimum gain, we obtain for player A the value
called the maximin value, and the corresponding strategy is called the maximin strategy. On the
other hand, since player B wishes to minimize his losses, a value called the minimax value, which
is the minimum of the maximum losses, is found. The corresponding strategy is called the minimax
strategy. When these two values are equal, the corresponding strategies are called optimal strategies
and the game is said to have a saddle point. The value of the game is given by the saddle point. The
selection of maximin and minimax strategies by A and B is based upon the so-called maximin -
minimax principle, which guarantees the best of the worst results.
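As a small illustration of the maximin-minimax principle (the payoff matrix below is made up, not from the notes):

```python
# Check a payoff matrix for player A for a saddle point.
import numpy as np

A = np.array([[4, 2, 3],
              [1, 0, -1],
              [5, 2, 2]])

maximin = A.min(axis=1).max()    # best of the row minima (A's guaranteed gain)
minimax = A.max(axis=0).min()    # best of the column maxima (B's guaranteed loss)
print("maximin =", maximin, "minimax =", minimax)
if maximin == minimax:
    print("saddle point exists; value of the game V =", maximin)
else:
    print("no saddle point; mixed strategies are needed")
```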
Dominance property
Sometimes it is observed that one of the pure strategies of either player is always inferior to at
least one of the remaining ones. The superior strategies are said to dominate the inferior ones. In
such cases of dominance, we reduce the size of the payoff matrix by deleting those strategies
which are dominated by others. The general rules for dominance are:
I. If all the elements of a row are less than or equal to the corresponding elements of another
row, then that row is dominated by the other row.
II. If all the elements of a column are greater than or equal to the corresponding elements of
another column, then that column is dominated by the other column.
III. Dominated rows and columns may be deleted to reduce the size of the pay-off matrix, as
the optimal strategies will remain unaffected.
IV. If some linear combination of some rows dominates a row, then that row will be deleted.
Similar arguments follow for columns.
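The dominance rules above translate directly into element-wise comparisons. A small sketch on a made-up payoff matrix for player A:

```python
# Detect dominated rows and columns (rules I and II above).
import numpy as np

A = np.array([[3, 5, 4],
              [1, 2, 0],
              [4, 6, 5]])

# Rule I: row i is dominated if every element is <= the corresponding element of another row.
for i in range(A.shape[0]):
    for k in range(A.shape[0]):
        if i != k and np.all(A[i] <= A[k]):
            print(f"row {i + 1} is dominated by row {k + 1} and may be deleted")

# Rule II: column j is dominated if every element is >= the corresponding element of another column.
for j in range(A.shape[1]):
    for k in range(A.shape[1]):
        if j != k and np.all(A[:, j] >= A[:, k]):
            print(f"column {j + 1} is dominated by column {k + 1} and may be deleted")
```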
UNIT – V WAITING LINES
Queueing theory is the mathematical study of waiting lines, or queues.
Significance:
We come in contact with waiting line systems, or queuing systems, everywhere and every day.
Whether it is waiting in line for your morning coffee, opening up your email account to see your list
of new messages, or stopping at a red light at a traffic intersection, you are participating in a
waiting line system.
For a business, waiting line systems are especially important to the operations management of the
organization.
Each specific situation will be different, but waiting line systems are essentially composed of
four major elements:
The easiest waiting line model involves a single-server, single-line, single-phase system.
The following assumptions are made when we model this environment:
1. The customers are patient (no balking, reneging, or jockeying) and come from a population
that can be considered infinite.
2. Customer arrivals are described by a Poisson distribution with a mean arrival rate of λ
(lambda). This means that the time between successive customer arrivals follows an exponential
distribution with an average of 1/λ.
3. The customer service rate is described by a Poisson distribution with a mean service rate of µ
(mu). This means that the service time for one customer follows an exponential distribution with
an average of 1/µ.
4. The waiting line priority rule used is first-come, first-served.
Using these assumptions, we can calculate the operating characteristics of a waiting line system
using the following formulas:
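The formulas themselves are not reproduced in these notes; the sketch below summarizes the standard single-server (M/M/1) results that the text refers to:

```python
# Operating characteristics of the single-server (M/M/1) waiting line model.
# lam = mean arrival rate, mu = mean service rate; requires lam < mu.
def mm1(lam, mu):
    rho = lam / mu                       # average utilization of the server
    L   = lam / (mu - lam)               # average number of customers in the system
    Lq  = lam**2 / (mu * (mu - lam))     # average number of customers waiting in line
    W   = 1 / (mu - lam)                 # average time a customer spends in the system
    Wq  = lam / (mu * (mu - lam))        # average time a customer waits in line
    P0  = 1 - rho                        # probability that the system is empty
    return rho, L, Lq, W, Wq, P0

print(mm1(lam=4, mu=6))                  # e.g. 4 arrivals/hour served at 6/hour
```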
MULTISERVER WAITING LINE MODEL
In the single-line, multi server, single-phase model, customers form a single line and are served
by the first server available. The model assumes that there are s identical servers, the service time
distribution for each server is exponential, and the mean service time is 1/µ. Using these
assumptions, we can describe the operating characteristics with the following formulas:
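Again, the formulas are not reproduced in these notes; the sketch below uses the standard M/M/s (Erlang-C) results for s identical servers:

```python
# Operating characteristics of the multiserver (M/M/s) waiting line model.
from math import factorial

def mms(lam, mu, s):
    r = lam / mu                                  # offered load
    rho = r / s                                   # utilization per server (must be < 1)
    p0 = 1.0 / (sum(r**n / factorial(n) for n in range(s))
                + r**s / (factorial(s) * (1 - rho)))
    Lq = p0 * r**s * rho / (factorial(s) * (1 - rho)**2)   # average number waiting
    L  = Lq + r                                   # average number in the system
    Wq = Lq / lam                                 # average wait in line
    W  = Wq + 1 / mu                              # average time in the system
    return p0, Lq, L, Wq, W

print(mms(lam=10, mu=4, s=3))                     # e.g. 10 arrivals/hour, 3 servers
```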
Distribution Of Arrivals:
When describing a waiting system, we need to define the manner in which customers or the
waiting units are arranged for service. Waiting line formulas generally require an arrival rate, or
the number of units per period (such as an average of one every six minutes). A constant arrival
distribution is periodic, with exactly the same time between successive arrivals. In productive
systems, the only arrivals that truly approach a constant interval period are those subject to
machine control. Much more common are variable (random) arrival distributions. In observing
arrivals at a service facility, we can look at them from two viewpoints: First, we can analyze the
time between successive arrivals to see if the times follow some statistical distribution. Usually
we assume that the time between arrivals is exponentially distributed. Second, we can set some
time length (T) and try to determine how many arrivals might enter the system within T. We
typically assume that the number of arrivals per time unit is Poisson distributed.
Exponential Distribution
In the first case, when arrivals at a service facility occur in a purely random fashion, a plot of the
inter arrival times yields an exponential distribution
The probability function is
f(t) = λe^(−λt)
where λ is the mean number of arrivals per time period.
Poisson Distribution
In the second case, where one is interested in the number of arrivals during some time period T,
the distribution is obtained by finding the probability of exactly n arrivals during T. If the arrival
process is random, the distribution is the Poisson, and the formula is
P(n) = ((λT)^n e^(−λT)) / n!
where P(n) is the probability of exactly n arrivals during the time period T.
UNIT – VI INVENTORY
Unit – VII
Dynamic Programming:
Many decision making problems involve a process that takes place in several stages in such a
way that at each stage, the process is dependent on the strategy chosen. Such types of problems
are called Dynamic Programming Problems. Thus dynamic programming is concerned with the
theory of multistage decision processes.
Principle of Optimality
It may be interesting to note that the concept of dynamic programming is largely based upon the
principle of optimality due to Bellman, viz.,
"An optimal policy has the property that whatever the initial state and initial decision are, the
remaining decisions must constitute an optimal policy with regard to the state resulting from the
first decision."
The principle of optimality implies that given the initial state of a system, an optimal policy for
the subsequent stage does not depend upon the policy adopted at the preceding stages. That is,
the effect of a current policy decision on any of the policy decisions of the preceding stages need
not be taken into account at all. It is usually referred to as the Markovian property of dynamic
programming.
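As a small illustration of the principle of optimality (the staged network below is made up, not from the notes), the minimum-cost route from the initial state to the final state can be computed by backward recursion, solving each stage using only the optimal values of the following stage:

```python
# Backward recursion on a small staged network (illustrative data).
# Stages: A -> {B1, B2} -> {C1, C2} -> D, with the arc costs given below.
cost = {
    ("A", "B1"): 2, ("A", "B2"): 4,
    ("B1", "C1"): 7, ("B1", "C2"): 3,
    ("B2", "C1"): 1, ("B2", "C2"): 5,
    ("C1", "D"): 6, ("C2", "D"): 4,
}
stages = [["A"], ["B1", "B2"], ["C1", "C2"], ["D"]]

f = {"D": 0}          # f[state] = minimum cost from this state to the final state D
policy = {}
for k in range(len(stages) - 2, -1, -1):          # work backwards, stage by stage
    for s in stages[k]:
        value, decision = min((cost[(s, t)] + f[t], t) for t in stages[k + 1])
        f[s], policy[s] = value, decision
print(f["A"], policy)   # minimum total cost from A and the optimal decision at each state
```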
UNIT – VIII SIMULATION