CH-4 System Analysis & Optimization
3. Linear and Nonlinear Systems
A linear system is one in which the output is a constant ratio of the input. In a
linear system the output due to a combination of inputs is equal to the sum of the
outputs from each of the inputs individually, i.e. the principle of superposition is valid.
For example, a system (watershed) in which the input x (rainfall) and the output y
(runoff) are related by y = mx, in which m is a constant, is a linear system. The unit
hydrograph in hydrology is a linear system (as the hydrograph ordinate of the direct
runoff hydrograph is proportional to the rainfall excess). On the other hand, a system
in which the input x and the output y are related by the linear equation y = mx + c, in
which m and c are constants, is not a linear system (why?). A nonlinear system is one
in which the input-output relation is such that the principle of superposition is not
valid. In reality, a watershed is a nonlinear system, as the runoff from the watershed
due to a storm is a nonlinear function of the (storm) rainfall over its area.
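As a check of the remark above on y = mx + c: the output due to a combined input
x1 + x2 is m(x1 + x2) + c, whereas the sum of the individual outputs is
(mx1 + c) + (mx2 + c) = m(x1 + x2) + 2c. The two differ by c, so superposition holds
only when c = 0.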
Lumped and Distributed Parameter Systems
A lumped parameter system ignores the spatial variation of the system parameters; for
example, a homogeneous isotropic aquifer is analyzed as a lumped parameter system.
Instead, if the spatial variation of the transmissivity in modelling a water table
aquifer is to be taken into account, the aquifer has to be modelled as a distributed
parameter system.
8. Stable Systems
A stable system is one in which the output is bounded if the input is bounded.
Virtually all systems in hydrology and water resources are stable systems.
Systems analysis involves the application of mathematical planning and design
techniques, which include some formal
optimization procedure. When scarce resources must be used effectively, systems
analysis techniques are particularly promising (for example, in optimal crop water
allocation to several competing crops under conditions of limited water supply). It
must be clearly understood that systems analysis is not merely an exercise in
mathematical modelling but extends much further into processes such as design and
decision-making. The techniques may use both descriptive and prescriptive models. The
descriptive models deal with the way the system works, whereas the prescriptive ones
are aimed at deciding how the system should be operated to best achieve the specified
objectives.
Example Problems
Prediction: In surface water hydrology, the problem is to predict the storm runoff
(output), knowing the rainfall excess (input) and the unit hydrograph (system). In
ground water hydrology, the problem is to determine the response (output) of a given
aquifer (system), for given rainfall and irrigation application (input). In a reservoir
(system) the problem is to determine irrigation allocations (output) for given inflow and
storage (input), based on known or given operating policy.
Identification (the inverse problem): In surface water hydrology, the problem is to
determine the unit hydrograph (system), given the rainfall excess (input) and the
direct runoff (output) hydrograph. In ground water hydrology, the problem is to
determine the aquifer parameters (system), given the aquifer response (output) for
known rainfall and irrigation application (input).
In a reservoir system, the problem is to determine the reservoir release policy (system
operation) for a specified objective (output), for given inflows (input).
Synthesis: The problem of synthesis is even more complex than the inverse problem
mentioned earlier. Here no records of input and output are available. An example is the
derivation of Snyder’s synthetic unit hydrograph using watershed characteristics to
convert known values of rainfall excess to runoff.
1. Linear Programming
The objective function and the constraints are all linear. It is probably the most
widely applied optimization technique in the world. In integer programming, which is a
variant of linear programming, the decision variables take on integer values. In mixed
integer programming, only some of the variables are integers.
2. Nonlinear Programming
The objective function and/or (any of) the constraints involve nonlinear terms. General
solution procedures do not exist. Special purpose solutions, such as quadratic
programming, are available for limited applications. However, linear programming may
still be used in some engineering applications if a nonlinear function can be either
transformed to a linear function or approximated by piece-wise linear segments.
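A sketch of the piece-wise linear idea in Python (the breakpoints and the interpolation helper are illustrative assumptions, not from the text):

import numpy as np

# Approximate a nonlinear cost f(x) = x**2 by linear interpolation between
# breakpoints -- the kind of approximation that lets LP handle a nonlinear term.
def piecewise_linear(x, breakpoints, f):
    ys = [f(b) for b in breakpoints]
    return np.interp(x, breakpoints, ys)

f = lambda x: x**2
bps = [0, 1, 2, 3, 4]
for x in (0.5, 1.5, 3.5):
    print(x, f(x), piecewise_linear(x, bps, f))   # exact vs. approximation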
3. Dynamic Programming
Dynamic programming offers a solution procedure for linear or nonlinear problems in which multistage
decision-making is involved. The choice of technique for a given problem depends on
the configuration of the system being analyzed, the nature of the objective function
and the constraints, the availability and reliability of data, and the depth of detail
needed in the investigation. Linear programming (LP) and dynamic programming (DP)
are the most common mathematical programming models used in water resources
systems analysis. Simulation, by itself or in combination with LP, DP, or both, is
used to analyze complex water resources systems.
Maximize f(X)
Subject to
gj(X) ≤ 0,  j = 1, 2, ..., m
where X = [x1, x2, ..., xn] is a vector of decision variables.
In this general problem there are n decision variables (viz. x1, x2, ..., xn) and m
constraints. The complexity of the problem varies depending on the nature of the
function f(X), the constraint functions gj(X), and the number of variables and
constraints.
It may be noted that by repeatedly simulating the system with various sets of inputs it
is possible to obtain near-optimal solutions.
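As a sketch of that idea in Python, a brute-force scan over candidate decisions for the cropping problem solved later as Example 4.1 (the grid step of 5 ha is an assumption; in general such a scan is only near-optimal):

best = (None, float("-inf"))
for x1 in range(0, 201, 5):                        # area of crop 1, ha
    for x2 in range(0, 201, 5):                    # area of crop 2, ha
        if 3 * x1 + x2 <= 300 and x1 + x2 <= 200:  # money and land limits
            z = 2 * x1 + x2                        # simulated net benefit
            if z > best[1]:
                best = ((x1, x2), z)
print(best)   # ((50, 150), 250) -- on this grid the scan happens to hit the optimum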
Systems analysts find simulation extremely useful as a
screening model for very large systems, and as a planning model to determine the
design and operating parameters for a detailed operational study of a given system.
A. Graphical Method:
Linear programming (LP) is a scheme of solving an optimization problem in which both
the objective function and the constraints are linear functions of decision variables.
There are several ways of expressing a linear programming formulation, which lend
themselves to solutions, with appropriate modifications to the original problem.
We shall first illustrate a maximization problem in LP in its classical form, and
discuss variations later in the section.
The variables xn+1, xn+2, ..., xn+m are called slack
variables, introduced to convert the inequality constraints into equalities. The objective
function is written including the slack variables, with coefficients
cn+1 = cn+2 = ... = cn+m = 0.
For example, in the simplex method (an iterative method, discussed later in the
section), the starting solution is chosen to be the one in which the decision
variables x1, x2, ..., xn are assumed zero, so that the slack variable in each
equality constraint equals the right hand side of that equation, i.e., xn+1 = b1,
xn+2 = b2, ..., xn+m = bm.
Obviously, the objective function value for this starting solution is z = 0.
Iterations are then performed on this starting solution, moving to better values
of the objective function, till optimality is reached.
Example 4.1
Two crops are grown on 200 ha of land. The cost of raising crop 1 is 3 unit/ha, while
for crop 2 it is 1 unit/ha. The benefit from crop 1 is 5 unit/ha and from crop 2, it is 2
unit/ha. A total of 300 units of money is available for raising both crops. What should
be the cropping plan (how much area for crop 1 and how much for crop 2) in order to
maximize the total net benefits?
Solution:
The net benefit of raising crop 1 = 5 - 3 = 2 unit/ha
The net benefit of raising crop 2 = 2 - 1 = 1 unit/ha
Let x1 be the area of crop 1 in hectares and x2 be that of crop 2, and z the total net
benefit.
Then the net benefit of raising both crops is 2x1 + x2. However, there are two
constraints. One limits the total cost of raising the two crops to 300, and the other
limits the total area of the two crops to 200 ha. These two are the resource
constraints. Thus the complete formulation of the problem is:
Maximize z = 2x1 + x2    (4.1)
Subject to
3x1 + x2 ≤ 300
x1 + x2 ≤ 200
x1, x2 ≥ 0    (4.2)
Equation (4.1) is the objective function and Eqs (4.2) are the constraints. The non-
negativity constraints for x1 and x2 indicate that neither x1 nor x2 can physically be
negative (area cannot be negative).
First, the feasibility region for the constraint set should be mapped. To do this, plot
the lines 3x1 + x2 = 300 and x1 + x2 = 200, along with x1 = 0 and x2 = 0, as in Fig. 4.1. The
region bounded by the non-negativity constraints is the first quadrant in which x1≥0
and x2 ≥ 0. The region bounded by the constraint 3x1 + x2 ≤ 300 is the region OCD (it is
easily seen that since the origin x1 = 0, x2 = 0 satisfies this constraint, the region to the
left of the line CD, in which the origin lies, is the feasible region for this constraint).
Similarly, the region OAB is the feasible region for the constraint x1 + x2 ≤ 200.
Thus, the feasible region for the problem taking all constraints into account is OAPD,
where P is the point of intersection of the lines AB and CD. Any point within or on the
boundary of the region, OAPD, is a feasible solution to the problem. The optimal
solution, however, is that point which gives the maximum value of the objective
function, z, within or on the boundary of the region OAPD.
Next, consider a line for the objective function, z = 2x1 + x2 = c, for an arbitrary value c. The
line shown in the figure is drawn for c= 40 and the arrows show the direction in which
lines parallel to it will have higher value of c, i.e. if the objective function line is plotted
for two different values of c, the line with a higher value of c plots farther from the
origin than the one with a lower value (of c). We need to determine the value of c
for which the line 2x1 + x2 = c lies farthest from the origin while at the
same time passing through a point lying within or on the boundary of the feasible
region. If the z line is moved parallel to itself away from the origin, the farthest point
on the feasible region that it touches is the point P (50, 150). This can be easily seen
by an examination of the slopes of the z line and the constraint lines.
Since the slope of the z line is -2, which lies between -3 (the slope of the line
3x1 + x2 = 300) and -1 (the slope of the line x1 + x2 = 200), the farthest point of the feasible
region from the origin lying on a line parallel to the z line is the corner point P. Thus, the point
P (x1 = 50, x2 = 150) is the optimal solution to the problem. The maximized net
benefit is z = 250 units.
The graphical method can be used only for a two-variable problem. For a general
LP problem, the most common method used is the simplex method.
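As a numerical cross-check of the graphical solution, the same LP can be solved with an off-the-shelf solver; a minimal sketch assuming SciPy (whose linprog minimizes, so the objective is negated):

from scipy.optimize import linprog

c = [-2, -1]                      # negated objective: maximize 2*x1 + x2
A_ub = [[3, 1],                   # 3*x1 + x2 <= 300 (money constraint)
        [1, 1]]                   # x1 + x2 <= 200   (land constraint)
b_ub = [300, 200]
bounds = [(0, None), (0, None)]   # x1, x2 >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x, -res.fun)            # expected: [ 50. 150.] 250.0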
Terminology:
Solution: A set of values assigned to the variables in a given problem is referred to as
a solution. A solution, in general, may or may not satisfy any or all of the constraints.
Basis and basic variables: The basis is the set of basic variables. The number of basic
variables is equal to the number of equality constraints. Only the variables in the basis
can take non-zero values; the non-basic variables are zero. In the optimal solution of
the example mentioned earlier, out of the total of four variables x1, x2, x3 and x4, the
variables x1 and x2 are in the basis, and x3 and x4 are out of the basis, i.e. x1 and x2 are
basic variables and x3 and x4 are non-basic variables in the optimal solution.
Nonbasic variables: Variables which are outside (or not in) the basis are non-basic
variables. x3 and x4 in the optimal solution of the example are non-basic variables.
Feasible solution: Any solution (set of values associated with each variable) that
satisfies all the constraints is a feasible solution.
Basic solution: Assume there are a total of n + m variables (n decision
variables and m slack variables) and m equality constraints. Then a
basic solution is one which has m basic variables and n non-
basic variables. All non-basic variables are zero, so a basic solution has at least n zero-valued variables.
Basic feasible solution: A basic solution which is also feasible is a basic feasible
solution.
Initial basic feasible solution: The basic feasible solution used as an initial solution
in the simplex method is called an initial basic feasible solution.
B. Simplex Method
Prelude to Simplex Method
Let us look upon the solution as one resulting from the following set of simultaneous
equations from Example 4.1:
3x1 + x2 + x3 = 300    (4.3)
x1 + x2 + x4 = 200    (4.4)
where x3 and x4 are the slack variables in the respective constraints, Eqs. (4.3) and
(4.4), introduced to make the left and right hand sides equal.
Equations 4.3 and 4.4 have four variables in two equations. These can be
uniquely solved when two of the variables x1, x2, x3 and x4 assume zero values. Our
aim is to look for such a combination of x1, x2, x3 and x4 satisfying Eqs. 4.3 and 4.4,
which make the objective function z = 2x1 + x2 + 0·x3 + 0·x4 a maximum. If we assume two
of these variables to be zero, then the remaining two can be solved from the two
equations. In general, if there are m equality constraints and n + m total
variables (including slack variables), we can solve for any m of the n + m
variables if we assign zero value to each of the remaining n variables. In the starting
solution for the simplex method, we assign zero values to the n decision variables and
the remaining m variables are solved from m simultaneous equations. In the example,
we now need a search procedure to determine the optimal combinations of the four
variables x1, x2, x3 and x4 that maximizes the value of the objective function, z. This is
done by iterating the starting solution to move to that adjacent corner point solution,
which results in the best value of z, in the simplex method. In any corner point
solution, it may be noted that there can be at most m non-zero variables
and at least n zero-valued variables. This is the second important feature
implicit in the simplex method.
Example 4.1 will now be solved using the simplex method.
Maximize z = 2x1+ x2
Subject to
3x1 + x2 ≤ 300
x1 + x2 ≤ 200
x1, x2 ≥ 0
First introduce slack variables (non-negative) and convert the constraints into equality
constraints.
Maximize z = 2x1 + x2 + 0·x3 + 0·x4
Subject to
3x1 + x2 + x3 = 300
x1 + x2 + x4 = 200
x1, x2, x3, x4 ≥ 0
Here, the total number of variables is n + m = 4 and the number of equality
constraints is m = 2. Therefore, two of these four variables have to be set to zero to
enable us to solve the two equations for the remaining two variables. For example,
consider the solution (x1, x2, x3, x4) = (25, 25, 200, 150). This solution is a feasible
solution, but not a basic solution (verify). On the other hand, the solution
(100, 0, 0, 100) is a basic feasible solution (because it has two zero-valued variables
and satisfies both constraints).
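These two checks are easy to automate; a small sketch in Python:

def is_feasible(x1, x2, x3, x4):
    # Non-negative and satisfying both equality constraints.
    return (min(x1, x2, x3, x4) >= 0
            and 3 * x1 + x2 + x3 == 300
            and x1 + x2 + x4 == 200)

def is_basic(x1, x2, x3, x4, n=2):
    # A basic solution has at least n (= number of decision variables) zeros.
    return sum(v == 0 for v in (x1, x2, x3, x4)) >= n

for sol in [(25, 25, 200, 150), (100, 0, 0, 100)]:
    print(sol, is_feasible(*sol), is_basic(*sol))
# (25, 25, 200, 150): feasible but not basic; (100, 0, 0, 100): basic and feasible.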
The procedure is explained by means of the simplex tableau, Table 4.1. For
convenience, the objective function is also written as an equality constraint as follows:
z - 2x1 - x2 - 0·x3 - 0·x4 = 0
Table 4.1 shows the basic variables under the column ‘Basis’. The last column ‘RHS’
in each row gives the value of the basic variable in the current solution. The elements
in each row connecting these two columns are the coefficients of the variables in the
constraint represented by that row. Two important features of this table are:
Table 4.1 Starting solution
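Since the tableau iterations are mechanical, here is a minimal sketch of the tableau simplex in Python for problems of this form, max c·x subject to Ax ≤ b, x ≥ 0 (assuming NumPy; no handling of degeneracy or unbounded problems):

import numpy as np

def simplex(c, A, b):
    """Tableau simplex for: maximize c.x subject to A x <= b, x >= 0."""
    m, n = A.shape
    # Tableau layout: [A | I | b] with objective row [-c | 0 | 0] at the bottom.
    T = np.zeros((m + 1, n + m + 1))
    T[:m, :n] = A
    T[:m, n:n + m] = np.eye(m)
    T[:m, -1] = b
    T[-1, :n] = -np.asarray(c, dtype=float)
    basis = list(range(n, n + m))            # slack variables start in the basis
    while True:
        j = int(np.argmin(T[-1, :-1]))       # entering variable: most negative cost
        if T[-1, j] >= 0:
            break                            # optimal: no negative reduced costs
        ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                  for i in range(m)]
        i = int(np.argmin(ratios))           # leaving row: minimum ratio test
        T[i] /= T[i, j]                      # pivot
        for k in range(m + 1):
            if k != i:
                T[k] -= T[k, j] * T[i]
        basis[i] = j
    x = np.zeros(n + m)
    for row, var in enumerate(basis):
        x[var] = T[row, -1]
    return x[:n], T[-1, -1]

x, z = simplex([2, 1], np.array([[3.0, 1.0], [1.0, 1.0]]), np.array([300.0, 200.0]))
print(x, z)   # expected: [ 50. 150.] 250.0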
Example 4.2 (Multiple solution)
Table 4.2 shows the iterations required to arrive at the final simplex table,
giving the optimal solution using the simplex method.
Table 4.2 Simplex iterations
4.4 Dynamic Programming
4.4.1 Introduction
Consider a shortest-route problem on a staged network. Starting
from the source node A, we can reach one of the nodes B1, B2 or B3 in stage 1, one
of the nodes C1, C2 or C3 in stage 2, one of the nodes D1, D2 or D3 in stage 3, and
the destination node E in stage 4. Using forward recursion, we define the state
of the system, Sn, at stage n (n = 1, 2, 3, 4) as the node reached in that stage, and the
decision variable xn as the node in the previous stage, n-1, from which the node Sn is
reached. We look for the node xn, out of all possible xn, which results in the shortest
distance from the source to the node Sn; we denote this node as xn*. In stage n, we
answer the question: "If we are in node Sn, then which node in the previous stage n-1
must we have come from, so that the total distance up to the node Sn, starting from the
source node A, is minimum?"
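A compact sketch of this forward recursion in Python; the network layout follows the description above, but the inter-node distances are hypothetical, since the figure's values are not reproduced here:

# Stage layers of the network: A -> B1..B3 -> C1..C3 -> D1..D3 -> E.
layers = [["A"], ["B1", "B2", "B3"], ["C1", "C2", "C3"], ["D1", "D2", "D3"], ["E"]]

# dist[(u, v)] = distance from node u to node v (hypothetical values).
dist = {
    ("A", "B1"): 4, ("A", "B2"): 6, ("A", "B3"): 3,
    ("B1", "C1"): 7, ("B1", "C2"): 4, ("B1", "C3"): 6,
    ("B2", "C1"): 3, ("B2", "C2"): 2, ("B2", "C3"): 4,
    ("B3", "C1"): 4, ("B3", "C2"): 1, ("B3", "C3"): 5,
    ("C1", "D1"): 1, ("C1", "D2"): 4, ("C1", "D3"): 6,
    ("C2", "D1"): 6, ("C2", "D2"): 3, ("C2", "D3"): 3,
    ("C3", "D1"): 3, ("C3", "D2"): 3, ("C3", "D3"): 2,
    ("D1", "E"): 3, ("D2", "E"): 4, ("D3", "E"): 1,
}

# f[s] = shortest distance from A to node s; best[s] = xn*, the best predecessor.
f, best = {"A": 0}, {}
for prev, cur in zip(layers, layers[1:]):
    for s in cur:
        f[s], best[s] = min((f[x] + dist[(x, s)], x) for x in prev)

# Trace the optimal route back from E to A.
path, node = ["E"], "E"
while node != "A":
    node = best[node]
    path.append(node)
print(f["E"], " -> ".join(reversed(path)))   # e.g. 8  A -> B3 -> C2 -> D3 -> E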
4.5 Optimization using calculus
Some basic concepts and rules of optimization of a function of a single variable and a
function of multiple variables are presented in this section.
Local Maximum:
The function f(x) is said to have a local maximum at x1 and x4, where it has a value
higher than that at any other value of x in the neighbourhood of x1 and x4. The
function has a local maximum at x1 if
f(x1 - Δx1) ≤ f(x1) ≥ f(x1 + Δx1), for small Δx1 > 0.
Local Minimum
The function f(x) is said to have a local minimum at x2 and x5, where it has a value
lower than that at any other value of x in the neighbourhood of x2 and x5.
The function has a local minimum at x2 if
f(x2 - Δx2) ≥ f(x2) ≤ f(x2 + Δx2), for small Δx2 > 0.
Saddle Point
The function has a saddle point at x3, where the value of the function is lower on one
side of x3 and higher on the other, compared to the value at x3:
f(x3 - Δx3) ≤ f(x3) ≤ f(x3 + Δx3) (or with both inequalities reversed); the slope of f(x)
at x = x3 is zero.
Global Maximum
The function f(x) is a global maximum at x4 in the range a < x < b, where the value of
the function is higher than that at any other value of x in the defined range.
Global Minimum
The function f(x) is a global minimum at x2 in the range a < x < b, where the value of
the function is lower than that at any other value of x in the defined range.
Convexity
A function is said to be strictly convex, if a straight line connecting any two points
on the function lies completely above the function. Consider the function f(x) in
Fig. 4.4.
f(x) is said to be convex if the line AB lies completely above the function (curve AB). Note
that the value of x for any point N between A and B can be expressed as a·x1 + (1 - a)·x2,
for some value of a such that 0 ≤ a ≤ 1. Therefore, f(x) is said to be strictly convex if
f[a·x1 + (1 - a)·x2] < a·f(x1) + (1 - a)·f(x2), where 0 ≤ a ≤ 1.
1. If the inequality sign < is replaced by ≤ sign, then f(x) is said to be convex, but not
strictly convex.
2. If the inequality sign < is replaced by = sign, f(x) is a straight line and satisfies the
condition for convexity mentioned in 1 above. Therefore, a straight line is a convex
function.
Concavity
A function is said to be strictly concave if a straight line connecting any two points
on the function lies completely below the function. Consider the function f(x) in
Fig. 4.5.
A function f(x) is strictly concave, if the line AB connecting any two points A and B on
the function is completely below the function (Curve AB).
1. If the inequality > is replaced by ≥, then f(x) is said to be concave, but not strictly
concave.
2. If the inequality > is replaced by = sign, then f(x) is a straight line still satisfying the
condition for concavity. Therefore a straight line is a concave function.
3. If a function is strictly concave, its slope decreases continuously, i.e. d²f/dx² < 0. For
a concave function, however, d²f/dx² ≤ 0.
It may be noted that a straight line is both convex and concave, and is neither
strictly convex nor strictly concave.
A local minimum of a convex function is also its global minimum.
A local maximum of a concave function is also its global maximum.
The sum of (strictly) convex functions is (strictly) convex.
The sum of (strictly) concave functions is (strictly) concave.
If f(x) is a convex function, -f(x) is a concave function.
If f(x) is a concave function, -f(x) is a convex function.
In general, if f(x) is a convex function, and a is a constant, af(x) is convex, if a> 0 and
af(x) is concave if a<0.
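A quick numerical spot-check of the convexity definition in Python, for the familiar convex function f(x) = x² (the sample count and tolerance are arbitrary choices):

import random

f = lambda x: x * x                      # a known strictly convex function
for _ in range(1000):
    x1, x2 = random.uniform(-10, 10), random.uniform(-10, 10)
    a = random.random()                  # 0 <= a <= 1
    lhs = f(a * x1 + (1 - a) * x2)
    rhs = a * f(x1) + (1 - a) * f(x2)
    assert lhs <= rhs + 1e-9, (x1, x2, a)
print("convexity inequality held for all 1000 random samples")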
For a single-variable function, if at a stationary point x0 the first non-vanishing
higher derivative of f is of order n, then:
1. If n is even, x0 is a local minimum when the nth derivative at x0 is positive, and a
local maximum when it is negative.
2. If n is odd, x0 is a saddle point.
For a function f(X) of several variables, the Eigen values λ of the Hessian matrix H of
f(X) are given by the characteristic equation |H - λI| = 0,
where I is an identity matrix. The function
f(X) is said to be positive definite if all its Eigen values are positive, i.e. all the values of
λ should be positive. Similarly, the function f(X) is said to be negative definite if all its
Eigen values are negative, i.e. all the values of λ should be negative.
Whether the function is a minimum or maximum at X = X0 depends on the nature of
the Eigen values of its Hessian matrix evaluated at X0.
1. If all Eigen values are positive at X0, X0 is a local minimum. If all Eigen values are
positive for all possible values of X, then X0 is a global minimum.
2. If all Eigen values are negative at X0, X0 is a local maximum. If all Eigen values are
negative for all possible values of X, then X0 is a global maximum.
3. If some Eigen values are positive and some negative or some are zero, then X 0 is
neither a local minimum nor a local maximum.
Example 4.2: Examine the following functions for convexity, concavity and then
determine their values at the extreme points.
f(X) = x1² + x2² - 4x1 - 2x2 + 5
Solution:
First determine the stationary point by setting the first derivatives to zero:
df/dx1 = 2x1 - 4 = 0 → x1 = 2
df/dx2 = 2x2 - 2 = 0 → x2 = 1
Check: f(2, 1) = 2² + 1² - 8 - 2 + 5 = 0
Next determine the Hessian matrix:
d²f/dx1² = 2, d²f/dx1dx2 = 0, d²f/dx2dx1 = 0, d²f/dx2² = 2
Therefore, H f(X) =
| 2 0 |
| 0 2 |
The Eigen values are λ1 = 2, λ2 = 2. As both Eigen values are positive, the function
is strictly convex. Moreover, as the Eigen values do not depend on the
values of x1 or x2, the function is strictly convex for all X.
The stationary points are given by solving df/dx1 = 0 and df/dx2 = 0, as done above,
giving X0 = (2, 1). Therefore the function f(X) has a global minimum at X = (2, 1),
where f(X) = 0.
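The same conclusion can be checked numerically; a small sketch assuming NumPy:

import numpy as np

H = np.array([[2.0, 0.0],                # d²f/dx1², d²f/dx1dx2
              [0.0, 2.0]])               # d²f/dx2dx1, d²f/dx2²
print(np.linalg.eigvals(H))              # [2. 2.] -> positive definite, strictly convex
x0 = np.linalg.solve(H, [4.0, 2.0])      # grad f = 0: 2*x1 = 4, 2*x2 = 2
print(x0)                                # [2. 1.], the global minimum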
Example 4.3: Examine the following functions for convexity, concavity and then
determine their values at the extreme points.
Solution:
That is, X = (-2, 0) is a global maximum. The function f(X) has a global maximum
value of -4 at X = (-2, 0).
Example 4.4: Examine the following functions for convexity, concavity and then
determine their values at the extreme points.
f(X) = x1³ + x2³ - 3x1 - 12x2 + 20
Solution:
The Hessian matrix is
H f(X) =
| 6x1 0 |
| 0 6x2 |
and the Eigen values are given by the equation |H - λI| = 0.
Therefore λ1 = 6x1 and λ2 = 6x2. That is, if both x1 and x2 are positive, then both Eigen
values are positive, and f(X) is convex; or if both x 1 and x2 are negative, then both
Eigen values are negative, and f(X) is concave.
Stationary points: df/dx1 = 3x1² - 3 = 0 → x1 = ±1; df/dx2 = 3x2² - 12 = 0 → x2 = ±2,
giving the four stationary points (±1, ±2). Therefore,
(i) f(X) has a local minimum at (x1, x2) = (1, 2), equal to 1³ + 2³ - 3(1) - 12(2) + 20 =
2; fmin(X) = 2 at X = (1, 2).
(ii) f(X) has a local maximum at (x1, x2) = (-1, -2), equal to (-1)³ + (-2)³ - 3(-1) - 12(-2) + 20
= 38; fmax(X) = 38 at X = (-1, -2). At the points (1, -2) and (-1, 2), the function is neither
convex nor concave. They are saddle points.
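A short sketch classifying all four stationary points at once via the sign pattern of the Eigen values (assuming NumPy):

import numpy as np

f = lambda x1, x2: x1**3 + x2**3 - 3*x1 - 12*x2 + 20
for x1 in (1, -1):
    for x2 in (2, -2):
        eig = np.array([6 * x1, 6 * x2])         # Hessian is diag(6*x1, 6*x2)
        kind = ("local minimum" if (eig > 0).all()
                else "local maximum" if (eig < 0).all()
                else "saddle point")
        print((x1, x2), f(x1, x2), kind)
# (1, 2) 2 local minimum; (-1, -2) 38 local maximum; the other two are saddle points.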
Constrained Optimization
We shall discuss in this section the conditions under which a function of multiple
variables will have a local maximum or a local minimum, and those under which its
local optimum also happens to be its global optimum. Let us first consider a function
with equality constraints.
1. Function f(X) of n-Variables with a Single Equality Constraint
Maximize or Minimize f(X)
Subject to g(X) = 0
Note that f(X) and g(X) may or may not be linear.
We shall write down the Lagrangean of the function f(X), denoted by Lf(X, λ), and apply
the Lagrangean multiplier method:
Lf(X, λ) = f(X) - λ g(X), where λ is a Lagrangean multiplier.
When g(X) = 0, optimizing Lf(X, λ) is the same as optimizing f(X). The original problem of
constrained optimization is now transformed into an unconstrained optimization
problem (through the introduction of an additional variable, the Lagrangean
multiplier).
The (n + m) simultaneous equations are solved to get a solution, (X0, λ0). Let the
second partial derivatives be denoted by Lij = ∂²L/∂xi∂xj and the constraint derivatives
by gij = ∂gi/∂xj, evaluated at (X0, λ0); these form the bordered determinant |D|, with a
parameter p subtracted from each diagonal term Lii.
This is a polynomial in p of order (n - m), where n is the number of variables and m is the
number of equality constraints. If each root p of the equation |D| = 0 is
negative, the solution X0 is a local maximum. If each root is positive, then X0 is a local
minimum. If some roots are positive and some negative, X0 is neither a local maximum
nor a local minimum. Also, if all the roots are negative and independent of X, then X 0
is the global maximum. If all the roots are positive and independent of X, then X 0 is
the global minimum.
Example 4.5: Maximize f(X) = -x1² - x2², subject to g(X) = x1 + x2 - 4 = 0.
Solution: The Lagrangean is Lf(X, λ) = -x1² - x2² - λ(x1 + x2 - 4). Setting its partial
derivatives to zero gives x1 = x2 = 2 and λ = -4. The determinantal equation |D| = 0
reduces to 2μ + 4 = 0, giving μ = -2.
As the only root is negative, the stationary point X = (2, 2) is a local maximum of
f(X), and fmax(X) = -8.
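The same result follows from solving the Lagrangean conditions symbolically; a sketch assuming SymPy is available:

import sympy as sp

x1, x2, lam = sp.symbols("x1 x2 lam", real=True)
f = -x1**2 - x2**2
g = x1 + x2 - 4
L = f - lam * g                                   # the Lagrangean Lf(X, lam)
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
print(sol, f.subs(sol))                           # {x1: 2, x2: 2, lam: -4}  -8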
Kuhn-Tucker Conditions
The conditions mentioned above lead to the statement of Kuhn-Tucker conditions.
These conditions are necessary for a function f(X) to be a local maximum or a local
minimum. The conditions for a maximization problem are given below.
Maximize f(X)
Subject to gj(X) ≤ 0, j = 1, 2, ..., m.
The conditions are as follows:
∂f/∂xi - Σj λj ∂gj/∂xi = 0, i = 1, 2, ..., n
λj gj(X) = 0, j = 1, 2, ..., m
gj(X) ≤ 0, j = 1, 2, ..., m
λj ≥ 0, j = 1, 2, ..., m
In addition if f(X) is concave, and the constraints form a convex set, these conditions
are sufficient for a global maximum.
General Problem
The necessary and sufficient conditions for optimization of a function of multiple
variables subject to a set of constraints are discussed below.
A general problem may be one of maximization or minimization with equality
constraints, and inequality constraints of both ≥ and ≤ type.
Consider the problem:
Introduce variables sj² into the inequality constraints to make them equations.
Let S denote the vector with elements sj.
The Lagrangean is
Sufficiency Conditions for a Maximum
f(X) should be a concave function.
gi(X) should be concave, λi ≥ 0, for i = 1, 2, ..., j;
gi(X) should be linear, λi ≤ 0, for i = j+1, ..., k;
gi(X) is linear, λi unrestricted, for i = k+1, ..., m.
Note: For a maximum or a minimum, the feasible space or the solution space should
be a convex region. A constraint set gi(X) ≤ 0 defines a convex region, if gi(X) is a
convex function for all i. Similarly, a region defined by a constraint set gi(X) ≥ 0 is a
convex region, if gi(X) is a concave function for all i.
In practice, it is better to stick to one set of criteria, i.e. either for maximization or
minimization. We shall follow the criteria for maximization in the following examples
while testing the sufficiency criterion. For this purpose, we shall convert the given
problem to the following form:
Maximize f(X)
Subject to gi(X) ≤ 0
We shall reiterate here that a linear function is both convex and concave.
Example 4.6:
The Eigen values of the Hessian are λ1 = 2 and λ2 = 2, both being positive.
Thus f(X) is a convex function (strictly convex). Therefore the function
-f(X) is concave and can be maximized. First convert the problem to a form
Maximize f(X)
Subject to g(X) ≤0
The original problem is rewritten as:
(i) Assume λ2 = 0, s1 = 0; then x1 = 8/5, x2 = 6/5, and λ1 = 4/5 > 0, s2² = 3/5 > 0.
Here the conditions for a maximum are satisfied. No violations.
(ii) Assume λ1 = 0 and λ2 = 0.
Then the simultaneous equations give
x1 = x2 = 2; s1² = -2 (not possible),
s2² = -1 (not possible).
This is not a solution to the problem. Similarly,
(iii) Assume λ1 = 0 and s2 = 0. The equations to be solved are:
Note: In a clear case like this, when f(X) is strictly convex (i.e. -f(X) is strictly concave)
and the solution set is convex (i.e. the constraint set is a convex region bounded
by linear functions), there is a unique solution.
That is, only a particular combination of λ and s yields the optimum solution.
Thus, in a given trial in a problem such as Example 4.6, with two constraints:
If λ1 and λ2 are assumed to be zero, then s1² and s2² should both be positive.
If λ1 and s2 are assumed to be zero, then λ2 and s1² should both be positive.
If λ2 and s1 are assumed to be zero, then λ1 and s2² should both be positive.
If s1 and s2 are assumed to be zero, then λ1 and λ2 should both be positive.
The first trial, which satisfies these conditions, will be the optimal solution to the
problem, and the computations can stop there.
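A sketch of one such trial in SymPy, on an illustrative problem of the same form (the objective and constraints below are assumptions, not the text's Example 4.6): maximize f = -(x1-2)² - (x2-2)² subject to g1 = x1 + x2 - 2 ≤ 0 and g2 = x1 - 3 ≤ 0.

import sympy as sp

x1, x2, l1 = sp.symbols("x1 x2 l1", real=True)
f = -(x1 - 2)**2 - (x2 - 2)**2
g1, g2 = x1 + x2 - 2, x1 - 3

# Trial: assume g1 binding (s1 = 0) and lambda2 = 0 (g2 slack).
L = f - l1 * g1
sol = sp.solve([sp.diff(L, x1), sp.diff(L, x2), g1], [x1, x2, l1], dict=True)[0]
s2_sq = -g2.subs(sol)                    # slack of the non-binding constraint
print(sol, s2_sq)                        # {x1: 1, x2: 1, l1: 2}, s2**2 = 2
# l1 > 0 and s2**2 > 0: no condition is violated, so this trial is optimal (f = -2).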