Geometric Programming
8.1 INTRODUCTION
Geometric programming is a relatively new method of solving a class of nonlinear
programming problems. It was developed by Duffin, Peterson, and Zener [8.1]. It is
used to minimize functions that are in the form of posynomials subject to constraints of
the same type. It differs from other optimization techniques in the emphasis it places on
the relative magnitudes of the terms of the objective function rather than on the variables.
Instead of finding optimal values of the design variables first, geometric programming
first finds the optimal value of the objective function. This feature is especially advan-
tageous in situations where the optimal value of the objective function may be all that
is of interest. In such cases, calculation of the optimum design vectors can be omitted.
Another advantage of geometric programming is that it often reduces a complicated
optimization problem to one involving a set of simultaneous linear algebraic equations.
The major disadvantage of the method is that it requires the objective function and
the constraints in the form of posynomials. We will first see the general form of a
posynomial.
8.2 POSYNOMIAL
In an engineering design situation, frequently the objective function (e.g., the total cost)
f (X) is given by the sum of several component costs Ui (X) as
$$f(\mathbf{X}) = U_1 + U_2 + \cdots + U_N \tag{8.1}$$
In many cases, the component costs Ui can be expressed as power functions of the
type
$$U_i = c_i\, x_1^{a_{1i}} x_2^{a_{2i}} \cdots x_n^{a_{ni}} \tag{8.2}$$
where the coefficients ci are positive constants, the exponents aij are real constants
(positive, zero, or negative), and the design parameters x1, x2, . . . , xn are taken to be
positive variables. Functions like f, because of the positive coefficients and variables
and the real exponents, are called posynomials. For example,

$$f(x_1, x_2, x_3) = 6 x_1 x_2^{-3} + 4 x_1^{1/2} x_2 x_3 + 10 x_3^{-2}$$

is a posynomial, whereas a function such as $6 x_1 x_2^{-3} - 4 x_3$ is not, because the
negative coefficient of its second term violates the definition.
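Because a posynomial is fully specified by its positive coefficients and its matrix of real exponents, it is easy to evaluate numerically. The following sketch is an illustrative aside; the array representation and names are chosen here and are not taken from the text:

```python
import numpy as np

def posynomial(x, c, a):
    """f(X) = sum_j c[j] * prod_i x[i]**a[i, j], for positive x."""
    terms = c * np.prod(x[:, None] ** a, axis=0)   # term values U_j(X), Eq. (8.2)
    return terms.sum()                             # f(X), Eq. (8.1)

# Example: the posynomial quoted above, evaluated at x = (2, 1, 4).
c = np.array([6.0, 4.0, 10.0])
a = np.array([[1.0, 0.5, 0.0],     # exponents of x1 in the three terms
              [-3.0, 1.0, 0.0],    # exponents of x2
              [0.0, 1.0, -2.0]])   # exponents of x3
print(posynomial(np.array([2.0, 1.0, 4.0]), c, a))
```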
8.4 SOLUTION USING DIFFERENTIAL CALCULUS

Consider the unconstrained minimization of the posynomial f(X) of Eq. (8.1). The necessary conditions for the minimum of f are obtained by setting its first partial derivatives to zero:

$$\frac{\partial f}{\partial x_k} = \sum_{j=1}^{N} \left( c_j\, x_1^{a_{1j}} x_2^{a_{2j}} \cdots x_{k-1}^{a_{k-1,j}}\, a_{kj}\, x_k^{a_{kj}-1}\, x_{k+1}^{a_{k+1,j}} \cdots x_n^{a_{nj}} \right) = 0, \qquad k = 1, 2, \ldots, n \tag{8.4}$$
Multiplying Eq. (8.4) by xk, these conditions can be rewritten as

$$x_k \frac{\partial f}{\partial x_k} = \sum_{j=1}^{N} a_{kj}\, U_j(\mathbf{X}) = 0, \qquad k = 1, 2, \ldots, n \tag{8.5}$$
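The step from Eq. (8.4) to Eq. (8.5) is easy to verify numerically: multiplying the kth derivative by xk restores the full power of xk in every term, turning each summand back into akj Uj(X). The sketch below is an illustrative aside (names chosen here) that checks the identity with a central finite difference on the sample posynomial of Section 8.2, restated so the snippet runs on its own:

```python
import numpy as np

c = np.array([6.0, 4.0, 10.0])         # same sample posynomial as in Section 8.2
a = np.array([[1.0, 0.5, 0.0],
              [-3.0, 1.0, 0.0],
              [0.0, 1.0, -2.0]])
x = np.array([1.3, 0.7, 2.1])          # an arbitrary positive test point

def f(x):
    return (c * np.prod(x[:, None] ** a, axis=0)).sum()

U = c * np.prod(x[:, None] ** a, axis=0)    # term values U_j(X)
h = 1e-6
for k in range(len(x)):
    xp, xm = x.copy(), x.copy()
    xp[k] += h
    xm[k] -= h
    lhs = x[k] * (f(xp) - f(xm)) / (2 * h)  # x_k * df/dx_k via central difference
    rhs = (a[k] * U).sum()                  # sum_j a_kj U_j(X), as in Eq. (8.5)
    print(k + 1, lhs, rhs)                  # the two values agree up to finite-difference error
```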
To find the minimum of f, we have to solve the n equations given by Eqs. (8.4)
simultaneously. To ensure that the point X* corresponds to the minimum of f (and not
to a maximum or some other stationary point), the sufficiency condition must be satisfied.
This condition states that the Hessian matrix of f, evaluated at X*,

$$\mathbf{J}_{\mathbf{X}^*} = \left[ \frac{\partial^2 f}{\partial x_k\, \partial x_l} \right]_{\mathbf{X}^*}$$

must be positive definite. We will consider this condition at a later stage. Since the vector
X* satisfies Eqs. (8.5), we have
$$\sum_{j=1}^{N} a_{kj}\, U_j(\mathbf{X}^*) = 0, \qquad k = 1, 2, \ldots, n \tag{8.6}$$
Dividing Eq. (8.6) by the minimum value of the objective function, f*, we obtain

$$\sum_{j=1}^{N} \Delta_j^*\, a_{kj} = 0, \qquad k = 1, 2, \ldots, n \tag{8.7}$$

where the quantities

$$\Delta_j^* = \frac{U_j(\mathbf{X}^*)}{f^*} = \frac{U_j^*}{f^*} \tag{8.8}$$

denote the relative contribution of the jth term to the optimal objective function. From
Eq. (8.8), we obtain
$$\sum_{j=1}^{N} \Delta_j^* = \Delta_1^* + \Delta_2^* + \cdots + \Delta_N^* = \frac{1}{f^*}\left( U_1^* + U_2^* + \cdots + U_N^* \right) = 1 \tag{8.9}$$
Equations (8.7) are called the orthogonality conditions and Eq. (8.9) is called the
normality condition. To obtain the minimum value of the objective function f*, the
following procedure can be adopted. Consider
$$f^* = (f^*)^1 = (f^*)^{\sum_{j=1}^{N} \Delta_j^*} = (f^*)^{\Delta_1^*} (f^*)^{\Delta_2^*} \cdots (f^*)^{\Delta_N^*} \tag{8.10}$$
Since
$$f^* = \frac{U_1^*}{\Delta_1^*} = \frac{U_2^*}{\Delta_2^*} = \cdots = \frac{U_N^*}{\Delta_N^*} \tag{8.11}$$
from Eq. (8.8), Eq. (8.10) can be rewritten as
$$f^* = \left( \frac{U_1^*}{\Delta_1^*} \right)^{\Delta_1^*} \left( \frac{U_2^*}{\Delta_2^*} \right)^{\Delta_2^*} \cdots \left( \frac{U_N^*}{\Delta_N^*} \right)^{\Delta_N^*} \tag{8.12}$$
By substituting the relation

$$U_j^* = c_j \prod_{i=1}^{n} (x_i^*)^{a_{ij}}, \qquad j = 1, 2, \ldots, N$$

Eq. (8.12) can be expressed as

$$f^* = \prod_{j=1}^{N} \left( \frac{c_j}{\Delta_j^*} \right)^{\Delta_j^*} \prod_{j=1}^{N} \prod_{i=1}^{n} (x_i^*)^{a_{ij}\Delta_j^*}
= \prod_{j=1}^{N} \left( \frac{c_j}{\Delta_j^*} \right)^{\Delta_j^*} \prod_{i=1}^{n} (x_i^*)^{\sum_{j=1}^{N} a_{ij}\Delta_j^*}
= \prod_{j=1}^{N} \left( \frac{c_j}{\Delta_j^*} \right)^{\Delta_j^*} \tag{8.13}$$

since

$$\sum_{j=1}^{N} a_{ij}\,\Delta_j^* = 0 \qquad \text{for any } i$$

from Eq. (8.7).
Thus the optimal objective function f* can be found from Eq. (8.13) once the weights
$\Delta_j^*$ are determined. To determine $\Delta_j^*$ (j = 1, 2, . . . , N), Eqs. (8.7) and (8.9) can be used.
It can be seen that these are n + 1 linear equations in the N unknowns. If N = n + 1, there
will be as many linear simultaneous equations as there are unknowns, and we can find a
unique solution; such problems are said to have a zero degree of difficulty, the degree of
difficulty being N − n − 1.
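In that case the whole computation reduces to one linear solve followed by Eq. (8.13). The sketch below is a minimal illustration under that zero-degree-of-difficulty assumption; the function and variable names are chosen here and are not from the text:

```python
import numpy as np

def gp_weights_and_optimum(a, c):
    """a[i, j]: exponent of x_i in term j (n x N); c[j]: positive coefficient of term j."""
    n, N = a.shape
    assert N == n + 1, "this sketch handles only zero degree of difficulty"
    # Stack the n orthogonality conditions (8.7) on the normality condition (8.9).
    M = np.vstack([a, np.ones(N)])
    rhs = np.append(np.zeros(n), 1.0)
    delta = np.linalg.solve(M, rhs)            # unique weights Delta_j*
    f_star = np.prod((c / delta) ** delta)     # Eq. (8.13)
    return delta, f_star
```

Note that f* is obtained before any design variable is known, which is exactly the feature of geometric programming emphasized in Section 8.1.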
Sufficiency Condition. We can see that the weights $\Delta_j^*$ are found by solving Eqs. (8.7)
and (8.9), which in turn are obtained by using the necessary conditions only. We can show
that these conditions are also sufficient.

To determine the optimum design variables, Eqs. (8.8) and (8.13) give

$$U_j^* = c_j (x_1^*)^{a_{1j}} (x_2^*)^{a_{2j}} \cdots (x_n^*)^{a_{nj}} = \Delta_j^* f^*, \qquad j = 1, 2, \ldots, N \tag{8.14}$$

The simultaneous solution of these equations will yield the desired quantities xi* (i =
1, 2, . . . , n). It can be seen that Eqs. (8.14) are nonlinear in the variables x1*, x2*, . . . , xn*,
and hence their direct simultaneous solution is not easy. To simplify the simultaneous
solution of Eqs. (8.14), we rewrite them as
$$\frac{\Delta_j^* f^*}{c_j} = (x_1^*)^{a_{1j}} (x_2^*)^{a_{2j}} \cdots (x_n^*)^{a_{nj}}, \qquad j = 1, 2, \ldots, N \tag{8.15}$$
By taking logarithms on both sides of Eqs. (8.15), we obtain

$$\ln \frac{\Delta_j^* f^*}{c_j} = a_{1j} \ln x_1^* + a_{2j} \ln x_2^* + \cdots + a_{nj} \ln x_n^*, \qquad j = 1, 2, \ldots, N \tag{8.16}$$
By letting
$$w_i = \ln x_i^*, \qquad i = 1, 2, \ldots, n \tag{8.17}$$
Eqs. (8.16) can be written as
$$\begin{aligned}
a_{11} w_1 + a_{21} w_2 + \cdots + a_{n1} w_n &= \ln \frac{f^* \Delta_1^*}{c_1} \\
a_{12} w_1 + a_{22} w_2 + \cdots + a_{n2} w_n &= \ln \frac{f^* \Delta_2^*}{c_2} \\
&\;\;\vdots \\
a_{1N} w_1 + a_{2N} w_2 + \cdots + a_{nN} w_n &= \ln \frac{f^* \Delta_N^*}{c_N}
\end{aligned} \tag{8.18}$$
These equations, in the case of problems with a zero degree of difficulty, give a unique
solution for w1, w2, . . . , wn. Once the wi are found, the desired solution can be obtained
from Eq. (8.17) as

$$x_i^* = e^{w_i}, \qquad i = 1, 2, \ldots, n$$
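This recovery step can likewise be written as one linear solve. In the companion sketch below (names again chosen here for illustration), the coefficient matrix of Eqs. (8.18) is the transpose of the exponent matrix and the right-hand sides are ln(f* Δj*/cj); since the N = n + 1 equations are consistent at the optimum, a least-squares solve returns their exact solution, and exponentiating w inverts Eq. (8.17):

```python
import numpy as np

def gp_design_vector(a, c, delta, f_star):
    """Solve Eqs. (8.18), a.T @ w = ln(f_star * delta / c), and return x* = exp(w)."""
    rhs = np.log(f_star * delta / c)                 # right-hand sides of Eqs. (8.18)
    w, *_ = np.linalg.lstsq(a.T, rhs, rcond=None)    # w_i = ln x_i*, Eq. (8.17)
    return np.exp(w)
```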
Example 8.1 It has been decided to shift grain from a warehouse to a factory in an
open rectangular box of length x1 meters, width x2 meters, and height x3 meters. The
bottom, sides, and ends of the box cost, respectively, $80, $10, and $20/m². It costs
$1 for each round trip of the box. Assuming that the box will have no salvage value,
find the minimum cost of transporting 80 m³ of grain.
SOLUTION Since the box carries x1 x2 x3 m³ of grain per trip, transporting 80 m³ requires
80/(x1 x2 x3) round trips at $1 each. Adding the costs of the bottom (area x1 x2 at $80/m²),
the two sides (each of area x1 x3 at $10/m²), and the two ends (each of area x2 x3 at
$20/m²), the total cost becomes

$$f(\mathbf{X}) = \frac{80}{x_1 x_2 x_3} + 80 x_1 x_2 + 40 x_2 x_3 + 20 x_1 x_3 \tag{E_1}$$

where x1, x2, and x3 indicate the dimensions of the box, as shown in Fig. 8.1. By
comparing Eq. (E1) with the general posynomial of Eq. (8.1), we obtain N = 4,

$$c_1 = 80, \qquad c_2 = 80, \qquad c_3 = 40, \qquad c_4 = 20$$

and the exponents

$$\begin{aligned}
a_{11} &= -1, & a_{21} &= -1, & a_{31} &= -1 \\
a_{12} &= 1, & a_{22} &= 1, & a_{32} &= 0 \\
a_{13} &= 0, & a_{23} &= 1, & a_{33} &= 1 \\
a_{14} &= 1, & a_{24} &= 0, & a_{34} &= 1
\end{aligned}$$
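As an illustrative cross-check using the gp_weights_and_optimum and gp_design_vector sketches from Section 8.4: solving Eqs. (8.7) and (8.9) by hand for this data gives Δ* = (2/5, 1/5, 1/5, 1/5), so Eq. (8.13) yields f* = $200, and Eqs. (8.18) then give x1* = 1 m, x2* = 0.5 m, and x3* = 2 m. The same values are reproduced in a few lines:

```python
import numpy as np

# Exponent matrix a[i, j] and coefficients c[j] read off from Eq. (E1).
a = np.array([[-1.0, 1.0, 0.0, 1.0],    # exponents of x1 in U1, ..., U4
              [-1.0, 1.0, 1.0, 0.0],    # exponents of x2
              [-1.0, 0.0, 1.0, 1.0]])   # exponents of x3
c = np.array([80.0, 80.0, 40.0, 20.0])

delta, f_star = gp_weights_and_optimum(a, c)
x_star = gp_design_vector(a, c, delta, f_star)
print(delta)     # [0.4 0.2 0.2 0.2]
print(f_star)    # 200.0 -> minimum total cost of $200
print(x_star)    # [1.  0.5 2. ] -> a 1 m x 0.5 m x 2 m box
```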