CHE 502: PROCESS OPTIMIZATION
LESSON 5 – OPTIMIZATION OF UNCONSTRAINED FUNCTIONS: ONE-DIMENSIONAL SEARCH
INSTRUCTOR: Ajiboye Saheeb Osunleke, PhD
Department of Chemical Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria
OPTIMIZATION OF UNCONSTRAINED FUNCTIONS: ONE-DIMENSIONAL SEARCH

A good technique for the optimization of a function of just one variable is essential for two reasons:
1. Some unconstrained problems inherently involve only one variable.
2. Techniques for unconstrained and constrained optimization problems generally involve repeated use of a one-dimensional search.
Prior to the advent of high-speed computers, methods of optimization were limited primarily to analytical methods, that is, methods based on calculus.
Modern computers have made possible iterative, or numerical, methods that search for an extremum by using function and sometimes derivative values of f(x) at a sequence of trial points x1, x2, ….
As an example, consider the following function of a single variable x (see Figure 5.1): f(x) = x² − 2x + 1.
An analytical method of finding x* at the minimum of f(x) is to set the gradient of f(x) equal to zero, df/dx = 2x − 2 = 0, and solve the resulting equation to get x* = 1; x* can be tested for the sufficient condition (here d²f/dx² = 2 > 0) to ascertain that it is indeed a minimum.
To carry out an iterative method of numerical minimization, start with some initial value of x, say x0 = 0, and calculate successive values of f(x) = x² − 2x + 1, and possibly df/dx, for other values of x, the values being selected according to whatever strategy is to be employed.
A number of different strategies are discussed in subsequent sections of this lesson.
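The two routes just described can be contrasted in a short sketch (illustrative code, not part of the lesson; the fixed step size and the simple gradient-descent strategy are assumptions chosen only to show one possible iterative strategy):

```python
# Illustrative sketch: analytical vs. numerical minimization of
# f(x) = x^2 - 2x + 1, whose exact minimum is at x* = 1.

def f(x):
    return x**2 - 2*x + 1

def dfdx(x):
    return 2*x - 2          # gradient of f

# Analytical route: df/dx = 2x - 2 = 0 gives x* = 1,
# and d2f/dx2 = 2 > 0 confirms a minimum.
x_analytical = 1.0

# Numerical route: a crude fixed-step descent starting from x0 = 0
# (one of many possible strategies; step size 0.1 is an assumption).
x = 0.0
for _ in range(100):
    x = x - 0.1 * dfdx(x)   # move against the gradient

# x is now very close to the analytical answer x* = 1
```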
… where the superscript k designates the iteration number and ε1 and ε2 are the prespecified tolerances or criteria of precision.
If f(x) has a simple closed-form expression, analytical methods yield an exact solution, a closed-form expression for the optimal x, x*.
If f(x) is more complex, for example, if it requires several steps to compute, then a numerical approach must be used.
Software for nonlinear optimization is now so widely available that the numerical approach is almost always used.
For example, the "Solver" in the Microsoft Excel spreadsheet solves linear and nonlinear optimization problems.
Analytical methods are usually difficult to apply for nonlinear objective functions with more than one variable.
For example, suppose that the nonlinear function f(x) = f(x1, x2, …, xn) is to be minimized; the necessary conditions require setting each partial derivative ∂f/∂xi, i = 1, …, n, equal to zero.
Each of the partial derivatives, when equated to zero, may well yield a nonlinear equation.
Hence, the minimization of f(x) is converted into a problem of solving a set of nonlinear equations in n variables, a problem that can be just as difficult to solve as the original problem.
Thus, most engineers prefer to attack the minimization problem directly by one of the numerical methods described in Lesson 6, rather than to use an indirect method.
NUMERICAL METHODS FOR OPTIMIZING A FUNCTION OF ONE VARIABLE

One method of optimization for a function of a single variable is to set up as fine a grid as you wish for the values of x and calculate the function value for every point on the grid.
An approximation to the optimum is the best value of f(x).
Although this is not a very efficient method for finding the optimum, it can yield acceptable results.
On the other hand, if we were to utilize this approach in optimizing a multivariable function of more than, say, five variables, the computer time required would be prohibitive.
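A minimal sketch of this grid (exhaustive) search, using the earlier one-variable example f(x) = x² − 2x + 1 (the interval and grid size here are assumptions for illustration):

```python
# Brute-force grid search for the minimum of f(x) = x^2 - 2x + 1 on [-2, 4].

def f(x):
    return x**2 - 2*x + 1

a, b = -2.0, 4.0
n = 1000                                    # number of grid intervals
grid = [a + i * (b - a) / n for i in range(n + 1)]
x_best = min(grid, key=f)                   # grid point with the best f(x)
```

With n variables and m grid points per variable, the cost grows as mⁿ function evaluations, which is why the text warns against this approach beyond a few variables.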
In selecting a search method to minimize or maximize a function of a single variable, the most important concerns are software availability, ease of use, and efficiency.
Sometimes the function may take a long time to compute, and then efficiency becomes more important.
For example, in some problems, a simulation may be required to generate the function values, such as in determining the optimal number of trays in a distillation column.
In such circumstances, efficiency is a primary concern.
BISECTION METHOD

The bisection method is an incremental search method in which the interval is always divided in half. If a function changes sign over an interval [a, b], the function value at the midpoint is evaluated.
The location of the root is then determined as lying within the subinterval where the sign change occurs.
The subinterval then becomes the interval for the next iteration. The process is repeated until the root is known to the required precision.
Fig. 5.1: Graphical depiction of Bisection Method
BISECTION METHOD – Problem Statement

The underlying mathematics for this method is contained in the Intermediate Value Theorem (IVT):
If the function f is continuous on [a, b] and k is any number lying between f(a) and f(b), then there is a point c somewhere in (a, b) such that f(c) = k.
For the equation f(x) = 0, we use k = 0; then the IVT tells us that if f is continuous on [a, b] and f(a) and f(b) have different signs, then there is a solution of f(x) = 0 between a and b.
However, there might be more than one.
BISECTION METHOD – Computational Steps
BISECTION METHOD – Interval Halving to Bracket the Root
BISECTION METHOD – Procedure for Solving f(x) = 0

Given the function f defined on [a, b] satisfying f(a)f(b) < 0.
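The steps of the procedure itself did not survive extraction; a standard statement, consistent with the worked example that follows, is (reconstructed, not verbatim from the slides):

```latex
\text{For } n = 1, 2, \dots \text{ until a stopping criterion is met:}
\qquad c_n = \tfrac{1}{2}(a_n + b_n), \qquad
\begin{cases}
a_{n+1} = a_n,\ b_{n+1} = c_n, & \text{if } f(a_n)\,f(c_n) < 0,\\
a_{n+1} = c_n,\ b_{n+1} = b_n, & \text{otherwise,}
\end{cases}
```

so that a sign change, and hence a root, is retained in every interval [a_n, b_n].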
BISECTION METHOD – Stopping Criteria for the Algorithm
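The criteria shown on this slide were lost in extraction; the tests commonly used with bisection, including the relative error test named in the example below, have the form (reconstructed, not verbatim):

```latex
|c_n - c_{n-1}| < \varepsilon_1 \quad \text{(absolute error)}, \qquad
\frac{|c_n - c_{n-1}|}{|c_n|} < \varepsilon_2 \quad \text{(relative error)},
```

together with the guaranteed bracketing bound |p − c_n| ≤ (b − a)/2ⁿ, which can be used to fix the number of iterations in advance.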
BISECTION METHOD – Solving f(x) = 0

Solving f(x) = x³ + 4x² − 10 = 0:
Show that f(x) = x³ + 4x² − 10 = 0 has a root in [1, 2] and use the Bisection method to determine an approximation to the root that is accurate to at least within 10⁻⁴.

Relative Error Test
BISECTION METHOD – Solution of f(x) = 0

Solution
Because f(1) = −5 and f(2) = 14, the Intermediate Value Theorem ensures that this continuous function has a root in [1, 2].
For the first iteration of the Bisection method we use the fact that at the midpoint of [1, 2] we have f(1.5) = 2.375 > 0.
This indicates that we should select the interval [1, 1.5] for our second iteration.
Then we find that f(1.25) = −1.796875, so our new interval becomes [1.25, 1.5], whose midpoint is 1.375.
After 13 iterations, p13 = 1.365112305 approximates the root p with an error
|p − p13| < |b14 − a14| = |1.3652344 − 1.3651123| = 0.0001221.
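The iteration above can be reproduced with a short sketch (illustrative code; the function, interval, and iteration count are those of the example):

```python
# Bisection applied to f(x) = x^3 + 4x^2 - 10 on [1, 2], as in the example.

def f(x):
    return x**3 + 4*x**2 - 10

a, b = 1.0, 2.0
assert f(a) * f(b) < 0              # IVT: a sign change brackets a root

for _ in range(13):                 # 13 interval halvings, as in the text
    c = 0.5 * (a + b)               # midpoint of the current interval
    if f(a) * f(c) < 0:             # sign change in [a, c]: keep left half
        b = c
    else:                           # sign change in [c, b]: keep right half
        a = c

p13 = c                             # p13 ~ 1.365112305, matching the slide
```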
NEWTON'S METHOD

Recall that the first-order necessary condition for a local minimum is f′(x) = 0.
Consequently, you can solve the equation f′(x) = 0 by Newton's method to get
x(k+1) = x(k) − f′(x(k))/f″(x(k)),
making sure on each stage k that f(x(k+1)) < f(x(k)) for a minimum.
Newton's method is equivalent to using a quadratic model for the function in minimization.
The advantages of Newton's method are:
1. The procedure is locally quadratically convergent to the extremum as long as f″(x) ≠ 0.
2. For a quadratic function, the minimum is obtained in one iteration.
The disadvantages of the method are:
3. You have to calculate both f′(x) and f″(x).
4. If f″(x) → 0, the method converges slowly.
Example: Minimizing a More Difficult Function

In this example we minimize the nonquadratic function f(x) = x⁴ − x + 1, illustrated in Figure E5.2a, using Newton's method as described above.
For a starting point of x = 3, minimize f(x) until the change in x is less than 10⁻⁷.

Solution
Additional iterations yield the following
values for x:
As you can see from the third and fourth columns in the table, the rate of convergence of Newton's method is quadratic near the solution.
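The iterates can be regenerated with a brief sketch of the Newton recursion for this example (illustrative code, using f(x) = x⁴ − x + 1, x0 = 3, and the 10⁻⁷ tolerance stated in the problem):

```python
# Newton's method for minimizing f(x) = x**4 - x + 1:
# solve f'(x) = 4x^3 - 1 = 0 via x_{k+1} = x_k - f'(x_k)/f''(x_k).

def df(x):
    return 4*x**3 - 1       # first derivative of f

def d2f(x):
    return 12*x**2          # second derivative of f

x = 3.0                     # starting point from the example
while True:
    x_new = x - df(x) / d2f(x)
    if abs(x_new - x) < 1e-7:   # stop when the change in x is small
        x = x_new
        break
    x = x_new

# x converges to (1/4)**(1/3) ~ 0.62996, the minimizer of f
```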
GOLDEN-SECTION SEARCH

The Golden Section Search method is used to find the maximum or minimum of a unimodal function. (A unimodal function contains only one minimum or maximum on the interval [a, b].)
To make the discussion of the method simpler, let us assume that we are trying to find the maximum of a function.
The previously introduced Equal Interval Search method is somewhat inefficient, because if the search interval is small it can take a long time to find the maximum of a function.
As shown in Figure 5.2, choose three points xl, x1 and xu (xl < x1 < xu) along the x-axis, with corresponding values of the function f(xl), f(x1), and f(xu), respectively.
Since f(x1) > f(xl) and f(x1) > f(xu), the maximum must lie between xl and xu.
Now a fourth point denoted by x2 is chosen to lie in the larger of the two intervals [xl, x1] and [x1, xu].
Assuming that the interval [xl, x1] is larger than [x1, xu], we would choose [xl, x1] as the interval in which x2 is chosen.
If f(x2) > f(x1), then the new three points are xl < x2 < x1; else if f(x2) < f(x1), then the new three points are x2 < x1 < xu.
This process is continued until the distance between the outer points is sufficiently small.
Figure 5.2
How are the intermediate points in the Golden Section Search determined?

We choose the first intermediate point x1 so as to equalize the ratio of the lengths as shown in Eq. (5.1), where a and b are the distances shown in Figure 5.3:
a/b = b/(a + b)     (5.1)
Note that a + b is equal to the distance between the lower and upper boundary points xl and xu.
The corresponding distances for the second intermediate point x2, chosen symmetrically from the upper boundary, are shown in Figure 5.4 (Eq. 5.2).

Figure 5.3
Figure 5.4
Does the Golden Section Search have anything to do with the Golden Ratio?

The ratios in Equations (5.1) and (5.2) are equal and have a special value known as the Golden Ratio.
The Golden Ratio has been used since ancient times in various fields such as architecture, design, art and engineering.
To determine the value of the Golden Ratio, let R = a/b; then Eq. (5.1) can be rewritten as
R² + R − 1 = 0     (5.3)
Using the quadratic formula, the positive root of Eq. (5.3) is
R = (√5 − 1)/2 ≈ 0.61803     (5.4)
In other words, the intermediate points x1 and x2 are chosen such that the ratio of the distances from these points to the boundaries of the interval is the Golden Ratio.
Figure 5.5
What happens after choosing the first two intermediate points?

Next we determine a new and smaller interval where the maximum value of the function lies.
We know that the new interval is either [xl, x2, x1] or [x2, x1, xu].
To determine which of these intervals will be considered in the next iteration, the function is evaluated at the intermediate points x2 and x1.
If f(x2) > f(x1), then the new region of interest will be [xl, x2, x1]; else if f(x2) < f(x1), the new region of interest will be [x2, x1, xu].
In Figure 5.5, we see that f(x2) > f(x1); therefore our new region of interest is [xl, x2, x1].
We should point out that the boundaries of the new smaller region are now determined by xl and x1, and we already have one of the intermediate points, namely x2, conveniently located at a point where the ratio of the distance to the boundaries is the Golden Ratio.
All that is left to do is to determine the location of the second intermediate point.
This process of determining a new smaller region of interest and a new intermediate point will continue until the distance between the boundary points is sufficiently small.

The Golden Section Search Algorithm
The following algorithm can be used to determine the maximum of a function f(x).
Initialization: Select an initial interval [xl, xu] known to contain the maximum.
Step 1:
Determine two intermediate points x1 and x2 such that
x1 = xl + d,  x2 = xu − d,  where d = ((√5 − 1)/2)(xu − xl)
Step 2:
Evaluate f(x1) and f(x2). If f(x1) > f(x2), then determine the new xl, x1, x2 and xu as shown in Equation set (5.5).
xl = x2,  x2 = x1,  x1 = xl + ((√5 − 1)/2)(xu − xl),  xu = xu     (5.5)
Note that the only new calculation is done to determine the new x1.
If f(x1) < f(x2), then determine the new xl, x1, x2 and xu as shown in Equation set (5.6).
Note that the only new calculation is done to determine the new x2.
xu = x1,  x1 = x2,  x2 = xu − ((√5 − 1)/2)(xu − xl),  xl = xl     (5.6)
Step 3:
If xu − xl < ε (a sufficiently small number), then the maximum occurs at (xu + xl)/2 and stop iterating; else go to Step 2.
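Steps 1 to 3 can be collected into a single routine (a sketch, assuming the standard golden-section update that matches the numbers in the worked example below; the function and interval are supplied by the caller):

```python
import math

R = (math.sqrt(5) - 1) / 2             # Golden Ratio factor from Eq. (5.4), ~0.618

def golden_section_max(f, xl, xu, eps):
    # Step 1: two intermediate points
    x1 = xl + R * (xu - xl)
    x2 = xu - R * (xu - xl)
    f1, f2 = f(x1), f(x2)
    # Step 2 updates, repeated until the Step 3 test passes
    while xu - xl >= eps:
        if f1 > f2:                    # keep [x2, xu]
            xl, x2, f2 = x2, x1, f1
            x1 = xl + R * (xu - xl)    # the only new function evaluation
            f1 = f(x1)
        else:                          # keep [xl, x1]
            xu, x1, f1 = x1, x2, f2
            x2 = xu - R * (xu - xl)    # the only new function evaluation
            f2 = f(x2)
    return (xl + xu) / 2               # Step 3: maximum taken at the midpoint
```

For example, golden_section_max(lambda x: -(x - 2.0)**2, 0.0, 5.0, 1e-6) returns a value very close to 2.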
Example
Consider Figure 5.6 below. The cross-sectional area A of a gutter with equal base and edge length of 2 is given by
A = 4 sin θ (1 + cos θ)
Find the angle θ which maximizes the cross-sectional area of the gutter. Using an initial interval of [0, π/2], find the solution after 2 iterations. Use an initial ε = 0.05.
Figure 5.6
Solution
The function to be maximized is f(θ) = 4 sin θ (1 + cos θ).

Iteration 1:
Given the values for the boundaries of xl = 0 and xu = π/2, we can calculate the initial intermediate points as follows:
x1 = xl + ((√5 − 1)/2)(xu − xl) = 0 + 0.61803(1.5708 − 0) = 0.9708
x2 = xu − ((√5 − 1)/2)(xu − xl) = 1.5708 − 0.9708 = 0.6000
The function is evaluated at the intermediate points as f(0.9708) = 5.1654 and f(0.6000) = 4.1227.
Since f(x1) > f(x2), we eliminate the region to the left of x2 and update the lower boundary point as xl = x2.
The upper boundary point xu remains unchanged.
The second intermediate point x2 is updated to assume the value of x1, and finally the first intermediate point x1 is re-calculated as follows:
x1 = xl + ((√5 − 1)/2)(xu − xl) = 0.6000 + 0.61803(1.5708 − 0.6000) = 1.2000
To check the stopping criterion, the difference between xu and xl is calculated to be
xu − xl = 1.5708 − 0.6000 = 0.9708, which is greater than ε = 0.05.
The process is repeated in the second iteration.

Iteration 2:
The values for the boundary and intermediate points used in this iteration were calculated in the previous iteration: xl = 0.6000, x2 = 0.9708, x1 = 1.2000, xu = 1.5708.
Again the function is evaluated at the intermediate points, as f(x1) = f(1.2000) = 5.0791 and f(x2) = f(0.9708) = 5.1654.
Since f(x1) < f(x2), the opposite of the case seen in the first iteration, we eliminate the region to the right of x1 and update the upper boundary point as xu = x1.
The lower boundary point xl remains unchanged.
The first intermediate point x1 is updated to assume the value of x2, and finally the second intermediate point x2 is recalculated as follows:
x2 = xu − ((√5 − 1)/2)(xu − xl) = 1.2000 − 0.61803(1.2000 − 0.6000) = 0.8292
To check the stopping criterion, the difference between xu and xl is calculated to be
xu − xl = 1.2000 − 0.6000 = 0.6000
which is greater than ε = 0.05. At the end of the second iteration the solution is
(xu + xl)/2 = (1.2000 + 0.6000)/2 = 0.9000
Therefore, the maximum area occurs when θ = 0.9 radians or 51.6°.
The iterations continue until the stopping criterion is met. Summary results of all the iterations are shown in Table 5.1.
Note that at the end of the 9th iteration, xu − xl < ε = 0.05, which causes the search to stop.
The optimal value is calculated as the average of the upper and lower boundary points.
(xu + xl)/2 = 1.0416 radians, which is about 59.68°. The area of the gutter at this angle is f(1.0416) = 5.1960.
The theoretical optimal solution to the problem occurs at exactly 60°, which is 1.0472 radians, with an area of 5.1962.
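Running the same algorithm to convergence reproduces the result above (a sketch; the area function is the one reconstructed from the slide's numerical values, with ε = 0.05 as in the example):

```python
import math

def area(theta):
    # A = 4 sin(theta) (1 + cos(theta)), the gutter cross-section
    return 4 * math.sin(theta) * (1 + math.cos(theta))

R = (math.sqrt(5) - 1) / 2                # Golden Ratio factor, ~0.618
xl, xu = 0.0, math.pi / 2                 # initial interval [0, pi/2]
x1 = xl + R * (xu - xl)
x2 = xu - R * (xu - xl)
f1, f2 = area(x1), area(x2)

while xu - xl >= 0.05:                    # stopping criterion, eps = 0.05
    if f1 > f2:                           # keep [x2, xu]
        xl, x2, f2 = x2, x1, f1
        x1 = xl + R * (xu - xl)
        f1 = area(x1)
    else:                                 # keep [xl, x1]
        xu, x1, f1 = x1, x2, f2
        x2 = xu - R * (xu - xl)
        f2 = area(x2)

theta = (xl + xu) / 2                     # ~1.0416 rad, close to pi/3
```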