ENGINEERING OPTIMIZATION
Winter 2022-23
Ponnambalam S.G., Professor (HAG)
School of Mechanical Engineering, VIT University
[email protected]
Introduction
• This topic deals with the various methods of solving the unconstrained minimization problem:

  Find X = (x₁, x₂, …, xₙ)ᵀ which minimizes f(X)
Classification of unconstrained minimization methods

Direct search methods:
• Random search method
• Grid search method
• Univariate method
• Pattern search methods
  – Powell's method
  – Hooke-Jeeves method
• Rosenbrock's method
• Simplex method

Indirect search (descent) methods:
• Steepest descent (Cauchy) method
• Fletcher-Reeves method
• Newton's method
• Marquardt method
• Quasi-Newton methods
Direct search methods
• They require only the objective function values, not the partial derivatives of the function, in finding the minimum, and hence are often called nongradient methods (a random-search sketch follows this slide).
• The direct search methods are also known as zeroth-order methods since they use zeroth-order derivatives of the function.
• These methods are most suitable for simple problems involving a relatively small number of variables.
• These methods are in general less efficient than the descent methods.
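As an illustration of a zeroth-order method, here is a minimal Python sketch of the random search method listed in the classification above; the bounds, iteration count, and seed are illustrative assumptions, not slide material.

```python
import random

def random_search(f, bounds, iters=1000, seed=0):
    """Minimize f using only objective function values (zeroth-order).

    bounds: one (low, high) pair per design variable.
    """
    rng = random.Random(seed)
    best_x = [rng.uniform(lo, hi) for lo, hi in bounds]
    best_f = f(best_x)
    for _ in range(iters):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:              # keep the best point seen so far
            best_x, best_f = x, fx
    return best_x, best_f
```

No derivatives appear anywhere in the loop, which is what makes the method "nongradient".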
Descent methods
• The descent techniques require, in addition to the function values, the first and in some cases the second derivatives of the objective function.
• Since more information about the function being minimized is used (through the use of derivatives), descent methods are generally more efficient than direct search techniques.
• The descent methods are also known as gradient methods.
• Among the gradient methods, those requiring only the first derivatives of the function are called first-order methods; those requiring both first and second derivatives are termed second-order methods (a second-order sketch follows this slide).
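To make the first-order/second-order distinction concrete, below is a minimal sketch of Newton's method (a second-order method from the classification above) for two variables; the gradient and Hessian callables and the 2×2 Cramer's-rule solve are illustrative assumptions.

```python
def newton_2d(grad, hess, x, iters=20, tol=1e-8):
    """Newton's method: uses both first derivatives (grad) and
    second derivatives (hess), hence a second-order method."""
    for _ in range(iters):
        g = grad(x)
        if (g[0] ** 2 + g[1] ** 2) ** 0.5 < tol:   # gradient ~ 0: stationary point
            break
        (a, b), (c, d) = hess(x)
        det = a * d - b * c
        # Newton step: solve H * s = -g (2x2 system via Cramer's rule)
        s = ((-d * g[0] + b * g[1]) / det,
             ( c * g[0] - a * g[1]) / det)
        x = (x[0] + s[0], x[1] + s[1])
    return x
```

For a quadratic such as f(X) = x₁² + 2x₂², the call newton_2d(lambda x: (2*x[0], 4*x[1]), lambda x: ((2, 0), (0, 4)), (3.0, 2.0)) reaches the minimum (0, 0) in a single step, which is why second-order methods pay for the extra derivative information with faster convergence.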
General approach
• All unconstrained minimization methods are
iterative in nature and hence they start from
an initial trial solution and proceed toward the
minimum point in a sequential manner.
• Xᵢ₊₁ = Xᵢ + λ*Sᵢ    (1)
• where Xᵢ is the starting point, Sᵢ is the search direction, λ* is the optimal step length, and Xᵢ₊₁ is the final point in iteration i (a sketch of this scheme follows).
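The scheme of Eq. (1) can be written as a short skeleton; the search_direction and optimal_step callables below are placeholders for whatever a particular method supplies (steepest descent, Newton's, etc.), and the tolerance is an assumed stopping rule.

```python
def iterate(f, x, search_direction, optimal_step, iters=50, tol=1e-6):
    """Generic scheme X_{i+1} = X_i + lambda* S_i from Eq. (1)."""
    for _ in range(iters):
        s = search_direction(f, x)       # S_i: chosen by the specific method
        lam = optimal_step(f, x, s)      # lambda*: 1-D minimization along S_i
        x_new = tuple(xi + lam * si for xi, si in zip(x, s))
        if max(abs(a - b) for a, b in zip(x_new, x)) < tol:
            return x_new                 # successive points nearly identical
        x = x_new
    return x
```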
[Figure: step length α along a search direction, from the start point (2, 1).]
Concept of Unidirectional Search
• With λ = 0.5, we get the new point as
  X₂ = X₁ + λS₁ = (2, 1)ᵀ + 0.5 (2, 5)ᵀ = (3.0, 3.5)ᵀ
• Substituting the optimal step length λ* = 2.103 in Eq. (1), the optimal point is X₁ + λ*S₁ = (6.207, 11.517)ᵀ, which is the same as that found using geometric properties (a line-search sketch for finding λ* follows this slide).
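Finding λ* is itself a one-variable minimization of f(X₁ + λS₁). Below is a sketch using golden-section search; the bracketing interval [0, 3] (chosen to cover the slide's λ* = 2.103) and the tolerance are assumptions.

```python
def line_search(f, x, s, lo=0.0, hi=3.0, tol=1e-5):
    """Minimize phi(lam) = f(x + lam * s) along direction s (golden section)."""
    phi = lambda lam: f([xi + lam * si for xi, si in zip(x, s)])
    inv_gr = (5 ** 0.5 - 1) / 2          # 1/golden-ratio ~ 0.618
    a, b = lo, hi
    c, d = b - inv_gr * (b - a), a + inv_gr * (b - a)
    while b - a > tol:
        if phi(c) < phi(d):              # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_gr * (b - a)
        else:                            # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_gr * (b - a)
    return (a + b) / 2                   # lambda*
```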
Concept of Unidirectional Search
• Successive unidirectional searches can be used to find the minimum point along a particular search direction.
• minimize f(X) = (x₁² + x₂ − 11)² + (x₁ + x₂² − 7)²
• Interval: 0 ≤ x₁, x₂ ≤ 5
• The figure in the next slide explains how successive unidirectional searches find the minimum point along a particular search direction.
• This is an example of Box's evolutionary optimization method.
• It is a simple technique that requires 2^N + 1 points, of which 2^N are the corner points of an N-dimensional hypercube centered on the other point (see the sketch after this list).
• The process continues by finding another hypercube around the best point.
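A hedged sketch of the evolutionary optimization step described above: evaluate the 2^N corner points of a hypercube against its center, recenter on the best point, and shrink the cube when the center already wins; the shrink factor, stopping size, and iteration cap are assumptions.

```python
from itertools import product

def box_evop(f, center, size, shrink=0.5, min_size=1e-4, max_iters=10000):
    """Box's evolutionary optimization sketch: 2**N corners + 1 center point."""
    center = tuple(center)
    for _ in range(max_iters):
        if size <= min_size:
            break
        # the 2**N corner points of an N-dimensional hypercube on `center`
        corners = [tuple(c + sign * size / 2 for c, sign in zip(center, signs))
                   for signs in product((-1, 1), repeat=len(center))]
        best = min(corners, key=f)
        if f(best) < f(center):
            center = best                # next hypercube around the best point
        else:
            size *= shrink               # center is already best: shrink the cube
    return center
```

For the slide's problem one might call box_evop(f, (2.5, 2.5), size=5.0); note this sketch does not clip the corner points to the interval 0 ≤ x₁, x₂ ≤ 5.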
Concept of Unidirectional Search
[Figure: contour plot of f(X) = (x₁² + x₂ − 11)² + (x₁ + x₂² − 7)², showing successive unidirectional searches toward the minimum along a particular search direction.]
Steepest Descent Method
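Here is a minimal sketch of the steepest descent (Cauchy) method, which takes Sᵢ = −∇f(Xᵢ) and reuses the line_search sketch above for λ*; the central-difference gradient and the normalization of the direction are illustrative assumptions.

```python
def gradient(f, x, h=1e-6):
    """Central-difference approximation of grad f (an assumption;
    analytic derivatives would normally be used)."""
    g = []
    for i in range(len(x)):
        xp = [v + (h if j == i else 0.0) for j, v in enumerate(x)]
        xm = [v - (h if j == i else 0.0) for j, v in enumerate(x)]
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def steepest_descent(f, x, iters=100, tol=1e-6):
    """Cauchy's method: search direction S_i = -grad f(X_i)."""
    for _ in range(iters):
        g = gradient(f, x)
        norm = sum(gi * gi for gi in g) ** 0.5
        if norm < tol:                        # gradient ~ 0: stop
            break
        s = [-gi / norm for gi in g]          # unit steepest-descent direction
        lam = line_search(f, x, s)            # lambda* via the earlier sketch
        x = [xi + lam * si for xi, si in zip(x, s)]
    return x
```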
Example
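As a usage sketch tying the pieces together, the steepest-descent sketch above can be applied to the slides' function from the start point (2, 1); the printed result is not quoted from the slides.

```python
def himmelblau(x):
    # f(X) = (x1^2 + x2 - 11)^2 + (x1 + x2^2 - 7)^2, from the earlier slide
    return (x[0] ** 2 + x[1] - 11) ** 2 + (x[0] + x[1] ** 2 - 7) ** 2

x_star = steepest_descent(himmelblau, [2.0, 1.0])   # start point (2, 1)
print(x_star)   # should approach a minimizer; (3, 2) is a known minimum
```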