
BMEE211L

ENGINEERING OPTIMIZATION
Winter 2022-23

Multivariable Nonlinear Programming:
Unconstrained Optimization Techniques (Search Methods)

Ponnambalam S.G.
Professor (HAG)
School of Mechanical Engineering, VIT University
[email protected]
Introduction
• This topic deals with the various methods of solving the unconstrained minimization problem:

  Find $X = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{pmatrix}$ which minimizes $f(X)$

• It is true that a practical design problem would rarely be unconstrained; still, a study of this class of problems is important for the following reasons:
  – The constraints do not have a significant influence in certain design problems.
  – Some of the powerful and robust methods of solving constrained minimization problems require the use of unconstrained minimization techniques.
  – The unconstrained minimization methods can be used to solve certain complex engineering analysis problems. For example, the displacement response (linear or nonlinear) of any structure under any specified load condition can be found by minimizing its potential energy.
Necessary and Sufficient Conditions
• A point $X^*$ will be a relative minimum of $f(X)$ if the necessary conditions

  $$\frac{\partial f}{\partial x_i}(X = X^*) = 0, \qquad i = 1, 2, \ldots, n$$

  are satisfied.
• The point $X^*$ is guaranteed to be a relative minimum if the Hessian matrix is positive definite, that is,

  $$H = \left[\frac{\partial^2 f}{\partial x_i \, \partial x_j}\right]_{X = X^*} = \text{positive definite}$$

• If the function is not differentiable, these conditions are not applicable for identifying the optimum point.
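As a minimal numerical sketch of these conditions (assuming NumPy; the objective and the candidate point below are illustrative stand-ins, not from the slides), the gradient can be tested against zero and the Hessian against positive definiteness through its eigenvalues:

```python
import numpy as np

def f(x):
    # Stand-in objective with known minimum at (10, 10)
    return (x[0] - 10)**2 + (x[1] - 10)**2

def grad(f, x, h=1e-6):
    # Central-difference approximation of the gradient
    g = np.zeros(len(x))
    for i in range(len(x)):
        e = np.zeros(len(x)); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def hessian(f, x, h=1e-4):
    # Central-difference approximation of the Hessian
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h**2)
    return H

x_star = np.array([10.0, 10.0])
print("necessary condition (grad ~ 0):",
      np.allclose(grad(f, x_star), 0, atol=1e-4))
print("sufficient condition (H positive definite):",
      np.all(np.linalg.eigvalsh(hessian(f, x_star)) > 0))
```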
Classical Optimization Techniques and Search Techniques
• Whenever we want to optimize a multivariable function, the efficiency of the optimization technique depends largely on the nature of the objective function.
• If the objective function is convex, any local optimum we find is guaranteed to be the global optimal solution.
• However, it is very difficult to check whether a function is convex or not.
• To check for convexity, we need to find the first-order and second-order derivatives and do some calculations with them.
• When the function is very complex, the first-order and second-order derivatives are very difficult to obtain.
• Moreover, if the function is not continuous (there is a discontinuity in the function) within the domain, then it is also difficult to obtain the extremum.
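A hedged SymPy sketch of such a convexity check (assuming SymPy is available; the objective is the quadratic reused in the later unidirectional-search slides): form the Hessian symbolically and inspect its eigenvalues, since a twice-differentiable function is convex exactly when its Hessian is positive semidefinite everywhere.

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
# Quadratic objective from the later slides; convex by inspection
f = (x1 - 10)**2 + (x2 - 10)**2

H = sp.hessian(f, (x1, x2))  # symbolic Hessian matrix
print(H)                     # Matrix([[2, 0], [0, 2]])

# Constant, positive eigenvalues -> positive definite Hessian -> f is convex
print(H.eigenvals())         # {2: 2}, i.e. eigenvalue 2 with multiplicity 2
```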
Classification of Unconstrained Minimization Methods
There are two distinct types of algorithms:
• Direct search methods use only objective function values to locate the minimum point.
• Gradient-based methods use the first- and/or second-order derivatives of the objective function to locate the minimum point.
Classification of Unconstrained Minimization Methods

Direct search methods:
• Random search method
• Grid search method
• Univariate method
• Pattern search methods
  – Powell's method
  – Hooke-Jeeves method
• Rosenbrock's method
• Simplex method

Indirect search (descent) methods:
• Steepest descent (Cauchy) method
• Fletcher-Reeves method
• Newton's method
• Marquardt method
• Quasi-Newton methods
Direct search methods
• They require only the objective function values, but not the partial derivatives of the function, in finding the minimum, and hence are often called nongradient methods.
• The direct search methods are also known as zeroth-order methods since they use only the zeroth-order derivatives (the values) of the function.
• These methods are most suitable for simple problems involving a relatively small number of variables.
• These methods are in general less efficient than the descent methods.
Descent methods
• The descent techniques require, in addition to the function values, the first and in some cases the second derivatives of the objective function.
• Since more information about the function being minimized is used (through the use of derivatives), descent methods are generally more efficient than direct search techniques.
• The descent methods are known as gradient methods.
• Among the gradient methods, those requiring only first derivatives of the function are called first-order methods; those requiring both first and second derivatives of the function are termed second-order methods.
General approach
• All unconstrained minimization methods are iterative in nature and hence start from an initial trial solution and proceed toward the minimum point in a sequential manner:

  $$X_{i+1} = X_i + \lambda^* S_i \qquad \ldots (1)$$

  where $X_i$ is the starting point of iteration $i$, $S_i$ is the search direction, $\lambda^*$ is the optimal step length, and $X_{i+1}$ is the final point of iteration $i$.
• It is important to note that all the unconstrained minimization methods
  i. require an initial point, and
  ii. differ from one another only in the method of generating the new point and in testing the new point for optimality.
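A skeleton of the scheme in equation (1) (a sketch only; the names `direction_rule` and `step_length` are illustrative placeholders for the two ingredients that distinguish one method from another):

```python
import numpy as np

def minimize(f, x0, direction_rule, step_length, tol=1e-6, max_iter=100):
    """Generic iterative scheme X_{i+1} = X_i + lambda* S_i (equation (1)).

    direction_rule(f, x) -> search direction S_i at the current point
    step_length(f, x, s) -> optimal step length lambda* along direction s
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = direction_rule(f, x)             # generate the new direction
        lam = step_length(f, x, s)           # unidirectional search for lambda*
        x_new = x + lam * s                  # equation (1)
        if np.linalg.norm(x_new - x) < tol:  # test the new point for optimality
            return x_new
        x = x_new
    return x
```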
Concept of Unidirectional Search
• Many multivariable optimization techniques use successive unidirectional search techniques to find the minimum point along a particular search direction.
• Contour plot for $f(X) = (x_1 - 10)^2 + (x_2 - 10)^2$:
  [Figure: contours of f; α is the step length; start point at (2, 1); minimum point at (10, 10).]
Concept of Unidirectional Search
• With $\alpha = 0.5$, we get the new point as

  $$X_2 = X_1 + 0.5\,S_1 = \begin{pmatrix} 2 \\ 1 \end{pmatrix} + 0.5 \begin{pmatrix} 2 \\ 5 \end{pmatrix} = \begin{pmatrix} 3 \\ 3.5 \end{pmatrix}$$

• The optimal point along this direction is at $\lambda^* = 2.103$ and is (6.207, 11.517).
• We could find this point by considering the right-angled triangle shown in dotted lines, i.e., by dropping a perpendicular from the minimum (10, 10) onto the search line.
• Substituting $\lambda^* = 2.103$, $X_1 = (2, 1)^T$ and $S_1 = (2, 5)^T$ into equation (1), we obtain $X^* = (6.207, 11.517)$, which is the same point as that found using the geometric properties.
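This optimal step length can also be recovered numerically. A sketch using SciPy's scalar minimizer (assuming SciPy; `minimize_scalar` performs the one-dimensional search along the line) reproduces $\lambda^* \approx 2.103$:

```python
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x: (x[0] - 10)**2 + (x[1] - 10)**2
x_1 = np.array([2.0, 1.0])  # start point
s_1 = np.array([2.0, 5.0])  # search direction

# Unidirectional search: minimize f along the line x_1 + lam * s_1
res = minimize_scalar(lambda lam: f(x_1 + lam * s_1))
print(res.x)               # ~2.103
print(x_1 + res.x * s_1)   # ~(6.207, 11.517)
```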
Concept of Unidirectional Search
• Successive unidirectional search techniques can be used to find the minimum point along a particular search direction.
• Minimize $f(X) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2$
• Interval: $0 \le x_1, x_2 \le 5$
• The figure on the next slide shows how successive unidirectional search techniques try to find the minimum point along a particular search direction.
• This is an example of Box's evolutionary optimization method.
• It is a simple technique that requires $2^N + 1$ points, of which $2^N$ are the corner points of an N-dimensional hypercube centered on the other point (see the sketch below).
• The process continues by finding another hypercube around the best point.
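A hedged sketch of Box's evolutionary optimization under this description (assuming NumPy; the variable bounds are omitted for brevity, and the shrinking rule is the usual halve-the-cube-when-the-centre-wins variant):

```python
import numpy as np
from itertools import product

def box_evolutionary(f, x0, delta, tol=1e-6):
    """Sketch of Box's method: evaluate f at the centre and at the 2^N
    corners of a hypercube, recentre on the best point, and shrink the
    hypercube whenever the centre is already the best."""
    x = np.asarray(x0, dtype=float)
    delta = np.asarray(delta, dtype=float)
    n = len(x)
    while np.max(delta) > tol:
        # The 2^N corner points of the hypercube centred at x
        corners = [x + np.array(signs) * delta / 2
                   for signs in product((-1.0, 1.0), repeat=n)]
        best = min([x] + corners, key=f)   # the 2^N + 1 candidate points
        if np.allclose(best, x):
            delta /= 2                     # centre is best: shrink the cube
        else:
            x = best                       # recentre on the best corner
    return x

himmelblau = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
print(box_evolutionary(himmelblau, x0=[1.0, 1.0], delta=[2.0, 2.0]))
# -> approximately (3, 2), a minimum of the function above
```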
Concept of Unidirectional Search
• Successive unidirectional search techniques to find the minimum point along a particular search direction.
• Contour plot for $f(X) = (x_1^2 + x_2 - 11)^2 + (x_1 + x_2^2 - 7)^2$:
  [Figure: contours of f showing the successive search steps.]
Descent Method

Steepest Descent Method

[The derivation of the method and the worked examples on the remaining slides appear only as figures.]
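Since those figures are not reproduced here, the following is a minimal sketch of the steepest descent (Cauchy) method as it is usually defined: take $S_i = -\nabla f(X_i)$ in equation (1) and find $\lambda^*$ by a unidirectional search (SciPy's scalar minimizer is used for that search, and the objective from the earlier slides is reused):

```python
import numpy as np
from scipy.optimize import minimize_scalar

def steepest_descent(f, grad_f, x0, tol=1e-6, max_iter=500):
    """Cauchy's method: search along the negative gradient, with the
    optimal step length found by a unidirectional search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        s = -grad_f(x)                    # steepest descent direction
        if np.linalg.norm(s) < tol:       # necessary condition: grad ~ 0
            break
        lam = minimize_scalar(lambda t: f(x + t * s)).x
        x = x + lam * s                   # equation (1)
    return x

himmelblau = lambda x: (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2
grad_h = lambda x: np.array([
    4 * x[0] * (x[0]**2 + x[1] - 11) + 2 * (x[0] + x[1]**2 - 7),
    2 * (x[0]**2 + x[1] - 11) + 4 * x[1] * (x[0] + x[1]**2 - 7),
])
print(steepest_descent(himmelblau, grad_h, x0=[0.0, 0.0]))
# -> one of the four minima of this function (near (3, 2) from this start)
```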
