
Constrained Optimization: Lagrange Multiplier
M S Prasad

This lecture note is based on the textbook and open literature for the course Engineering System Design
Optimization. It should be read in conjunction with classroom discussions.

Constrained Optimization: Lagrange Multiplier (LN-5)
Simple approach
This is an example of the generic constrained optimization problem:
Maximize f(x), subject to g(x) = b and x ∈ X.
Here f is to be maximized subject to constraints of two types. The constraint x ∈ X is a
regional constraint; for example, it might be x ≥ 0. The constraint g(x) = b is a functional
constraint. Sometimes the functional constraint is an inequality constraint, like g(x) ≤ b.
In that case we can always add a slack variable z ≥ 0 and rewrite it as the equality constraint g(x) + z = b, redefining the regional constraint as x ∈ X and z ≥ 0.
The Lagrangian method for equality constraints
Case I: The solution of a constrained optimization problem can often be found for the above
problem by using the so-called Lagrangian method. We define the Lagrangian as
L(x, λ) = f(x) + λ(b − g(x)),
and, with m constraints, L(x, λ) = f(x) + Σ(i=1..m) λi (bi − gi(x)).
Note
Assume x* = (x1*, x2*, ..., xn*) maximizes or minimizes f(x) subject to the constraints gi(x) = bi,
for i = 1, 2, ..., m.
Then either
(i) the gradient vectors ∇g1(x*), ∇g2(x*), ..., ∇gm(x*) are linearly dependent,
or
(ii) there exists a vector λ* = (λ1*, λ2*, ..., λm*) such that ∇L(x*, λ*) = 0.
That is,

∂L/∂xj = 0 for j = 1, ..., n

and

∂L/∂λi = 0 for i = 1, ..., m.
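As a small worked illustration (the problem data and the use of Python/sympy here are assumptions made for this note, not part of the original text), consider maximizing f(x1, x2) = x1·x2 subject to x1 + x2 = 10. Solving the stationarity conditions above gives x1* = x2* = 5 with λ* = 5:

```python
# Illustrative sketch: solve the Lagrange stationarity conditions symbolically.
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', real=True)

f = x1 * x2          # objective to maximize
g = x1 + x2          # functional constraint g(x) = b
b = 10

# Lagrangian L(x, lam) = f(x) + lam*(b - g(x))
L = f + lam * (b - g)

# dL/dx1 = 0, dL/dx2 = 0, dL/dlam = 0
stationarity = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(stationarity, [x1, x2, lam], dict=True))
# expected output: [{lam: 5, x1: 5, x2: 5}]
```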

Case II: Equality and Inequality Constraints

Problem definition: Maximize f(x) subject to g1(x) = b1, ..., gm(x) = bm and
h1(x) ≤ d1, ..., hp(x) ≤ dp. We have both equality and inequality constraints.
We know that we can always transform an inequality constraint into an equality constraint by
introducing a slack variable (a non-negative parameter), i.e. hj(x) ≤ dj is equivalent to
hj(x) + sj = dj where sj ≥ 0. By this process we have introduced one more variable and one
more constraint, sj ≥ 0. To avoid that extra constraint we use the squared slack sj², i.e. hj(x) + sj² = dj.
The sj are determined by requiring that the inequality multipliers satisfy μj* ≥ 0 for j = 1, ..., p,
and by equating the partial derivative of the Lagrangian with respect to each sj to zero.
We form the Lagrangian as

L(x, λ, μ, s) = f(x) + Σ(i=1..m) λi (bi − gi(x)) + Σ(j=1..p) μj (dj − hj(x) − sj²)

We calculate, for j = 1, ..., p,

∂L/∂sj = −2 μj sj

and set

∂L/∂x = 0

∂L/∂λi = 0, which recovers gi(x*) = bi,

∂L/∂μj = 0, which means hj(x*) + sj² = dj.

Check that sj² ≥ 0, which is equivalent to saying that hj(x*) ≤ dj (feasibility).

And
∂L/∂sj = 0 => 2 μj* sj = 0, known as the complementarity condition: either the multiplier is zero or the constraint is active (sj = 0).

and
check μj* ≥ 0, that is, the non-negativity test.
These conditions are known as the Karush-Kuhn-Tucker (KKT) necessary conditions; for convex
problems they are also sufficient.
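The following is a minimal sketch of these conditions on a toy problem (the problem data, symbol names, and the use of Python/sympy are assumptions made for illustration): maximize f(x) = −(x − 2)² subject to x ≤ 1. The KKT solution is x* = 1 with multiplier μ* = 2 and slack s = 0.

```python
# Illustrative sketch: KKT conditions with a squared slack variable.
import sympy as sp

x, mu, s = sp.symbols('x mu s', real=True)

f = -(x - 2)**2                  # objective to maximize
h, d = x, 1                      # inequality constraint h(x) <= d

# Lagrangian with squared slack: L = f + mu*(d - h - s**2)
L = f + mu * (d - h - s**2)

# Stationarity in x, complementarity (dL/ds = 0) and feasibility (dL/dmu = 0)
eqs = [sp.diff(L, x), sp.diff(L, s), sp.diff(L, mu)]
candidates = sp.solve(eqs, [x, mu, s], dict=True)

# Keep only candidates passing the non-negativity test mu >= 0
print([sol for sol in candidates if sol[mu] >= 0])
# expected: x = 1, mu = 2, s = 0
```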
In general, the Lagrangian is the sum of the original objective function and terms that involve
the functional constraints and the Lagrange multipliers λ. The geometric meaning of the Lagrange
function is that at the optimum the gradient of the cost function and the gradients of the constraint functions are
along the same line and are proportional to each other by λ.
Since ∇L(x*, λ*) = 0,
we have ∇f(x*) − λ* ∇g(x*) = 0.
Think of λ as a knob that we can turn to adjust the value of x.
Imagine turning this knob, where x(λ) denotes a maximizer of L(x, λ) for fixed λ, until we find a value of λ, say λ = λ̄, such that the functional
constraint is satisfied, i.e., g(x(λ̄)) = b. Let x̄ = x(λ̄). Our claim is that x̄ solves P. This is the so-called Lagrangian Sufficiency Theorem.
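A numerical sketch of this "knob" idea (the quadratic test problem and the use of Python/scipy are assumptions made for illustration, not part of the note): maximize f(x) = −(x1 − 1)² − (x2 − 2)² subject to x1 + x2 = 1. Turning λ until g(x(λ)) = b gives λ̄ = 2 and x̄ = (0, 1).

```python
# Illustrative sketch of the Lagrangian sufficiency idea: tune lambda until
# the maximizer of L(x, lambda) satisfies the functional constraint g(x) = b.
import numpy as np
from scipy.optimize import minimize, brentq

b = 1.0
f = lambda x: -(x[0] - 1)**2 - (x[1] - 2)**2
g = lambda x: x[0] + x[1]

def x_of_lam(lam):
    # maximize L(x, lam) = f(x) + lam*(b - g(x)) over x (unconstrained)
    return minimize(lambda x: -(f(x) + lam * (b - g(x))), np.zeros(2)).x

# turn the "knob": find lambda such that g(x(lambda)) = b
lam_bar = brentq(lambda lam: g(x_of_lam(lam)) - b, 0.0, 10.0)
x_bar = x_of_lam(lam_bar)
print(lam_bar, x_bar)   # expect lambda ≈ 2 and x ≈ (0, 1)
```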
Lagrange multipliers for quadratic problems
The most general form of a constrained optimization problem is formulated as:

minimize f(x)
subject to g(x) ≤ 0, h(x) = 0

where x has dimension n×1, f(x) is the objective function to be minimized, g(x) is a set of
inequality constraints, and h(x) is a set of equality constraints. Inequality constraints of the form
g(x) ≥ 0 can be rewritten as −g(x) ≤ 0. To solve numerically, the following standard forms are chosen:

f(x) = ½ xᵀA x + bᵀx + c
g(x) = Dx − e ≤ 0
h(x) = Cx − d = 0

where A is an n×n matrix, b is an n×1 vector, c is a scalar, D is a p×n matrix, e is a p×1 vector, C
is an m×n matrix, and d is an m×1 vector. This general representation is commonly used for quadratic
minimization problems.

Considering the equality constraints h(x) = Cx − d = 0, we can form a Lagrange function as

L(x, λ) = ½ xᵀA x + bᵀx + c + λᵀ(Cx − d)

∂L/∂x = A x + b + Cᵀλ = 0

∂L/∂λ = Cx − d = 0

This can be written in matrix form and solved easily as a linear system:

[ A   Cᵀ ] [ x ]   [ −b ]
[ C   0  ] [ λ ] = [  d ]
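A minimal numerical sketch of this block system (the matrices below are made-up example data and the use of Python/numpy is an assumption, not something given in the note):

```python
# Illustrative sketch: solve the equality-constrained quadratic problem
#   minimize 0.5*x'Ax + b'x + c   subject to   Cx = d
# via the block linear system shown above.
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 3.0]])     # n×n, symmetric positive definite (example data)
b = np.array([1.0, -2.0])      # n×1
C = np.array([[1.0, 1.0]])     # m×n equality constraint matrix
d = np.array([1.0])            # m×1

n, m = A.shape[0], C.shape[0]

K = np.block([[A, C.T],
              [C, np.zeros((m, m))]])
rhs = np.concatenate([-b, d])

sol = np.linalg.solve(K, rhs)
x_star, lam_star = sol[:n], sol[n:]

print("x* =", x_star, " lambda* =", lam_star)
print("constraint residual:", C @ x_star - d)   # should be ≈ 0
```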

Penalty Function Method


The penalty function method can solve optimization problems with both equality and inequality
constraints:
minimize f(x)
subject to
gj(x) ≤ 0, j = 1, ..., p
hi(x) = 0, i = 1, ..., m
The penalty function method applies an unconstrained optimization algorithm to a penalty-function
formulation of the constrained problem, as follows:

minimize P(x, ρ) = f(x) + ρ [ Σ(j=1..p) (max(0, gj(x)))² + Σ(i=1..m) (hi(x))² ]

where ρ > 0 is the penalty parameter.
Now the problem is formulated as an unconstrained optimization. However, we cannot solve it directly
with a single very large ρ, because large values of ρ cause ill-conditioning and inefficiency when deriving a solution with high
accuracy. We can use the sequential unconstrained minimization technique (SUMT) to incrementally
increase the penalty parameter as we refine the solution.
1. Choose small tolerances ε1, ε2 (e.g. 10⁻⁵), a starting point x0 = 0, and an initial penalty
parameter ρ0 = 1.

2. Perform unconstrained minimization of the penalty function P(x, ρk) starting from x0 (fminsearch
in MATLAB can be used; a Python sketch follows these steps) to obtain xk*.

3. Check convergence: if |P(xk*, ρk) − f(xk*)| ≤ ε1 and ‖xk* − x(k−1)*‖ ≤ ε2, then stop; xk* is
taken as the solution.

4. Else set ρ(k+1) = 10 ρk, x0 = xk*, and return to step 2.
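Below is a short Python sketch of this SUMT loop (scipy's Nelder-Mead stands in for MATLAB's fminsearch; the test problem, tolerances, and helper names are illustrative assumptions): minimize f(x) = (x1 − 2)² + (x2 − 1)² subject to x1 + x2 − 2 ≤ 0, whose exact solution is (1.5, 0.5).

```python
# Illustrative SUMT sketch with a quadratic penalty for one inequality constraint.
import numpy as np
from scipy.optimize import minimize

f = lambda x: (x[0] - 2)**2 + (x[1] - 1)**2
g = lambda x: x[0] + x[1] - 2.0                  # constraint g(x) <= 0

def P(x, rho):
    # penalty function: objective plus rho times the squared constraint violation
    return f(x) + rho * max(0.0, g(x))**2

eps1, eps2 = 1e-5, 1e-5          # step 1: tolerances
x0, rho = np.zeros(2), 1.0       # step 1: starting point and initial penalty parameter
x_prev = x0

for k in range(30):
    # step 2: unconstrained minimization of P(x, rho_k); Nelder-Mead plays the role of fminsearch
    x_star = minimize(lambda x: P(x, rho), x0, method='Nelder-Mead',
                      options={'xatol': 1e-8, 'fatol': 1e-8}).x
    # step 3: stop when the penalty term is negligible and the iterate has settled
    if abs(P(x_star, rho) - f(x_star)) <= eps1 and np.linalg.norm(x_star - x_prev) <= eps2:
        break
    rho, x0, x_prev = 10.0 * rho, x_star, x_star  # step 4

print("x* ≈", x_star, " (exact constrained minimum at (1.5, 0.5))")
```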


Summary


---------------------------------------------------------------------------------------------------------
