
NLP Lecture Note

This document discusses the solution of general nonlinear programming problems. It provides examples of constrained nonlinear programming problems and their solutions. It also discusses the necessary optimality conditions (KKT conditions) that must be satisfied for a point to be a locally optimal solution. These include feasibility of constraints, convexity of objective and constraint functions, complementary slackness, and stationarity of the Lagrangian. The document also provides examples to demonstrate checking if a point satisfies the optimality conditions to be a local solution.

Uploaded by Vinay Dutta

Contents

1 Solution of General Nonlinear Programming Problem
  1.0.1 Solution of CP
  1.1 Example
  1.2 Exercise
Chapter 1

Solution of General Nonlinear Programming Problem

The general form of an unconstrained optimization problem is

(UP): min_{x ∈ R^n} f(x),  f : R^n → R

The general form of a constrained optimization problem is

(CP): min f(x)

subject to gi(x) ≤ 0, i = 1, 2, ..., m

hj(x) = 0, j = 1, 2, ..., k

x ∈ R^n, f, gi, hj : R^n → R

If all f, gi and hj are linear functions then (CP) is a linear programming problem. If at least one of them is a nonlinear function then (CP) is a nonlinear programming problem. If f and all gi are convex functions and all hj are affine then (CP) is a convex programming problem.

Note 1.1. The definiteness of a symmetric real matrix A = (aij)n×n is determined as follows. Suppose the rank of A is r and the signature of A is s, and let λi denote the eigenvalues of A. The definiteness of the quadratic form x^T Ax is the same as the definiteness of the matrix A.


1. A is positive definite ≡ {x^T Ax > 0, ∀x ≠ 0} ≡ {s = n} ≡ {λi > 0, ∀i}.
   A is positive definite ⇒ all principal minors are > 0.
   All leading principal minors are > 0 ⇒ A is positive definite.

2. A is positive semi-definite ≡ {x^T Ax ≥ 0, ∀x ∈ R^n} ≡ {s = r} ≡ {all principal minors are ≥ 0} ≡ {λi ≥ 0, ∀i}.

3. A is negative definite ≡ {x^T Ax < 0, ∀x ≠ 0} ≡ {s = −n} ≡ {all principal minors of even order are > 0 and all principal minors of odd order are < 0} ≡ {λi < 0, ∀i}.

4. A is negative semi-definite ≡ {x^T Ax ≤ 0, ∀x ∈ R^n} ≡ {s = −r} ≡ {all principal minors of even order are ≥ 0 and all principal minors of odd order are ≤ 0} ≡ {λi ≤ 0, ∀i}.

5. A is indefinite if none of the above holds ≡ {|s| < r}.

6. A is positive definite iff A is positive semi-definite and nonsingular.
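The eigenvalue tests in items 1-5 are easy to check numerically. Below is a minimal sketch; the helper name `classify_definiteness` and the tolerance are our own choices, not part of the notes:

```python
import numpy as np

def classify_definiteness(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues
    (mirrors items 1-5 of Note 1.1; tol absorbs floating-point noise)."""
    lam = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix, ascending
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "positive semi-definite"
    if np.all(lam < -tol):
        return "negative definite"
    if np.all(lam <= tol):
        return "negative semi-definite"
    return "indefinite"
```

For example, a diagonal matrix with entries 2 and 3 is classified as positive definite, while a matrix with eigenvalues of mixed sign comes out indefinite.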

Note 1.2. A twice differentiable function f : R^n → R is strictly convex on a set S ⊆ R^n if ∇2 f(x) is positive definite for all x ∈ S, and convex iff ∇2 f(x) is positive semidefinite for all x ∈ S. f is strictly concave if ∇2 f(x) is negative definite for all x ∈ S.

Theorem 1.0.1. If ∇x f(x) = 0 and ∇2 f(x) is positive definite, then x is a strict local minimum of (UP); conversely, if x is a local minimum of (UP), then ∇x f(x) = 0 and ∇2 f(x) is positive semidefinite.
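As an illustration of Theorem 1.0.1, both conditions can be verified for a simple function of our own choosing, f(x, y) = x^2 + 3y^2 (not an example from the notes):

```python
import numpy as np

# f(x, y) = x^2 + 3y^2 -- a simple convex test function (our choice)
def grad(v):
    x, y = v
    return np.array([2 * x, 6 * y])

def hessian(v):
    # Hessian of f is constant for this quadratic
    return np.array([[2.0, 0.0], [0.0, 6.0]])

x_star = np.array([0.0, 0.0])
assert np.allclose(grad(x_star), 0)                      # first-order condition
assert np.all(np.linalg.eigvalsh(hessian(x_star)) > 0)   # Hessian positive definite
# Both conditions hold, so x_star is a strict local minimum.
```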

1.0.1 Solution of CP

Example 1: Find the maximum volume of a rectangular parallelepiped whose surface area is at most 10 and at least 6 units.
Since the surface area is 2(xy + yz + zx), the solution can be found by solving the optimization problem:

max xyz subject to 3 ≤ xy + yz + zx ≤ 5, x, y, z > 0

This is a constrained nonlinear programming problem.
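For Example 1, symmetry suggests the optimum is a cube with the upper surface bound active, x = y = z = sqrt(5/3). The sketch below is our own numerical check, not part of the notes: it compares this candidate volume against randomly sampled feasible boxes.

```python
import numpy as np

# Candidate from symmetry: a cube with xy + yz + zx = 5 active
s = np.sqrt(5.0 / 3.0)
v_cube = s ** 3
assert abs(3 * s * s - 5.0) < 1e-12  # the surface constraint holds with equality

# Heuristic check (not a proof): no random feasible box does better
rng = np.random.default_rng(1)
best = 0.0
for _ in range(20000):
    x, y, z = rng.uniform(0.05, 2.5, size=3)
    if 3.0 <= x * y + y * z + z * x <= 5.0:
        best = max(best, x * y * z)
assert v_cube >= best - 1e-9
```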

Example 2: Shortest path problem



Find the minimum distance from (1, 2) to the curve x2 + x − y = 1.


Solution of this problem is the solution of the optimization problem :

min (x − 1)2 + (y − 2)2 , subject to x2 + x − y = 1

This is a constrained nonlinear programming problem.
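Because the constraint in Example 2 is an equality, y can be eliminated and the squared distance minimized over x alone. A brute-force grid search (a numerical sketch of our own, not the multiplier method of these notes) looks like:

```python
import numpy as np

# On the curve, y = x^2 + x - 1, so the squared distance
# to (1, 2) becomes a function of x alone.
xs = np.linspace(-3.0, 3.0, 200001)
ys = xs ** 2 + xs - 1.0
dist2 = (xs - 1.0) ** 2 + (ys - 2.0) ** 2
i = int(np.argmin(dist2))
x_opt, y_opt = xs[i], ys[i]
```

The minimizer satisfies the constraint by construction, and its squared distance is at most that of any feasible point, e.g. (1, 1).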

Example 3: Quadratic Programming Problem:


A general quadratic programming problem is:

min (1/2) x^T Q x  subject to  Ax = b

where x ∈ Rn , Q is a positive definite matrix of order n, A is a matrix of order m × n,


b is a vector of order m.

Example 4: Least Mean Square Problem

Consider a system of linear equations Ax = b, where A = (aij)m×n is a matrix of order m × n, Rank(A) = m, x ∈ R^n, b ∈ R^m. That is, find (x1, x2, ..., xn) so that

a11 x1 + a12 x2 + ... + a1j xj + ... + a1n xn = b1

a21 x1 + a22 x2 + ... + a2j xj + ... + a2n xn = b2

....

ai1 x1 + ai2 x2 + ... + aij xj + ... + ain xn = bi

....

am1 x1 + am2 x2 + ... + amj xj + ... + amn xn = bm

Solution of this problem can be found analytically by solving the optimization prob-
lem:

min_{x ∈ R^n} ||Ax − b||_2^2

This is equivalent to

min_{x ∈ R^n} Σ_{i=1}^{m} ( Σ_{j=1}^{n} aij xj − bi )^2

This is an unconstrained quadratic programming problem.
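When A has full column rank, the minimizer solves the normal equations A^T A x = A^T b. A small numerical sketch with made-up data (our own illustration, assuming here a tall A rather than the wide one in the text):

```python
import numpy as np

# Overdetermined system Ax = b with random data (our own example)
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 3))
b = rng.standard_normal(6)

# Closed-form solution of min ||Ax - b||_2^2 via the normal equations
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Library least-squares solver gives the same answer
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)
```

At the minimizer the residual Ax − b is orthogonal to the column space of A, which is exactly what the normal equations state.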

General structure of a constrained optimization problem is

min f (x)

subject to g1 (x) ≤ 0

g2 (x) ≤ 0

.....

gm (x) ≤ 0

h1 (x) = 0

h2 (x) = 0

.......

hk (x) = 0

f, gi, hj : R^n → R, i = 1, 2, ..., m; j = 1, 2, ..., k.

Construct the Lagrange function for (CP) with dual vectors λ = (λ1, λ2, ..., λm)^T and µ = (µ1, µ2, ..., µk)^T, as

L(x, λ, µ) = f(x) + λ^T g(x) + µ^T h(x)
           = f(x) + Σ_{i=1}^{m} λi gi(x) + Σ_{j=1}^{k} µj hj(x)    (1.0.1)

∇x L(x, λ, µ) = ∇x f(x) + Σ_{i=1}^{m} λi ∇x gi(x) + Σ_{j=1}^{k} µj ∇x hj(x)

Theorem 1.0.2. Suppose the set of gradients of the active constraints at x, {∇x gi(x) : gi(x) = 0} ∪ {∇x hj(x)}, is linearly independent. If x is a local minimum of (CP), then x satisfies the KKT (Karush-Kuhn-Tucker) optimality conditions below; conversely, if f and all gi are convex, all hj are affine, and x satisfies the KKT conditions, then x is a minimum of (CP). The KKT conditions are:

∇x L(x, λ, µ) = 0

gi(x) ≤ 0, i = 1, 2, ..., m

hj(x) = 0, j = 1, 2, ..., k

λi gi(x) = 0 ∀ i

λi ≥ 0, µj ∈ R, ∀ i, j
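As an illustration, the KKT conditions can be checked directly for a toy problem of our own (not one from the notes): min x^2 + y^2 subject to x + y = 1, whose minimizer is (1/2, 1/2) with multiplier µ = −1.

```python
import numpy as np

# Toy problem: min x^2 + y^2  s.t.  x + y - 1 = 0 (our own example)
x = np.array([0.5, 0.5])
mu = -1.0

grad_f = 2 * x                  # gradient of the objective at x
grad_h = np.array([1.0, 1.0])   # gradient of the equality constraint

# Stationarity of the Lagrangian: grad f + mu * grad h = 0
assert np.allclose(grad_f + mu * grad_h, 0)
# Feasibility: h(x) = x + y - 1 = 0
assert abs(x.sum() - 1.0) < 1e-12
# No inequality constraints, so complementary slackness is vacuous here.
```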

1.1 Example
Example 1. Write all necessary and sufficient conditions for the existence of a local
optimal solution of the following problem at (1, 1) and verify if these are satisfied or
not.
min x^3 y^5 − 3x^2 + 2y  s.t.  3x + 2y^2 ≤ 6, x^2 + y ≤ 2, 3x − 2y = 1

Here f(x, y) = x^3 y^5 − 3x^2 + 2y, g1(x, y) = 3x + 2y^2 − 6, g2(x, y) = x^2 + y − 2, h(x, y) = 3x − 2y − 1.
The Lagrange function is
L(x, y; λ1, λ2; µ) = x^3 y^5 − 3x^2 + 2y + λ1(3x + 2y^2 − 6) + λ2(x^2 + y − 2) + µ(3x − 2y − 1).
Optimality Conditions:

1. Feasibility condition: (1, 1) satisfies the feasibility conditions: g1(1, 1) = −1 ≤ 0, g2(1, 1) = 0 ≤ 0, h(1, 1) = 0.

2. Convexity condition:

• ∇2 f(1, 1), with entries fxx = 0, fxy = 15, fyy = 20, is indefinite (its determinant is −225 < 0), hence not positive semidefinite, so f is not a convex function in the neighborhood of (1, 1).

• ∇2 g1 (1, 1) is a positive semidefinite matrix, hence convex.

• ∇2 g2 (1, 1) is a positive semidefinite matrix, hence convex.

• h is a linear function, hence this is a convex function.

Hence this is not a convex programming problem.



3. Dual restriction: λ1 ≥ 0, λ2 ≥ 0, µ ∈ R.

4. Complementary conditions: g1(1, 1) = −1 < 0, so λ1 g1(1, 1) = 0 forces λ1 = 0. g2(1, 1) = 0, so λ2 g2(1, 1) = 0 holds for every λ2, i.e. λ2 may be nonzero.

5. Regularity condition: only the gradients of the active constraints are required to be linearly independent. The active set at (1, 1) is {g2, h}, and {∇g2(1, 1), ∇h(1, 1)} = {(2, 1), (3, −2)} is linearly independent, so the regularity condition is satisfied.

6. Normal condition: ∇L(1, 1; λ1, λ2; µ) = 0. With ∇f(1, 1) = (−3, 7), ∇g1(1, 1) = (3, 4), ∇g2(1, 1) = (2, 1) and ∇h(1, 1) = (3, −2), this is

−3 + 3λ1 + 2λ2 + 3µ = 0

7 + 4λ1 + λ2 − 2µ = 0

Since λ1 = 0, the solution of this system is λ2 = −15/7, µ = 17/7. But λ2 < 0, so the dual restriction is violated.

Since regularity holds and yet the KKT conditions fail at (1, 1), the necessary optimality conditions are not satisfied, so (1, 1) is not a local optimal solution.
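The multiplier computation at (1, 1) reduces, once complementary slackness gives λ1 = 0 (because g1(1, 1) = −1 < 0), to a 2 × 2 linear system in (λ2, µ), which can be checked numerically:

```python
import numpy as np

# Gradients at (1, 1) for Example 1, computed from the problem data:
#   grad f = (3x^2 y^5 - 6x, 5x^3 y^4 + 2) = (-3, 7)
#   grad g2 = (2x, 1) = (2, 1),  grad h = (3, -2),  lambda1 = 0 (g1 inactive)
# Stationarity grad f + lambda2 * grad g2 + mu * grad h = 0 gives:
#   2*lambda2 + 3*mu = 3
#     lambda2 - 2*mu = -7
M = np.array([[2.0, 3.0], [1.0, -2.0]])
rhs = np.array([3.0, -7.0])
lam2, mu = np.linalg.solve(M, rhs)
# lam2 comes out negative, so dual feasibility (lambda2 >= 0) fails at (1, 1).
```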

1.2 Exercise
1. Consider the following nonlinear optimization problems.
(i) Verify both necessary and sufficient optimality conditions at (1, 1, 1) for (P1) and at (1, −1, 0) for (P2), respectively.
(ii) Verify whether (P3) is a convex quadratic programming problem or not.

(P1): Minimize 3x1^2 − 2x1x2x3 + x2^3 x3

Subject to 3x1^2 + x2x3 ≥ 4

2x2 − 3x3^2 ≤ 6

−3x1 + 2x2x3^2 = −1

2x1 − 3x2^2 + 4x1x3 = 3



(P2): Maximize 3x1^2 − 2x1x2x3 + x2^3 x3

Subject to 3x1^2 + x2x3 ≥ 3

2x2 − 3x3^2 ≤ 6, x1 ≥ 0

(P3): Minimize x1^2 + x1x2 + 6x2^2 − 2x2 + 8x2

Subject to x1 + 2x2 ≤ 4

2x1 + x2 ≤ 5

x1, x2 ≥ 0

2. Derive KKT optimality conditions for

M inimize 7x1 − 6x2 + 4x3

Subject to 3x1^2 + x2x3 ≥ 4

x1^2 + 2x2 + 3x3^2 = 1

x1 + 5x2 − 3x3 = 6
