Modern Optimization Theory:
Concave Programming
1. Preliminaries
We will present below the elements of “modern optimization theory” as formulated by
Kuhn and Tucker, and a number of authors who have followed their general approach.
Modern constrained maximization theory is concerned with the following problem:
Maximize $f(x)$
subject to $g^j(x) \ge 0$ for $j = 1, 2, \ldots, m$,     (P)
and $x \in X$,
where
– $X$ is a non-empty subset of $\mathbb{R}^n$, and
– $f, g^j$ ($j = 1, 2, \ldots, m$) are functions from $X$ to $\mathbb{R}$.
– Constraint Set:
$C = \{x \in X : g^j(x) \ge 0 \text{ for } j = 1, 2, \ldots, m\}.$
– A point $\hat{x} \in X$ is a point of constrained global maximum if $\hat{x}$ solves the problem (P).
– A point $\hat{x} \in X$ is a point of constrained local maximum if there exists an open ball around $\hat{x}$, $B(\hat{x})$, such that $f(\hat{x}) \ge f(x)$ for all $x \in B(\hat{x}) \cap C$.
– A pair $(\hat{x}, \hat{\lambda}) \in (X \times \mathbb{R}^m_+)$ is a saddle point if
$\Phi(x, \hat{\lambda}) \le \Phi(\hat{x}, \hat{\lambda}) \le \Phi(\hat{x}, \lambda)$ for all $x \in X$ and all $\lambda \in \mathbb{R}^m_+$,
where
$\Phi(x, \lambda) = f(x) + \lambda \cdot g(x)$ for all $(x, \lambda) \in (X \times \mathbb{R}^m_+)$.
- $(\hat{x}, \hat{\lambda})$ is simultaneously a point of maximum and minimum of $\Phi(x, \lambda)$: maximum with respect to $x$, and minimum with respect to $\lambda$.
The constrained minimization problem and the corresponding constrained global minimum and constrained local minimum can be defined analogously.
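As a concrete illustration of the saddle-point definition, the sketch below checks both inequalities numerically for a hypothetical one-variable problem (the functions and the candidate pair are illustrative choices, not from the notes): $f(x) = -(x-2)^2$, $g(x) = 1 - x$, with candidate $\hat{x} = 1$, $\hat{\lambda} = 2$.

```python
# Numerical check of the saddle-point definition for a hypothetical example:
# maximize f(x) = -(x-2)^2 subject to g(x) = 1 - x >= 0.
# Candidate saddle point: (x_hat, lam_hat) = (1, 2).

def f(x):
    return -(x - 2.0) ** 2

def g(x):
    return 1.0 - x

def Phi(x, lam):
    """Phi(x, lam) = f(x) + lam * g(x)."""
    return f(x) + lam * g(x)

x_hat, lam_hat = 1.0, 2.0

# Phi(x, lam_hat) <= Phi(x_hat, lam_hat) on a grid of x (maximum in x) ...
xs = [i / 100.0 for i in range(-300, 501)]
max_in_x = all(Phi(x, lam_hat) <= Phi(x_hat, lam_hat) + 1e-12 for x in xs)

# ... and Phi(x_hat, lam_hat) <= Phi(x_hat, lam) on a grid of lam >= 0 (minimum in lam).
lams = [i / 100.0 for i in range(0, 1001)]
min_in_lam = all(Phi(x_hat, lam_hat) <= Phi(x_hat, lam) + 1e-12 for lam in lams)

print(max_in_x, min_in_lam)   # True True: both saddle inequalities hold on the grids
print(Phi(x_hat, lam_hat))    # -1.0, which equals f(x_hat) since g(x_hat) = 0
```

Note that $g(\hat{x}) = 0$ here, so $\hat{\lambda} \cdot g(\hat{x}) = 0$, consistent with Theorem 1(i) below.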
2. Constrained Global Maxima and Saddle Points
A major part of modern optimization theory is concerned with establishing (under suitable conditions) an equivalence between a point of constrained global maximum and a saddle point.
– We explore this theory in what follows.
Theorem 1:
If $(\hat{x}, \hat{\lambda}) \in (X \times \mathbb{R}^m_+)$ is a saddle point, then
(i) $\hat{\lambda} \cdot g(\hat{x}) = 0$;
(ii) $g(\hat{x}) \ge 0$; and
(iii) $\hat{x}$ is a point of constrained global maximum.
– Proof: To be discussed in class.
– Hints:
- For (i) and (ii), use the second inequality in the definition of a saddle point.
- Then use (i), (ii), and the first inequality in the saddle-point definition to prove (iii).
A converse of Theorem 1 can be proved if
– $X$ is a convex set,
– $f, g^j$ ($j = 1, 2, \ldots, m$) are concave functions on $X$, and
– a condition on the constraints, generally known as "Slater's condition," is satisfied.
- Notice that none of these conditions is needed for the validity of Theorem 1.
Slater's Condition:
Given the problem (P), we will say that Slater's condition holds if there exists $x \in X$ such that $g^j(x) > 0$ for $j = 1, 2, \ldots, m$.
Theorem 2 (Kuhn-Tucker):
Suppose $\hat{x} \in X$ is a point of constrained global maximum. If $X$ is a convex set, $f, g^j$ ($j = 1, 2, \ldots, m$) are concave functions on $X$, and Slater's condition holds, then there is $\hat{\lambda} \in \mathbb{R}^m_+$ such that
(i) $\hat{\lambda} \cdot g(\hat{x}) = 0$; and
(ii) $(\hat{x}, \hat{\lambda})$ is a saddle point.
Examples: The following examples demonstrate why the assumptions of Theorem 2 are needed for the conclusion to be valid.
#1. Let $X = \mathbb{R}_+$, $f : X \to \mathbb{R}$ be given by $f(x) = x$, and $g : X \to \mathbb{R}$ be given by $g(x) = -x^2$.
(a) What is the point of constrained global maximum $(\hat{x})$ for the problem (P) for this characterization of $X$, $f$ and $g$?
(b) Can you find a $\hat{\lambda} \in \mathbb{R}_+$ such that $(\hat{x}, \hat{\lambda})$ is a saddle point? Explain clearly.
(c) What goes wrong? Explain clearly.
#2. Let $X = \mathbb{R}_+$, $f : X \to \mathbb{R}$ be given by $f(x) = x^2$, and $g : X \to \mathbb{R}$ be given by $g(x) = 1 - x$.
(a) What is the point of constrained global maximum $(\hat{x})$ for the problem (P) for this characterization of $X$, $f$ and $g$?
(b) Can you find a $\hat{\lambda} \in \mathbb{R}_+$ such that $(\hat{x}, \hat{\lambda})$ is a saddle point? Explain clearly.
(c) Is Slater's condition satisfied? What goes wrong? Explain clearly.
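For Example #1 (reading the garbled constraint as $g(x) = -x^2$, so that $C = \{0\}$ and Slater's condition fails), a short numerical sketch shows why no multiplier works: for every $\lambda \ge 0$, the supremum of $\Phi(x, \lambda) = x - \lambda x^2$ over $x \ge 0$ strictly exceeds $\Phi(0, \lambda) = 0$, so the first saddle inequality fails at $\hat{x} = 0$.

```python
# Illustration for Example #1 (as reconstructed: f(x) = x, g(x) = -x^2 on X = R_+):
# the constraint set is C = {0}, so x_hat = 0, yet no lam >= 0 yields a saddle
# point, because sup_{x >= 0} Phi(x, lam) > Phi(0, lam) for every lam.

def Phi(x, lam):
    return x + lam * (-x ** 2)   # f(x) + lam * g(x)

violations = []
for lam in [0.0, 0.5, 1.0, 10.0, 100.0]:
    if lam == 0.0:
        best = Phi(10.0, lam)             # Phi(x, 0) = x is unbounded above
    else:
        best = Phi(1.0 / (2 * lam), lam)  # maximizer of x - lam*x^2 on R_+
    violations.append(best > Phi(0.0, lam))

print(violations)   # [True, True, True, True, True]: the saddle inequality fails
```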
3. The Kuhn-Tucker Conditions and Saddle Points
The Kuhn-Tucker Conditions:
Let $X$ be an open set in $\mathbb{R}^n$, and $f, g^j$ ($j = 1, 2, \ldots, m$) be continuously differentiable on $X$. A pair $(\hat{x}, \hat{\lambda}) \in (X \times \mathbb{R}^m_+)$ satisfies the Kuhn-Tucker conditions if
(i) $\dfrac{\partial f}{\partial x_i}(\hat{x}) + \sum_{j=1}^{m} \hat{\lambda}_j \dfrac{\partial g^j}{\partial x_i}(\hat{x}) = 0$, $i = 1, 2, \ldots, n$;
(ii) $g(\hat{x}) \ge 0$, and $\hat{\lambda} \cdot g(\hat{x}) = 0$.
– The condition $\hat{\lambda} \cdot g(\hat{x}) = 0$ is called the `Complementary Slackness' condition. Note
$\hat{\lambda} \cdot g(\hat{x}) = 0 \Rightarrow \hat{\lambda}_1 g^1(\hat{x}) + \cdots + \hat{\lambda}_m g^m(\hat{x}) = 0$
$\Rightarrow \hat{\lambda}_1 g^1(\hat{x}) = 0, \ldots, \hat{\lambda}_m g^m(\hat{x}) = 0$, since $\hat{\lambda}_j \ge 0$ (as $\hat{\lambda} \in \mathbb{R}^m_+$) and $g^j(\hat{x}) \ge 0$.
- So if $g^j(\hat{x}) > 0$, then $\hat{\lambda}_j = 0$. That is, if a constraint is not binding, then the corresponding multiplier is $0$.
- But if $g^j(\hat{x}) = 0$, then $\hat{\lambda}_j$ can be either $> 0$ or equal to zero.
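The sketch below verifies conditions (i)-(ii), including complementary slackness, at a candidate pair for a hypothetical two-constraint problem (the problem and candidate are illustrative choices, not from the notes): maximize $f(x, y) = -(x-1)^2 - (y-2)^2$ subject to $g^1(x, y) = 1 - y \ge 0$ and $g^2(x, y) = x \ge 0$, with candidate $\hat{x} = (1, 1)$ and $\hat{\lambda} = (2, 0)$.

```python
# Check the Kuhn-Tucker conditions (i)-(ii) at a candidate pair for a
# hypothetical problem:
#   maximize f(x, y) = -(x-1)^2 - (y-2)^2
#   subject to g1(x, y) = 1 - y >= 0 and g2(x, y) = x >= 0.
# Candidate: x_hat = (1, 1) with multipliers lam_hat = (2, 0).

def grad_f(x, y):
    return (-2 * (x - 1), -2 * (y - 2))

grad_g = [(0.0, -1.0), (1.0, 0.0)]   # gradients of g1, g2 (linear, so constant)

def g(x, y):
    return (1 - y, x)

x_hat, lam_hat = (1.0, 1.0), (2.0, 0.0)

# (i) stationarity: grad f + sum_j lam_j * grad g^j = 0
fx, fy = grad_f(*x_hat)
stat = (fx + sum(l * gx for l, (gx, gy) in zip(lam_hat, grad_g)),
        fy + sum(l * gy for l, (gx, gy) in zip(lam_hat, grad_g)))

# (ii) feasibility and complementary slackness: g >= 0 and lam . g = 0
gvals = g(*x_hat)
feasible = all(v >= 0 for v in gvals)
comp_slack = sum(l * v for l, v in zip(lam_hat, gvals))

print(stat, feasible, comp_slack)   # (0.0, 0.0) True 0.0
```

Here $g^1$ is binding with $\hat{\lambda}_1 = 2 > 0$, while $g^2$ is slack and its multiplier is $0$, exactly as complementary slackness requires.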
A part of modern optimization theory is concerned with establishing the equivalence (under suitable conditions) between a saddle point and a point where the Kuhn-Tucker conditions are satisfied.
Theorem 3:
Let $X$ be an open set in $\mathbb{R}^n$, and $f, g^j$ ($j = 1, 2, \ldots, m$) be continuously differentiable on $X$. Suppose a pair $(\hat{x}, \hat{\lambda}) \in (X \times \mathbb{R}^m_+)$ satisfies the Kuhn-Tucker conditions. If $X$ is convex and $f, g^j$ ($j = 1, 2, \ldots, m$) are concave on $X$, then
(i) $(\hat{x}, \hat{\lambda})$ is a saddle point, and
(ii) $\hat{x}$ is a point of constrained global maximum.
– Proof: To be discussed in class.
Theorem 4:
Let $X$ be an open set in $\mathbb{R}^n$, and $f, g^j$ ($j = 1, 2, \ldots, m$) be continuously differentiable on $X$. Suppose a pair $(\hat{x}, \hat{\lambda}) \in (X \times \mathbb{R}^m_+)$ is a saddle point. Then $(\hat{x}, \hat{\lambda})$ satisfies the Kuhn-Tucker conditions.
– Proof: To be discussed in class.
4. Sufficient Conditions for Constrained Global Maximum and Minimum
We now have all the ingredients to work out sufficient conditions, involving the Kuhn-Tucker conditions, for a constrained global maximum or minimum.
#3. State and prove rigorously a theorem that gives sufficient conditions for a constrained global maximum involving the Kuhn-Tucker conditions.
#4. State and prove rigorously a theorem that gives sufficient conditions for a constrained global minimum involving the Kuhn-Tucker conditions.
5. Constrained Local and Global Maxima
It is clear that if $\hat{x}$ is a point of constrained global maximum, then $\hat{x}$ is also a point of constrained local maximum.
– The circumstances under which the converse is true are given by the following theorem.
Theorem 5:
Let $X$ be a convex set in $\mathbb{R}^n$. Let $f, g^j$ ($j = 1, 2, \ldots, m$) be concave functions on $X$. Suppose $\hat{x}$ is a point of constrained local maximum. Then $\hat{x}$ is also a point of constrained global maximum.
– Proof: To be discussed in class.
– Hints: Establish first that since $X$ is a convex set and the $g^j$ ($j = 1, 2, \ldots, m$) are concave functions, the constraint set $C$ is a convex set.
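The first step of the hint can be carried out in one line; a sketch, using only the definitions of convexity and concavity:

```latex
\text{Take } x, y \in C \text{ and } t \in [0, 1].
\text{ Since } X \text{ is convex, } tx + (1-t)y \in X.
\text{ By concavity of each } g^j,
\quad g^j\big(tx + (1-t)y\big) \;\ge\; t\, g^j(x) + (1-t)\, g^j(y) \;\ge\; 0,
\text{ since } g^j(x) \ge 0 \text{ and } g^j(y) \ge 0.
\text{ Hence } tx + (1-t)y \in C, \text{ so } C \text{ is convex.}
```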
6. Necessary Conditions for Constrained Local Maximum and Minimum
We now establish the useful result (corresponding to the classical Lagrange Theorem) that if $x^* \in X$ is a point of constrained local maximum then, under suitable conditions, there exists $\lambda^* \in \mathbb{R}^k_+$ such that $(x^*, \lambda^*)$ satisfies the Kuhn-Tucker conditions.
Theorem 6 (Constrained Local Maximum):
Let $X$ be an open set in $\mathbb{R}^n$, and $f, g^j$ ($j = 1, 2, \ldots, k$) be continuously differentiable on $X$. Suppose that $x^* \in X$ is a point of constrained local maximum of $f$ subject to $k$ inequality constraints:
$g^1(x) \le b_1, \ldots, g^k(x) \le b_k.$
Without loss of generality, assume that the first $k_0$ constraints are binding at $x^*$ and that the last $(k - k_0)$ constraints are not binding. Suppose that the following nondegenerate constraint qualification is satisfied at $x^*$:
The rank at $x^*$ of the following Jacobian matrix of the binding constraints is $k_0$:
$$
\begin{pmatrix}
\dfrac{\partial g^1}{\partial x_1}(x^*) & \cdots & \dfrac{\partial g^1}{\partial x_n}(x^*) \\
\vdots & \ddots & \vdots \\
\dfrac{\partial g^{k_0}}{\partial x_1}(x^*) & \cdots & \dfrac{\partial g^{k_0}}{\partial x_n}(x^*)
\end{pmatrix}.
$$
Form the Lagrangian
$$ L(x, \lambda) \equiv f(x) - \lambda_1 \left[ g^1(x) - b_1 \right] - \cdots - \lambda_k \left[ g^k(x) - b_k \right]. $$
Then, there exist multipliers $(\lambda_1^*, \ldots, \lambda_k^*)$ such that
(a) $\dfrac{\partial L}{\partial x_1}(x^*, \lambda^*) = 0, \ldots, \dfrac{\partial L}{\partial x_n}(x^*, \lambda^*) = 0$;
(b) $\lambda_1^* \left[ g^1(x^*) - b_1 \right] = 0, \ldots, \lambda_k^* \left[ g^k(x^*) - b_k \right] = 0$;
(c) $\lambda_1^* \ge 0, \ldots, \lambda_k^* \ge 0$;
(d) $g^1(x^*) \le b_1, \ldots, g^k(x^*) \le b_k$.
– Proof: To be discussed in class (see Section 19.6, pages 480-482, of the textbook).
– Note that the conditions (a) – (d) are the Kuhn-Tucker conditions.
Example:
Consider the following problem:
Maximize $x$
subject to $(1 - x)^3 \ge y$,
$x \ge 0$, $y \ge 0$.
(a) Define carefully $X$, $f$, and the $g^j$'s and $b_j$'s for this problem.
(b) Draw carefully the constraint set for this problem and find $(x^*, y^*)$ such that $(x^*, y^*)$ solves this problem.
(c) Are there $\lambda_j^*$'s (the number of $\lambda_j^*$'s should be in accordance with the number of $g^j$'s) such that $(x^*, y^*)$ and the $\lambda_j^*$'s satisfy the Kuhn-Tucker conditions? Explain carefully.
(d) What goes wrong? Explain carefully.
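A numerical look at this example (the working is a sketch of the intended answer, so treat it as such before the class discussion): the solution is $(x^*, y^*) = (1, 0)$, but at that point the Jacobian of the binding constraints is rank deficient, so the constraint qualification of Theorem 6 fails and no multipliers can satisfy stationarity.

```python
# Maximize x subject to (1-x)^3 >= y, x >= 0, y >= 0.
# In the g^j(x) <= b_j form of Theorem 6:
#   g1(x, y) = y - (1-x)^3 <= 0,  g2(x, y) = -x <= 0,  g3(x, y) = -y <= 0.
# The solution is (x*, y*) = (1, 0); g1 and g3 are binding there, g2 is slack.

x_s, y_s = 1.0, 0.0

grad_f  = (1.0, 0.0)                   # gradient of f(x, y) = x
grad_g1 = (3 * (1 - x_s) ** 2, 1.0)    # = (0, 1): the x-component vanishes at x* = 1
grad_g3 = (0.0, -1.0)                  # gradient of g3; g2 drops out (slack)

# The binding-constraint Jacobian has rows (0, 1) and (0, -1): rank 1, not 2,
# so the nondegenerate constraint qualification fails at (x*, y*).
rank_deficient = (grad_g1[0] == 0.0 and grad_g3[0] == 0.0)

# Stationarity would require 1 = lam1 * 0 + lam3 * 0 in the x-coordinate: impossible.
stationarity_possible = (grad_f[0] == 0.0)

print(rank_deficient, stationarity_possible)   # True False
```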
Theorem 7 (Mixed Constraints):
Let $X$ be an open set in $\mathbb{R}^n$, and $f, g^j$ ($j = 1, 2, \ldots, k$) and $h^i$ ($i = 1, 2, \ldots, m$) be continuously differentiable on $X$. Suppose that $x^* \in X$ is a point of constrained local maximum of $f$ subject to $k$ inequality constraints and $m$ equality constraints:
$g^1(x) \le b_1, \ldots, g^k(x) \le b_k;$
$h^1(x) = c_1, \ldots, h^m(x) = c_m.$
Without loss of generality, assume that the first $k_0$ inequality constraints are binding at $x^*$ and that the last $(k - k_0)$ constraints are not binding. Suppose that the following nondegenerate constraint qualification is satisfied at $x^*$:
The rank at $x^*$ of the Jacobian matrix of the equality constraints and the binding inequality constraints
$$
\begin{pmatrix}
\dfrac{\partial g^1}{\partial x_1}(x^*) & \cdots & \dfrac{\partial g^1}{\partial x_n}(x^*) \\
\vdots & \ddots & \vdots \\
\dfrac{\partial g^{k_0}}{\partial x_1}(x^*) & \cdots & \dfrac{\partial g^{k_0}}{\partial x_n}(x^*) \\
\dfrac{\partial h^1}{\partial x_1}(x^*) & \cdots & \dfrac{\partial h^1}{\partial x_n}(x^*) \\
\vdots & \ddots & \vdots \\
\dfrac{\partial h^m}{\partial x_1}(x^*) & \cdots & \dfrac{\partial h^m}{\partial x_n}(x^*)
\end{pmatrix}
$$
is $(k_0 + m)$.
Form the Lagrangian
$$ L(x, \lambda, \mu) \equiv f(x) - \lambda_1 \left[ g^1(x) - b_1 \right] - \cdots - \lambda_k \left[ g^k(x) - b_k \right] - \mu_1 \left[ h^1(x) - c_1 \right] - \cdots - \mu_m \left[ h^m(x) - c_m \right]. $$
Then, there exist multipliers $(\lambda_1^*, \ldots, \lambda_k^*; \mu_1^*, \ldots, \mu_m^*)$ such that
(a) $\dfrac{\partial L}{\partial x_1}(x^*, \lambda^*, \mu^*) = 0, \ldots, \dfrac{\partial L}{\partial x_n}(x^*, \lambda^*, \mu^*) = 0$;
(b) $\lambda_1^* \left[ g^1(x^*) - b_1 \right] = 0, \ldots, \lambda_k^* \left[ g^k(x^*) - b_k \right] = 0$;
(c) $h^1(x^*) = c_1, \ldots, h^m(x^*) = c_m$;
(d) $\lambda_1^* \ge 0, \ldots, \lambda_k^* \ge 0$;
(e) $g^1(x^*) \le b_1, \ldots, g^k(x^*) \le b_k$.
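The sketch below verifies conclusions (a)-(e) of Theorem 7 at a candidate point for a hypothetical mixed problem (the problem and candidate are illustrative choices, not from the notes): maximize $f(x, y) = xy$ subject to the inequality $g^1(x, y) = x \le 3$ and the equality $h^1(x, y) = x + y = 2$, with candidate $(x^*, y^*) = (1, 1)$, $\lambda_1^* = 0$ (the inequality is slack), $\mu_1^* = 1$.

```python
# Verify (a)-(e) of Theorem 7 for a hypothetical mixed-constraints problem:
#   maximize f(x, y) = x * y
#   subject to g1(x, y) = x <= 3 and h1(x, y) = x + y = 2.
# Candidate: (x*, y*) = (1, 1) with lam1 = 0 (g1 slack) and mu1 = 1.

x_s, y_s = 1.0, 1.0
lam1, mu1 = 0.0, 1.0
b1, c1 = 3.0, 2.0

grad_f  = (y_s, x_s)      # gradient of f(x, y) = x*y at (x*, y*)
grad_g1 = (1.0, 0.0)      # gradient of g1(x, y) = x
grad_h1 = (1.0, 1.0)      # gradient of h1(x, y) = x + y

# (a) stationarity of L = f - lam1*(g1 - b1) - mu1*(h1 - c1) in x and y
stat = tuple(grad_f[i] - lam1 * grad_g1[i] - mu1 * grad_h1[i] for i in range(2))

# (b) complementary slackness, (c) equality constraint, (d) sign, (e) feasibility
comp_slack_zero = (lam1 * (x_s - b1) == 0.0)
eq_holds = (x_s + y_s == c1)
sign_ok = (lam1 >= 0.0)
feasible = (x_s <= b1)

print(stat, comp_slack_zero, eq_holds, sign_ok, feasible)
# (0.0, 0.0) True True True True
```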
Theorem 8 (Constrained Local Minimum):
Let $X$ be an open set in $\mathbb{R}^n$, and $f, g^j$ ($j = 1, 2, \ldots, k$) and $h^i$ ($i = 1, 2, \ldots, m$) be continuously differentiable on $X$. Suppose that $x^* \in X$ is a point of constrained local minimum of $f$ subject to $k$ inequality constraints and $m$ equality constraints:
$g^1(x) \ge b_1, \ldots, g^k(x) \ge b_k;$
$h^1(x) = c_1, \ldots, h^m(x) = c_m.$
Without loss of generality, assume that the first $k_0$ inequality constraints are binding at $x^*$ and that the last $(k - k_0)$ constraints are not binding. Suppose that the following nondegenerate constraint qualification is satisfied at $x^*$:
The rank at $x^*$ of the Jacobian matrix of the equality constraints and the binding inequality constraints
$$
\begin{pmatrix}
\dfrac{\partial g^1}{\partial x_1}(x^*) & \cdots & \dfrac{\partial g^1}{\partial x_n}(x^*) \\
\vdots & \ddots & \vdots \\
\dfrac{\partial g^{k_0}}{\partial x_1}(x^*) & \cdots & \dfrac{\partial g^{k_0}}{\partial x_n}(x^*) \\
\dfrac{\partial h^1}{\partial x_1}(x^*) & \cdots & \dfrac{\partial h^1}{\partial x_n}(x^*) \\
\vdots & \ddots & \vdots \\
\dfrac{\partial h^m}{\partial x_1}(x^*) & \cdots & \dfrac{\partial h^m}{\partial x_n}(x^*)
\end{pmatrix}
$$
is $(k_0 + m)$.
Form the Lagrangian
$$ L(x, \lambda, \mu) \equiv f(x) - \lambda_1 \left[ g^1(x) - b_1 \right] - \cdots - \lambda_k \left[ g^k(x) - b_k \right] - \mu_1 \left[ h^1(x) - c_1 \right] - \cdots - \mu_m \left[ h^m(x) - c_m \right]. $$
Then, there exist multipliers $(\lambda_1^*, \ldots, \lambda_k^*; \mu_1^*, \ldots, \mu_m^*)$ such that
(a) $\dfrac{\partial L}{\partial x_1}(x^*, \lambda^*, \mu^*) = 0, \ldots, \dfrac{\partial L}{\partial x_n}(x^*, \lambda^*, \mu^*) = 0$;
(b) $\lambda_1^* \left[ g^1(x^*) - b_1 \right] = 0, \ldots, \lambda_k^* \left[ g^k(x^*) - b_k \right] = 0$;
(c) $h^1(x^*) = c_1, \ldots, h^m(x^*) = c_m$;
(d) $\lambda_1^* \ge 0, \ldots, \lambda_k^* \ge 0$;
(e) $g^1(x^*) \ge b_1, \ldots, g^k(x^*) \ge b_k$.
7. Sufficient Conditions for Constrained Local Maximum and Minimum
We use techniques similar to those for the necessary conditions.
– Given a solution $(x^*, \lambda^*, \mu^*)$ of the Kuhn-Tucker conditions (the first-order conditions), divide the inequality constraints into binding constraints and non-binding constraints at $x^*$:
- On the one hand, we treat the binding inequality constraints like equality constraints;
- on the other hand, the multipliers for the non-binding constraints must be zero, and these constraints drop out of the Lagrangian.
Theorem 9:
Let $X$ be an open set in $\mathbb{R}^n$, and $f, g^j$ ($j = 1, 2, \ldots, k$) and $h^i$ ($i = 1, 2, \ldots, m$) be twice continuously differentiable on $X$. Consider the problem of maximizing $f$ on the constraint set
$$ C_{g,h} \equiv \left\{ x \in X : g^j(x) \le b_j \text{ for } j = 1, 2, \ldots, k; \; h^i(x) = c_i \text{ for } i = 1, 2, \ldots, m \right\}. $$
Form the Lagrangian
$$ L(x, \lambda, \mu) \equiv f(x) - \lambda_1 \left[ g^1(x) - b_1 \right] - \cdots - \lambda_k \left[ g^k(x) - b_k \right] - \mu_1 \left[ h^1(x) - c_1 \right] - \cdots - \mu_m \left[ h^m(x) - c_m \right]. $$
(a) Suppose that there exist multipliers $(\lambda_1^*, \ldots, \lambda_k^*; \mu_1^*, \ldots, \mu_m^*)$ such that
$\dfrac{\partial L}{\partial x_1}(x^*, \lambda^*, \mu^*) = 0, \ldots, \dfrac{\partial L}{\partial x_n}(x^*, \lambda^*, \mu^*) = 0$;
$\lambda_1^* \ge 0, \ldots, \lambda_k^* \ge 0$;
$\lambda_1^* \left[ g^1(x^*) - b_1 \right] = 0, \ldots, \lambda_k^* \left[ g^k(x^*) - b_k \right] = 0$;
$h^1(x^*) = c_1, \ldots, h^m(x^*) = c_m$.
(b) Without loss of generality, assume that the first $k_0$ inequality constraints are binding at $x^*$ and that the last $(k - k_0)$ constraints are not binding. Write $(g^1, \ldots, g^{k_0})$ as $g^{k_0}$, $(h^1, \ldots, h^m)$ as $h$, the Jacobian derivative of $g^{k_0}$ at $x^*$ as $Dg^{k_0}(x^*)$, and the Jacobian derivative of $h$ at $x^*$ as $Dh(x^*)$.
Suppose that the Hessian of $L$ with respect to $x$ at $(x^*, \lambda^*, \mu^*)$ is negative definite on the linear constraint set
$$ \left\{ v : Dg^{k_0}(x^*)\, v = 0 \text{ and } Dh(x^*)\, v = 0 \right\}, $$
that is,
$$ v \ne 0, \; Dg^{k_0}(x^*)\, v = 0, \; Dh(x^*)\, v = 0 \;\Rightarrow\; v^T H_L(x^*, \lambda^*, \mu^*)\, v < 0. $$
Then $x^*$ is a point of constrained local maximum of $f$ on the constraint set $C_{g,h}$.
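Condition (b) can be checked directly on a small example. The sketch below does so for a hypothetical problem (illustrative, not from the notes): maximize $f(x, y) = -x^2 - y^2$ subject to $h(x, y) = x + y = 1$, with $n = 2$, $k_0 = 0$, $m = 1$, candidate $(x^*, y^*) = (1/2, 1/2)$ and $\mu^* = -1$.

```python
# Check condition (b) for a hypothetical problem:
#   maximize f(x, y) = -x^2 - y^2 subject to h(x, y) = x + y = 1,
# with n = 2, k0 = 0 (no binding inequalities), m = 1.
# Candidate: (x*, y*) = (0.5, 0.5), mu* = -1, where L = f - mu*(h - 1).

# Hessian of L with respect to x at the candidate (here just the Hessian of f):
HL = [[-2.0, 0.0],
      [0.0, -2.0]]

# Dh(x*) = (1, 1), so the linear constraint set is {v = (t, -t)}.
def quad_form(v):
    """v' HL v for a 2-vector v."""
    return sum(v[i] * HL[i][j] * v[j] for i in range(2) for j in range(2))

# v' HL v < 0 for every nonzero v with Dh(x*) v = 0 (sampled directions):
neg_def_on_tangent = all(quad_form((t, -t)) < 0 for t in [0.5, 1.0, -2.0, 3.7])

print(neg_def_on_tangent)   # True: condition (b) holds, so x* is a local maximum
```

Here $v = (t, -t)$ gives $v^T H_L v = -4t^2 < 0$ for every $t \ne 0$, so the sampled check reflects the exact calculation.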
To check condition (b), form the bordered Hessian
$$
H = \begin{pmatrix}
0 & \cdots & 0 & 0 & \cdots & 0 & \frac{\partial g^1}{\partial x_1} & \cdots & \frac{\partial g^1}{\partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0 & \frac{\partial g^{k_0}}{\partial x_1} & \cdots & \frac{\partial g^{k_0}}{\partial x_n} \\
0 & \cdots & 0 & 0 & \cdots & 0 & \frac{\partial h^1}{\partial x_1} & \cdots & \frac{\partial h^1}{\partial x_n} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
0 & \cdots & 0 & 0 & \cdots & 0 & \frac{\partial h^m}{\partial x_1} & \cdots & \frac{\partial h^m}{\partial x_n} \\
\frac{\partial g^1}{\partial x_1} & \cdots & \frac{\partial g^{k_0}}{\partial x_1} & \frac{\partial h^1}{\partial x_1} & \cdots & \frac{\partial h^m}{\partial x_1} & \frac{\partial^2 L}{\partial x_1^2} & \cdots & \frac{\partial^2 L}{\partial x_n \partial x_1} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
\frac{\partial g^1}{\partial x_n} & \cdots & \frac{\partial g^{k_0}}{\partial x_n} & \frac{\partial h^1}{\partial x_n} & \cdots & \frac{\partial h^m}{\partial x_n} & \frac{\partial^2 L}{\partial x_1 \partial x_n} & \cdots & \frac{\partial^2 L}{\partial x_n^2}
\end{pmatrix},
$$
with all derivatives evaluated at $(x^*, \lambda^*, \mu^*)$.
Check the signs of the last $(n - (k_0 + m))$ leading principal minors of $H$, starting with the determinant of $H$ itself.
– If $\det H$ has the same sign as $(-1)^n$ and if these last $(n - (k_0 + m))$ leading principal minors alternate in sign, then condition (b) holds.
We need to make the following changes in the wording of Theorem 9 for an inequality-constrained minimization problem:
(i) write the inequality constraints as $g^j(x) \ge b_j$ in the presentation of the constraint set $C_{g,h}$;
(ii) change "negative definite" and "$< 0$" in condition (b) to "positive definite" and "$> 0$".
– The bordered Hessian check then requires that the last $(n - (k_0 + m))$ leading principal minors of $H$ all have the same sign as $(-1)^{k_0 + m}$.
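The bordered-Hessian test can also be run on a small hypothetical example (illustrative, not from the notes): maximize $f(x, y) = -x^2 - y^2$ subject to $h(x, y) = x + y = 1$. With $n = 2$, $k_0 = 0$, $m = 1$, we check the last $n - (k_0 + m) = 1$ leading principal minor, i.e., $\det H$ itself, against the sign of $(-1)^n = +1$.

```python
# Bordered-Hessian check for: maximize f(x, y) = -x^2 - y^2 s.t. h(x, y) = x + y = 1.
# n = 2, k0 = 0, m = 1, so only det H needs checking, against the sign of (-1)^2 = +1.

H = [[0.0, 1.0, 1.0],    # border: the 1x1 zero block and Dh(x*) = (1, 1)
     [1.0, -2.0, 0.0],   # Dh(x*)^T alongside the Hessian of L (= Hessian of f here)
     [1.0, 0.0, -2.0]]

def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

d = det3(H)
print(d, d > 0)   # 4.0 True: det H has the sign of (-1)^n, so condition (b) holds
```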
Example 1:
Consider the following constrained maximization problem:
Maximize $\prod_{i=1}^{n} x_i$
subject to $\sum_{i=1}^{n} x_i \le n$,     (P)
and $x_i \ge 0$, $i = 1, 2, \ldots, n$.
Find the solution to (P), showing your steps clearly.
Example 2:
Consider the following constrained maximization problem:
Maximize $x^2 + x + 4y^2$
subject to $2x + 2y \le 1$,     (Q)
and $x \ge 0$, $y \ge 0$.
Find the solution to (Q), showing your steps clearly.
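As a sanity check on problem (Q), assuming the garbled objective reconstructs as $f(x, y) = x^2 + x + 4y^2$: since this $f$ is convex, its maximum over the triangle $\{2x + 2y \le 1, \; x \ge 0, \; y \ge 0\}$ should sit at a vertex, and a brute-force grid search agrees.

```python
# Brute-force look at problem (Q), assuming the objective is f(x, y) = x^2 + x + 4y^2
# (the exponents are garbled in the source, so treat this as a reconstruction).
# f is convex, so the maximum over the triangle {2x + 2y <= 1, x >= 0, y >= 0}
# should be attained at a vertex; the grid search below is only a sanity check.

def f(x, y):
    return x ** 2 + x + 4 * y ** 2

best = max(
    (f(x / 1000.0, y / 1000.0), x / 1000.0, y / 1000.0)
    for x in range(0, 501)
    for y in range(0, 501)
    if 2 * (x / 1000.0) + 2 * (y / 1000.0) <= 1.0
)

print(best)   # (1.0, 0.0, 0.5): the maximum is at the vertex (x, y) = (0, 1/2)
```

The three vertices give $f(0, 0) = 0$, $f(1/2, 0) = 3/4$, and $f(0, 1/2) = 1$, matching the grid result.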
References
Read the following sections from the textbook:
– Sections 18.3, 18.4, 18.5, 18.6 (pages 424-447): Inequality Constraints, Mixed Constraints, Constrained Minimization Problems, and Kuhn-Tucker Formulation;
– Section 19.3 (pages 466-469): Second-Order Conditions (Inequality Constraints);
– Section 19.6 (pages 480-482): Proofs of First-Order Conditions.
This material is based on:
1. Mangasarian, O. L., Nonlinear Programming (Chapters 5, 7);
2. Takayama, A., Mathematical Economics (Chapter 1);
3. Nikaido, H., Convex Structures and Economic Theory (Chapter 1).