
Convex optimization

Upcoming Lectures:

Lecture 5: Convex Optimization (today / PG)
Lecture 6: Duality and KKT Conditions (March 25 / PG)
Lecture 7: Structured Singular Value (April 1 / RS)
Lecture 8: H∞ design (April 8 / PG)



References

These lecture notes are based to a large extent on those for the Stanford EE 364a class developed by
Stephen Boyd. His original slides can be downloaded from http://www.stanford.edu/class/ee364a/.

LMI formulations of many control problems are given in:


S. Boyd, L. El Ghaoui, E. Feron & V. Balakrishnan, “Linear Matrix Inequalities in System and Control
Theory,” SIAM, 1994.

The most extensive text on convex optimization (and one of the best written) is:
S. Boyd & L. Vandenberghe, “Convex Optimization,” Cambridge Univ. Press, 2004.



Convex optimization

Problems to be solved:

1. μΔ(M) = μΔ(DMD⁻¹) ≤ inf_D σ̄(DMD⁻¹)

2. inf_{K(s) stabilizing} ‖Fl(G(s), K(s))‖∞

3. inf_{K(s) stabilizing} ‖Fl(G(s), K(s))‖2

4. Iterative robust performance design: (D, K) iteration

5. Regional pole placement, MPC, . . .

Tools to be used:

1. Linear matrix inequalities (LMIs)

2. S-procedure, linearizing transforms, Schur complements, . . .

3. cvx, yalmip, Robust Control Toolbox



Convex optimization

Convex sets:

A set C is convex if the line segment between any two points in C also lies in C.

For all x1, x2 ∈ C:  { θ1 x1 + θ2 x2 | θ1 + θ2 = 1, θ1 ≥ 0, θ2 ≥ 0 } ⊆ C.

Equivalently, y = θ x1 + (1 − θ) x2 ∈ C for all θ ∈ [0, 1].

(Figure: the line segment θ x1 + (1 − θ) x2, θ ∈ [0, 1], between two points x1, x2 of the set.)



Convex optimization

Hyperplanes and halfspaces

A hyperplane is a set of the form { x | aᵀx = b } (a ≠ 0).

Hyperplanes are affine and convex. a is the normal vector: for any x0 in the hyperplane, aᵀ(x − x0) = 0.

A halfspace is a set of the form { x | aᵀx ≤ b } (a ≠ 0).

(Figure: the hyperplane aᵀx = b divides the space into the halfspaces { x | aᵀx ≥ b } and { x | aᵀx ≤ b }.)

Halfspaces are convex.



Convex optimization

Polyhedra

The solution set of finitely many linear inequalities and equalities.

{ x | Ax ≤ b, Cx = d }

(A ∈ R^{m×n}, C ∈ R^{p×n}, ≤ is a componentwise inequality)

(Figure: a polyhedron bounded by hyperplanes with normals A1•, . . . , A5•, the rows of A.)

A polyhedron can be viewed as the intersection of halfspaces and hyperplanes.



Convex optimization

Convex cones

Cone: A set K is a cone if, for all x ∈ K, θx ∈ K for all θ ≥ 0.

A conic combination, x, of two points, x1 and x2, has the form

x = θ1 x1 + θ2 x2,  θ1 ≥ 0, θ2 ≥ 0.

A convex cone is a set that contains every conic combination of points in the set.

Notation: x ∈ K ⇔ x ⪰_K 0.



Convex optimization

Norm balls and norm cones

A norm ball with center xc and radius r is given by { x | ‖x − xc‖ ≤ r }.

Note that the “shape” depends on the norm.

(Figure: the unit balls ‖x‖∞ = 1, ‖x‖2 = 1 and ‖x‖1 = 1 in R^2.)

Norm cone: { (x, t) | ‖x‖ ≤ t }

(Figure: the Euclidean norm cone plotted over (x1, x2, t).)

Norm balls and norm cones are convex.
The Euclidean norm cone is called a second-order cone.
Convex optimization

Positive semidefinite cone

S^n: symmetric n × n matrices.

S^n_+ = { X ∈ S^n | X ⪰ 0 }  (positive semidefinite n × n matrices)

X ∈ S^n_+ ⇔ zᵀXz ≥ 0 for all z.

S^n_+ is a convex cone.

S^n_++ = { X ∈ S^n | X ≻ 0 }  (positive definite n × n matrices)

S^n_+ is also convex with respect to its appropriately constrained components.

Example: 2 × 2 matrices:

[x y; y z] ⪰ 0

(Figure: the boundary of this set in (x, y, z) coordinates.)
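As a quick sanity check (a minimal MATLAB sketch, not part of the original slides; the sample point x, y, z is hypothetical), the 2 × 2 condition can be tested against the eigenvalue definition:

% For X = [x y; y z], X is PSD exactly when x >= 0, z >= 0 and x*z - y^2 >= 0.
x = 0.6; y = 0.3; z = 0.4;                       % hypothetical sample point
X = [x y; y z];
psd_by_eig   = all(eig(X) >= -1e-12);            % z'Xz >= 0 for all z
psd_by_minor = (x >= 0) && (z >= 0) && (x*z - y^2 >= 0);
disp([psd_by_eig psd_by_minor])                  % the two tests agree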


Convex optimization

Affine functions

Consider an affine function f : R^n → R^m, f(x) = Ax + b, with A ∈ R^{m×n}, b ∈ R^m.

The image of a convex set under f is convex:

X ⊆ R^n convex ⟹ f(X) = { f(x) | x ∈ X } is convex.

Similarly, the inverse image of a convex set is also convex:

Y ⊆ R^m convex ⟹ f⁻¹(Y) = { x ∈ R^n | f(x) ∈ Y } is convex.

Examples:

Scaling, rotation, addition, translation, ...

Linear matrix inequalities: { x | x1 A1 + x2 A2 + · · · + xn An ⪯ B },  Ai, B ∈ S^p.



Convex optimization

Linear fractional transformations

f : R^n → R^m,

f(x) = (Ax + b) / (cᵀx + d),  dom f = { x | cᵀx + d > 0 }.

Images (and inverse images) of convex sets are convex under the linear
fractional mapping.

Example:

Bilinear (Tustin) transform: s = f(z) = (2/T)(z − 1)/(z + 1), maps the unit disk to the left half-plane.

(Figure: the unit disk in the z-plane maps onto the left half of the s-plane.)

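A minimal MATLAB sketch of this mapping (T is an assumed sampling period; the grid is chosen so the point z = −1 is not hit):

T  = 0.1;                                  % assumed sampling period
th = linspace(0, 2*pi, 200);               % grid that does not hit z = -1 exactly
zc = exp(1j*th);                           % points on the unit circle
sc = (2/T)*(zc - 1)./(zc + 1);             % bilinear transform
disp(max(abs(real(sc))))                   % ~0: the circle maps onto the jw-axis
zi = 0.5*exp(1j*pi/3);                     % a point inside the unit disk
disp(real((2/T)*(zi - 1)./(zi + 1)))       % negative: the interior maps to the LHP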


Convex optimization

Convex functions

A function f(x) is convex if:

dom f is a convex set; and

f(θx + (1 − θ)y) ≤ θ f(x) + (1 − θ) f(y), for all x, y ∈ dom f and all θ ∈ [0, 1].

(Figure: the chord from (x, f(x)) to (y, f(y)) lies above the graph of f.)

A function f(x) is concave if −f(x) is convex.

Strict convexity (and strict concavity) is obtained by replacing ≤ with <.



Convex optimization

Simple examples:

Convex functions:

Affine functions, f(x) = ax + b for any a, b ∈ R.

Exponential functions, f(x) = e^{ax} for any a ∈ R.

Powers: f(x) = x^α on R_++ for any α ≥ 1 or α ≤ 0.

Powers of absolute value: f(x) = |x|^p on R for p ≥ 1.

Negative entropy: f(x) = x log x on R_++.

Log-determinant: f(X) = log det(X⁻¹), for X ∈ S^n_++.

Concave functions:

Affine functions, f(x) = ax + b for any a, b ∈ R.

Powers: f(x) = x^α on R_++ for any α ∈ [0, 1].

Logarithm: f(x) = log x on R_++.

Log-determinant: f(X) = log det(X), for X ∈ S^n_++.
Convex optimization

Examples on R^n:

Convex functions:

Affine functions, f(x) = aᵀx + b.

Norms: f(x) = ‖x‖_p = (Σ_{i=1}^n |xi|^p)^{1/p} for p ≥ 1;  ‖x‖∞ = max_i |xi|.

Examples on R^{m×n}:

Convex functions:

Affine functions, f(X) = tr(AᵀX) + b = Σ_{i=1}^m Σ_{j=1}^n Aij Xij + b.

Maximum singular value (spectral norm):

f(X) = ‖X‖2 = σmax(X) = (λmax(XᵀX))^{1/2}.

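A quick MATLAB check of these spectral-norm identities on a random (hypothetical) matrix:

rng(0);
X  = randn(4, 3);
n1 = norm(X, 2);                 % spectral norm
n2 = max(svd(X));                % largest singular value
n3 = sqrt(max(eig(X'*X)));       % (lambda_max(X'X))^(1/2)
disp([n1 n2 n3])                 % all three coincide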


Convex optimization

Quasiconvex functions

A function f : R^n → R is quasiconvex if dom f is convex and the sublevel sets

S_α = { x ∈ dom f | f(x) ≤ α } are convex for all α.

(Figure: a quasiconvex function on R.)

f(x) is quasiconcave if −f is quasiconvex.

f (x) is quasilinear if it is both quasiconcave and quasiconvex.

Note that quasiconvex functions have no non-global local minima.


Convex optimization

Optimization problems: a standard form

minimize    f0(x)
subject to  fi(x) ≤ 0, i = 1, . . . , m
            hi(x) = 0, i = 1, . . . , p

x ∈ R^n is the optimization variable (vector valued)

f0 : R^n → R is the objective or cost function

fi : R^n → R, i = 1, . . . , m are the inequality constraint functions

hi : R^n → R, i = 1, . . . , p are the equality constraint functions



Convex optimization

Convex optimization problems

minimize    f0(x)
subject to  fi(x) ≤ 0, i = 1, . . . , m
            aiᵀx = bi, i = 1, . . . , p

The functions f0, f1, . . . , fm are convex.

The equality constraints are affine.

A problem is quasiconvex if f0 is quasiconvex and f1, . . . , fm are convex.

minimize    f0(x)
subject to  fi(x) ≤ 0, i = 1, . . . , m
            Ax = b

The feasible set of a convex (or quasiconvex) optimization problem is convex.

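A minimal cvx sketch of a convex problem in this form (using the matrix form Ax = b of the equality constraints); the objective, the data A, b, C, d and the problem sizes are hypothetical, chosen only to make the script run:

n = 4;  m = 3;  p = 2;
rng(1);
A = randn(m, n);  b = ones(m, 1);            % inequality data
C = randn(p, n);  d = zeros(p, 1);           % equality data
cvx_begin
    variable x(n)
    minimize( sum_square(x) + ones(1, n)*x ) % convex objective f0
    subject to
        A*x <= b;                            % convex (here affine) fi(x) <= 0
        C*x == d;                            % affine equality constraints
cvx_end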


Convex optimization

Semidefinite program (SDP)

minimize    cᵀx
subject to  x1 F1 + x2 F2 + . . . + xn Fn + G ⪯ 0
            Ax = b

where Fi, G ∈ S^k

The matrix constraint is called a linear matrix inequality (LMI).

Multiple constraints are trivially combined into a single (larger) constraint:

x1 F1 + x2 F2 + . . . + xn Fn + G ⪯ 0  and  x1 H1 + x2 H2 + . . . + xn Hn + M ⪯ 0

if and only if

x1 [F1 0; 0 H1] + x2 [F2 0; 0 H2] + . . . + xn [Fn 0; 0 Hn] + [G 0; 0 M] ⪯ 0

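A minimal cvx sketch of an SDP in this form, with one variable, no equality constraint, and hypothetical data chosen so the answer is easy to check (with F1 = I and c = −1 the optimal x is −λmax(G)):

k = 4;  rng(2);
G = randn(k);  G = (G + G')/2;     % hypothetical symmetric data
cvx_begin sdp
    variable x
    minimize( -x )                 % c = -1
    x*eye(k) + G <= 0;             % the LMI constraint x*F1 + G <= 0, with F1 = I
cvx_end
disp([x, -max(eig(G))])            % the two values agree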


cvx

“Disciplined convex programming” cvx

Download cvx from www.stanford.edu/~boyd/cvx/

Example: proving the stability of a system: dx(t)/dt = A x(t)

stable ⇔ there exists P = Pᵀ ≻ 0 with AᵀP + PA ≺ 0

       ⇔ there exists P = Pᵀ ⪰ I with AᵀP + PA ⪯ −I

We can consider P as a matrix variable

cvx_begin sdp
    variable P(n,n) symmetric
    A'*P + P*A <= -eye(n)
    P >= eye(n)
cvx_end

cvx_status is a string giving the status of the optimization.

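For example, after cvx_end the outcome can be inspected (a minimal usage sketch):

disp(cvx_status)                  % 'Solved' if a certifying P was found,
                                  % 'Infeasible' if no such P exists
if strcmp(cvx_status, 'Solved')
    disp(min(eig(P)))             % P >= I, so this is at least 1
end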


cvx

Another example:

We want to know if the stability of two systems,

dx(t)/dt = A1 x(t)  and  dx(t)/dt = A2 x(t),

can be proven with a single Lyapunov function, V(x) = x(t)ᵀ P x(t).

dx(t)/dt = A(t) x(t) is stable for A(t) = λ1(t) A1 + λ2(t) A2, λi(t) ≥ 0.

We want to find P = Pᵀ ≻ 0 such that A1ᵀP + PA1 ≺ 0 and A2ᵀP + PA2 ≺ 0.

Or equivalently P = Pᵀ ⪰ I such that A1ᵀP + PA1 ⪯ −I and A2ᵀP + PA2 ⪯ −I.

cvx_begin sdp
    variable P(n,n) symmetric
    A1'*P + P*A1 <= -eye(n)
    A2'*P + P*A2 <= -eye(n)
    P >= eye(n)
cvx_end



Schur complements

Q(x) = Q(x)ᵀ, S(x) and R(x) = R(x)ᵀ are affine functions of x.

[Q(x) S(x); S(x)ᵀ R(x)] ≻ 0  ⇔  R(x) ≻ 0 and Q(x) − S(x) R(x)⁻¹ S(x)ᵀ ≻ 0

Example 1:

P(x) ≻ 0 and trace(S(x)ᵀ P(x)⁻¹ S(x)) < 1

⇔  trace(X) < 1,  [X S(x)ᵀ; S(x) P(x)] ≻ 0    (X = Xᵀ is a slack variable)

Example 2:

‖A‖2 ≤ t  ⇔  AᵀA ⪯ t² I, t ≥ 0  ⇔  [tI Aᵀ; A tI] ⪰ 0

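A numerical sanity check of the (nonstrict) equivalence on one random, hypothetical instance with R ≻ 0 — both tests return the same verdict whatever Q is:

rng(3);
n = 3;  m = 2;
R = randn(m);  R = R*R' + eye(m);              % R > 0
S = randn(n, m);
Q = randn(n);  Q = (Q + Q')/2;
M = [Q, S; S', R];
K = Q - S*(R\S');  K = (K + K')/2;             % Schur complement (symmetrized)
t1 = min(eig(M)) >= -1e-9;                     % [Q S; S' R] >= 0 ?
t2 = min(eig(K)) >= -1e-9;                     % Q - S R^{-1} S' >= 0 ?
disp([t1 t2])                                  % identical verdicts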


Convex optimization

Example: matrix norm minimization (Maximum singular value)

minimize ‖A(x)‖2 = (λmax(A(x)ᵀA(x)))^{1/2}

where A(x) is affine in x: A(x) = A0 + x1 A1 + x2 A2 + . . . + xn An

The equivalent SDP is:

minimize    t
subject to  [tI A(x); A(x)ᵀ tI] ⪰ 0

The decision variables are now t and x.

The constraint equivalence follows from a Schur complement argument

‖A‖2 ≤ t  ⇔  AᵀA ⪯ t² I, t ≥ 0  ⇔  [tI A; Aᵀ tI] ⪰ 0

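A minimal cvx sketch of this SDP with hypothetical data A0, A1, A2 (p × q matrices, two decision variables x):

rng(4);
p = 3;  q = 2;
A0 = randn(p, q);  A1 = randn(p, q);  A2 = randn(p, q);
cvx_begin sdp
    variables x(2) t
    expression Ax(p, q)
    Ax = A0 + x(1)*A1 + x(2)*A2;
    minimize( t )
    [t*eye(p), Ax; Ax', t*eye(q)] >= 0;        % Schur-complement LMI for ||A(x)||_2 <= t
cvx_end
disp([t, norm(A0 + x(1)*A1 + x(2)*A2, 2)])     % t equals the achieved norm (up to solver tolerance)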


S-procedure

Consider a collection of m + 1 quadratic functions,

Fi(x) = xᵀAi x + 2biᵀx + ci,  Ai ∈ S^n,  i = 0, 1, . . . , m.

We want to test the condition,

F0(x) < 0, for all x that satisfy Fi(x) ≤ 0, i = 1, . . . , m.

The S-procedure gives a sufficient convex condition,

If there exist τi ≥ 0 such that, for all x,  F0(x) − Σ_{i=1}^m τi Fi(x) < 0,

then F0(x) < 0 for all x satisfying Fi(x) ≤ 0, i = 1, . . . , m.

Furthermore, if m = 1, and there exists an x such that F1 (x) < 0,

then the S-procedure condition is necessary and sufficient.



S-procedure

We can show that  xᵀAx + 2bᵀx + c ≥ 0 for all x  ⇔  [A b; bᵀ c] ⪰ 0.

This gives an LMI test:  [A0 b0; b0ᵀ c0] − Σ_{i=1}^m τi [Ai bi; biᵀ ci] ⪯ 0,  τi ≥ 0.

Example: find the smallest ellipse containing the union of a set of ellipses.

Suppose we have p ellipses, Ei, given by (x − xi)ᵀAi(x − xi) ≤ 1, Ai ≻ 0, i = 1, . . . , p,

and we would like to characterize whether or not each of them is contained within an ellipse, E0,

x ∈ E0  ⇔  (x − x0)ᵀA0(x − x0) ≤ 1,  A0 ≻ 0.

Note that determining whether or not x is in an ellipse is a quadratic constraint,

x ∈ Ei  ⇔  Fi(x) = xᵀAi x − 2xiᵀAi x + (xiᵀAi xi − 1) ≤ 0.



S-procedure

Example: find the smallest ellipse containing the union of a set of ellipses.

For each ellipse:  Ei ⊆ E0  ⇔  F0(x) ≤ 0 for all x such that Fi(x) ≤ 0.

By the S-procedure this is equivalent to,

Ei ⊆ E0  ⇔  there exists τi ≥ 0 such that, for all x,  F0(x) − τi Fi(x) ≤ 0

         ⇔  there exists τi ≥ 0 such that  [A0 b0; b0ᵀ c0] − τi [Ai bi; biᵀ ci] ⪯ 0.

Finding the smallest bounding ellipse:

minimize    log det A0⁻¹
subject to  A0 ≻ 0
            τi ≥ 0, i = 1, . . . , p
            [A0 b0; b0ᵀ c0] − τi [Ai bi; biᵀ ci] ⪯ 0, i = 1, . . . , p

The volume of the ellipse is proportional to (det A0⁻¹)^{1/2}, so minimizing log det A0⁻¹ minimizes the volume.

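A full implementation of the minimum-volume problem takes a bit more work (the relation between b0, c0 and the centre x0 has to be handled), so as a minimal sketch here is just the S-procedure containment test for two given, hypothetical ellipses: E1, a disk of radius 0.5 centred at (0.2, 0), and E0, the unit disk:

A1 = 4*eye(2);  x1 = [0.2; 0];            % E1: disk of radius 0.5 centred at (0.2, 0)
A0 = eye(2);    x0 = [0; 0];              % E0: unit disk
b1 = -A1*x1;  c1 = x1'*A1*x1 - 1;         % F1(x) = x'A1x + 2b1'x + c1
b0 = -A0*x0;  c0 = x0'*A0*x0 - 1;         % F0(x) = x'A0x + 2b0'x + c0
cvx_begin sdp
    variable tau
    tau >= 0;
    [A0 b0; b0' c0] - tau*[A1 b1; b1' c1] <= 0;
cvx_end
disp(cvx_status)                          % 'Solved' certifies that E1 is contained in E0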
