IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 51, NO. 1, JANUARY 2006

Nonsmooth H∞ Synthesis
Pierre Apkarian, Member, IEEE, and Dominikus Noll

Abstract—We develop nonsmooth optimization techniques to solve H∞ synthesis problems under additional structural constraints on the controller. Our approach avoids the use of Lyapunov variables and therefore leads to moderate size optimization programs even for very large systems. The proposed framework is versatile and can accommodate a number of challenging design problems including static, fixed-order, fixed-structure, decentralized control, design of PID controllers and simultaneous design and stabilization problems. Our algorithmic strategy uses generalized gradients and bundling techniques suited for the H∞ norm and other nonsmooth performance criteria. We compute descent directions by solving quadratic programs and generate steps via line search. Convergence to a critical point from an arbitrary starting point is proved and numerical tests are included to validate our methods. The proposed approach proves to be efficient even for systems with several hundreds of states.

Index Terms—Bilinear matrix inequality (BMI), bundle methods, Clarke subdifferential, fixed-order synthesis, H∞ synthesis, linear matrix inequality (LMI), nonsmooth optimization, NP-hard problems, simultaneous stabilization, static output feedback.

I. INTRODUCTION

IN THIS paper, we consider H∞ synthesis problems with additional structural constraints on the controller. This includes static and reduced-order output feedback control, structured, sparse or decentralized synthesis, simultaneous stabilization problems, multiple performance channels, and much else. We propose to solve these problems with a nonsmooth optimization method exploiting the structure of the H∞ norm.

In nominal H∞ synthesis, feedback controllers are computed via semidefinite programming (SDP) [1] or algebraic Riccati equations [2]. When structural constraints on the controller are added, the synthesis problem is no longer convex. Some of the problems above have even been recognized as NP-hard [3] or as rationally undecidable [4]. These mathematical concepts indicate the inherent difficulty of H∞ synthesis under constraints on the controller.

Even with structural constraints, the bounded real lemma may still be brought into play. The difference with customary H∞ synthesis is that it no longer produces linear matrix inequalities (LMIs), but bilinear matrix inequalities (BMIs), which are genuinely nonconvex. Optimization code for BMI problems is currently developed by several groups; see, e.g., [5]–[9], but it appears that the BMI approach runs into numerical difficulties even for problems of moderate size. This is mainly due to the presence of Lyapunov variables, whose number grows quadratically with the number of states.

Our present approach does not use the bounded real lemma and thereby avoids Lyapunov variables. This leads to moderate size optimization programs even for very large systems. In exchange, our cost functions are nonsmooth and require special optimization techniques, which we develop here. We evaluate the H∞ norm via the Hamiltonian bisection algorithm [10]–[12] and exploit it further to compute subgradients, which are then used to compute descent steps. Notice, however, that our method is not a pure frequency domain method. In fact, it allows both frequency domain and state space domain parameterizations of the unknown controller. This makes it a very flexible tool in a number of situations of practical interest.

Several iterative methods for reduced-order control have been proposed over recent years; see, for instance, [13]–[15]. In [13], a comparison among four of these methods on a large set of test problems is arranged, with the result that successive linearization [14], also known as the Frank and Wolfe (FW) algorithm [16], performed best. Whenever possible, we have therefore compared our new nonsmooth methods and the augmented Lagrangian algorithm in [17] and [18] with the FW method. The results are presented in the experimental section.

As far as comparison with existing methods is concerned, let us mention that for specific classes of plants, it is possible to compute reduced-order controllers without the use of optimization techniques. This has, for instance, been investigated in [19]–[21]. These approaches usually make strong additional assumptions like singularity, or hypotheses about unstable invariant zeros. In such cases it may then even be possible to assure global optimality of the computed controllers. Unfortunately, in these approaches, the order of the controller is not a priori known, and in particular, it is not possible to compute static controllers with this type of technique. In the absence of these additional assumptions, and in particular when structural constraints are imposed, synthesis via nonlinear optimization appears to be the most general approach to H∞ synthesis.

The structure of this paper is as follows. In Section II, we present the H∞ synthesis problem and give several motivating examples. Section III computes subgradients of the H∞ norm, which are then applied to closed-loop scenarios in Section IV. In Section V, we start to develop our first-order descent method, which is completed in Section VI. Section VI-G discusses practical aspects of the method, and the final Section VII presents a number of experiments to validate our approach.

Manuscript received August 31, 2004; revised May 18, 2005 and August 9, 2005. Recommended by Associate Editor M. V. Kothare.
P. Apkarian is with ONERA and Université Paul Sabatier, Toulouse, France (e-mail: apkarian@[Link]).
D. Noll is with the Université Paul Sabatier, Toulouse, France.
Digital Object Identifier 10.1109/TAC.2005.860290

NOTATION

Let M_{m,n} denote the space of m × n matrices, equipped with the corresponding scalar product ⟨X, Y⟩ = Tr(X^H Y), where X^H is the transconjugate of the matrix X and Tr X its trace. The space
0018-9286/$20.00 © 2006 IEEE

of Hermitian matrices is denoted ℍ_m. For Hermitian or symmetric matrices, X ≻ Y means that X − Y is positive definite, X ⪰ Y that X − Y is positive semidefinite. For ease of notation, we define the following sets of Hermitian matrices: 𝔹_m = {Y ∈ ℍ_m : Y ⪰ 0, Tr Y = 1}. Considering ν-tuples Y = (Y_1, …, Y_ν) of Hermitian matrices, we define the set 𝔹_m^ν = {Y : Y_s ∈ ℍ_m, Y_s ⪰ 0, Tr Y_1 + ⋯ + Tr Y_ν = 1}. For short, we shall use 𝔹 or 𝔹^ν when the dimension need not be specified.

We use the symbol λ_1 to denote the maximum eigenvalue of a symmetric or Hermitian matrix. We will use general notions from nonsmooth analysis covered by [22]. Notions on ε-subdifferentials and enlarged subdifferentials for spectral functions and their relationships are discussed at length in [23]–[25]. Unless stated otherwise, the symbol x designates a vector gathering (controller) decision variables and must not be confused with the plant state in Section II. In the notation x_k, the subscript k refers to the iteration index.

II. H∞ SYNTHESIS

The general setting of the H∞ synthesis problem is as follows. We consider a linear time-invariant plant described in standard form by the state-space equations

  ẋ = A x + B_1 w + B_2 u
  z = C_1 x + D_11 w + D_12 u        (1)
  y = C_2 x + D_21 w + D_22 u

where x is the state vector, u the vector of control inputs, w the vector of exogenous inputs, y the vector of measurements and z the controlled or performance vector. Without loss of generality, it is assumed throughout that D_22 = 0.

Let u = K(s) y be a dynamic output feedback control law for the open-loop plant (1), and let T(K) denote the closed-loop transfer function of the performance channel mapping w into z. Our aim is to compute K such that the following design requirements are met.

• Internal stability: For w = 0, the state vector of the closed-loop system (1) and (2) tends to zero as time goes to infinity.
• Performance: The H∞ norm ‖T(K)‖_∞ is minimized among all internally stabilizing K.

We assume that the controller K has the following frequency domain representation:

  K(s) = C_K (sI − A_K)^{−1} B_K + D_K        (2)

where k ≥ 0 is the order of the controller, and where the case k = 0 of a static controller K(s) = D_K is included. Often practical considerations dictate additional challenging structural constraints. For instance it may be desired to design low-order controllers k ≪ n, controllers with prescribed pattern, sparse controllers, decentralized controllers, observer-based controllers, PID control structures, synthesis on a finite set of transfer functions, and much else. Formally, the H∞ synthesis problem may then be represented as

  minimize   ‖T(K)‖_∞
  subject to K stabilizes (1)        (3)
             K ∈ 𝒦

where K ∈ 𝒦 represents a structural constraint on the controller (2) like one of the above.

Without the restriction K ∈ 𝒦, and under standard stabilizability and detectability conditions, it has become customary to synthesize K as follows. After substituting (2) into (1), the H∞ synthesis problem is transformed into a matrix inequality condition using the bounded real lemma [26]. Then, the projection lemma from [1] is used to eliminate the unknown controller data (A_K, B_K, C_K, D_K) from the cast, leaving an LMI problem, which may be solved by SDP. In a third step the controller state-space representation (2) is recovered.

This scenario changes dramatically as soon as constraints K ∈ 𝒦 are added. Then, the problem may no longer be transformed into an LMI or any other convex program, and alternative algorithmic strategies are required. The aim of this paper is to present and analyze one such alternative.

Example 1. Pure Stabilization: Often the first important step in controller synthesis (3) is to find a stabilizing controller K. Already at this stage the H∞ norm plays a prominent role, because of the well-known fact that under stabilizability and detectability, a linear time-invariant system is Lyapunov stable if and only if its H∞ norm is finite [27]. More specifically, under stabilizability and detectability assumptions, the static control law u = Ky stabilizes the plant

  ẋ = Ax + Bu,   y = Cx        (4)

if and only if the closed-loop transfer matrix T(K) = (sI − A − BKC)^{−1} has finite H∞ norm.

In order to construct a static stabilizing controller for an unstable open-loop system (4), the following procedure appears fairly natural. Suppose we are given an initial guess K_0, which leaves the closed-loop system unstable. Then, we pick α_0 > 0 such that the α_0-shifted H∞ norm of the closed-loop system is finite

  ‖T(K_0)‖_{α_0,∞} < ∞

where the shifted norm is given by ‖G‖_{α,∞} = sup_ω σ̄(G(α + jω)); see [28]. The problem of finding a stabilizing controller may now be addressed by an optimization program

  minimize ‖T(K)‖_{α,∞}        (5)

where the shift α is either kept fixed at the initial α_0, or is gradually decreased after each minimization step to accelerate the procedure. A stabilizing controller is obviously obtained when the shift reaches α = 0, but very often this happens already with the initial value α_0, so that shifting is not even necessary as a rule. Numerical tests for this method will be presented in Section VII.
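The norms in (5) are evaluated by the Hamiltonian bisection scheme of [10]–[12]. As an illustration of the principle, the following is a minimal numpy sketch for the unshifted norm of a strictly proper stable system (D = 0); the function names, the bracketing strategy and the tolerances are ours, not taken from the paper.

```python
import numpy as np

def has_imaginary_eig(A, B, C, gamma, tol=1e-9):
    """True iff gamma is a singular value of G(jw) = C (jwI - A)^{-1} B
    at some frequency w: this happens exactly when the Hamiltonian
    matrix below has an eigenvalue on the imaginary axis."""
    M = np.block([[A, (B @ B.T) / gamma],
                  [-(C.T @ C) / gamma, -A.T]])
    eigs = np.linalg.eigvals(M)
    scale = max(1.0, np.max(np.abs(eigs)))
    return bool(np.any(np.abs(eigs.real) < tol * scale))

def hinf_norm(A, B, C, rel_tol=1e-6):
    """H-infinity norm of G(s) = C (sI - A)^{-1} B by bisection.
    Assumes A is Hurwitz (stable) and D = 0."""
    lo = np.linalg.norm(C @ np.linalg.solve(-A, B), 2)  # gain at w = 0
    hi = 2.0 * lo + 1.0
    while has_imaginary_eig(A, B, C, hi):               # grow the bracket
        hi *= 2.0
    while hi - lo > rel_tol * hi:
        mid = 0.5 * (lo + hi)
        if has_imaginary_eig(A, B, C, mid):
            lo = mid                                    # norm >= mid
        else:
            hi = mid                                    # norm < mid
    return 0.5 * (lo + hi)

# Lightly damped second-order system: peak gain about 5.0252 near w = 0.99
A = np.array([[0.0, 1.0], [-1.0, -0.2]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
print(hinf_norm(A, B, C))
```

The shifted norm in (5) is obtained by the same test after replacing A with A − αI, which moves the evaluation axis to Re s = α.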

While we stop the optimization (5) as soon as a stabilizing K is reached, it may happen that for a fixed shift α, the method converges to a local minimum of (5), which fails to stabilize the closed-loop system. This is explained by the fact that (5), just like all the other methods in this paper, is a local optimization method in the sense that it guarantees convergence to a local minimum (or a critical point). If an unsatisfactory local minimum is reached, the only possibility is to do a restart with a new initial guess K_0, or switch to another method. Such a local convergence certificate may appear weak at first sight, but experience shows that local methods perform much better than global optimization techniques. Those may have stronger certificates, but run into numerical problems even for small problems. Indeed, our present approach is almost always successful even without restart. A similar comment applies to the pure stabilization method in [29].

Notice that pure stabilization is just a special case of the more general H∞ synthesis problem in Section II when we specialize the standard form to

  P(s): [A I B; I 0 0; C 0 0].        (6)

Example 2. Spectral Abscissa: In [30] and [31], Burke et al. present an alternative approach to computing static stabilizing controllers K. The authors propose to solve the nonsmooth optimization program

  minimize α(A + BKC)        (7)

where α is the spectral abscissa of a matrix, defined as

  α(A) = max{Re λ : λ eigenvalue of A}.

Unfortunately, this function is not even locally Lipschitz, which renders application of existing nonsmooth algorithms impossible. The authors have therefore developed a probabilistic algorithm which allows to treat problems like (7); see [31] and [32]. Since optimality tests and approximate subgradients for α are difficult to compute (see [29]), we prefer the use of (5) over (7). Numerical tests for (5) are presented in Section VII and show that this method is successful as a rule. Our own numerical tests with program (7) are published in [29].

The previous scenario covers the case of a static stabilizing controller K, but it is clear that stabilization problems including structural constraints K ∈ 𝒦 can be treated in exactly the same way. Several examples of classes 𝒦 will be presented in the sequel.

Example 3. Simultaneous Stabilization: Another instance of interest is the simultaneous stabilization problem, which can be cast as minimizing a finite family of closed-loop transfer functions. Formally, given the open-loop plants P_1, …, P_N, we consider the problem

  minimize max_{i=1,…,N} β_i ‖T_i(K)‖_{α_i,∞}

where β_i > 0 are appropriate weights, and the shifts α_i are chosen so that the initial guess K_0 renders the ith system stable after shifting by α_i. A lower bound for α_i is, therefore, the spectral abscissa (7) of the ith system. As before, the shifts are decreased after each H∞ norm minimization step and a simultaneously stabilizing K is obtained e.g., if α_i = 0 for all i. Even when some α_i > 0, the solution may produce a simultaneously stabilizing K.

Example 4. System Reduction: A technique of considerable importance is system reduction. It is used by practitioners whenever an open-loop system of large order n is difficult to control. Let P denote such a large size open-loop plant, and suppose a decomposition P = P_u + P_s into an unstable and a stable part is available. Then, we may consider the problem

  minimize ‖P_s − Q‖        (8)

where Q ranges over a prespecified class of stable systems of reduced order k ≪ n, and where some norm criterion ‖·‖ is used to evaluate the mismatch between nominal and reduced systems. If ‖·‖ represents the Hankel norm, an explicit expression for the solution is available [33]. But it may be preferable to use other criteria like the H∞ norm, a problem which then falls within the class of problems considered in this work. Once a solution Q of (8) is obtained, the new system P_u + Q, while easier to control, may be expected to have characteristics similar to those of the original system.

III. SUBDIFFERENTIAL OF THE H∞ NORM

In this section, we start characterizing the subdifferential of the H∞ norm, and derive expressions for the Clarke subdifferential of several nonconvex composite functions f = ‖·‖_∞ ∘ F, where F is a smooth operator defined on some ℝ^n with values in the space of stable matrix transfer functions.

Consider the H∞ norm of a nonzero transfer matrix function G(s)

  ‖G‖_∞ = sup_{ω∈ℝ} σ̄(G(jω))

where G is stable and σ̄(G(jω)) is the maximum singular value of G(jω). Suppose ‖G‖_∞ is attained at some frequency ω, where the case ω = ∞ is allowed. Let G(jω) = UΣV^H be a singular value decomposition. Pick u the first column of U, v

the first column of V, that is, G(jω)v = ‖G‖_∞ u. Then, the linear functional φ defined as

  φ(H) = Re(u^H H(jω) v)

is continuous on the space of stable transfer functions and is a subgradient of ‖·‖_∞ at G [28]. More generally, assume that the columns of Q form an orthonormal basis of the eigenspace of G(jω)^H G(jω) associated with the largest eigenvalue λ_1(G(jω)^H G(jω)) = ‖G‖_∞^2, and that the columns of U form an orthonormal basis of the eigenspace of G(jω) G(jω)^H associated with the same eigenvalue. Then for all complex Hermitian matrices Y ∈ 𝔹,

  φ_Y(H) = ‖G‖_∞^{−1} Re Tr(Y Q^H G(jω)^H H(jω) Q)        (9)

is a subgradient of ‖·‖_∞ at G. Finally, with G rational and assuming that there exist finitely many frequencies ω_1, …, ω_ν where the supremum is attained, all subgradients of ‖·‖_∞ at G are precisely of the form

  φ_Y(H) = ‖G‖_∞^{−1} Σ_{s=1}^{ν} Re Tr(Y_s Q_s^H G(jω_s)^H H(jω_s) Q_s)

where the columns of Q_s form an orthonormal basis of the eigenspace of G(jω_s)^H G(jω_s) associated with the leading eigenvalue ‖G‖_∞^2, and where Y = (Y_1, …, Y_ν) ∈ 𝔹^ν. See [22, Prop. 2.3.12, Th. 2.8.2] and [29] for this.

Suppose now we have a smooth operator F, mapping ℝ^n into the space of stable transfer functions. Then the composite function f = ‖·‖_∞ ∘ F is Clarke subdifferentiable at x with

  ∂f(x) = F′(x)* ∂‖·‖_∞(F(x))        (10)

where ∂‖·‖_∞(F(x)) is the subdifferential of the H∞ norm obtained above, and where F′(x)* is the adjoint of F′(x), mapping the dual of the space of stable transfer functions into ℝ^n, where ℝ^n is identified with its dual here. In the sequel, we will compute this adjoint for special classes of closed-loop transfer functions. Suitable chain rules covering this case are for instance given in [22, Sec. 2.3].

IV. CLARKE SUBDIFFERENTIALS IN CLOSED-LOOP

Given a stabilizing controller K and a plant P with the usual partition

  P = [P_11 P_12; P_21 P_22]

the closed-loop transfer function is obtained as

  T(K) = P_11 + P_12 K (I − P_22 K)^{−1} P_21

where the state-space data of P_11, P_12, P_21 and P_22 are given in (1) and the dependence on s is omitted for brevity. Our aim is to compute the subdifferential of f(K) = ‖T(K)‖_∞ at K. We first notice that the derivative of T(·) at K is

  T′(K) δK = G_12 δK G_21

where δK is an element of the same matrix space as K and with the definitions

  G_12 = P_12 (I − K P_22)^{−1},   G_21 = (I − P_22 K)^{−1} P_21

and the closed-loop state-space data obtained by combining (1) and (2). Note that numerical stability requires that the transfer functions G_12 and G_21 be computed through state-space realizations.

Now, let φ_Y be a subgradient of ‖·‖_∞ at T(K) of the form (9), specified by Y ∈ 𝔹 and with ‖T(K)‖_∞ attained at frequency ω. According to the chain rule, the subgradients of f at K are of the form Φ_Y = T′(K)* φ_Y, where the adjoint acts on δK through

  ⟨Φ_Y, δK⟩ = φ_Y(T′(K) δK)
            = ‖T(K)‖_∞^{−1} Re Tr(Y Q^H T(jω)^H G_12(jω) δK G_21(jω) Q).        (11)

In consequence, for a static controller K, the Clarke subdifferential of f at K consists of all subgradients Φ_Y of the form

  Φ_Y = ‖T(K)‖_∞^{−1} Re[ G_21(jω) Q Y Q^H T(jω)^H G_12(jω) ]^T        (12)

where Y ∈ 𝔹. Recall that Φ_Y is now an element of the same matrix space as K and acts on test vectors δK through ⟨Φ_Y, δK⟩ = Tr(Φ_Y^T δK).
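The SVD formulas above rest on the classical fact that a simple largest singular value is differentiable, with derivative Re(u^H M′ v) along any smooth matrix path. The following numpy check (our own illustration, with made-up matrices M0, M1; it is not code from the paper) verifies this scalar version of (9) against a finite difference.

```python
import numpy as np

# For a smooth matrix family M(t) with a simple largest singular value,
# d/dt sigma_max(M(t)) = Re( u^H M'(t) v ), where u, v are the leading
# left/right singular vectors of M(t).

M0 = np.array([[2.0 + 1.0j, 0.1], [0.0, 0.5]])   # fixed base matrix
M1 = np.array([[0.3, 0.1], [0.2, 0.4]])          # fixed direction

def sigma_max(t):
    return np.linalg.svd(M0 + t * M1, compute_uv=False)[0]

t0 = 0.3
U, S, Vh = np.linalg.svd(M0 + t0 * M1)
u, v = U[:, 0], Vh.conj().T[:, 0]                # leading singular pair
deriv = np.real(u.conj() @ M1 @ v)               # analytic derivative

h = 1e-6
fd = (sigma_max(t0 + h) - sigma_max(t0 - h)) / (2 * h)
print(deriv, fd)
```

The common phase ambiguity of the pair (u, v) cancels in Re(u^H M1 v), which is why the formula is well defined; when the top singular value is not simple, only the set of subgradients (9) survives, which is the point of the construction above.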

This formula is easily adapted if the norm ‖T(K)‖_∞ is attained at a finite number of frequencies ω_1, …, ω_ν. In this more general situation, subgradients of f at K are of the form

  Φ_Y = ‖T(K)‖_∞^{−1} Σ_{s=1}^{ν} Re[ G_21(jω_s) Q_s Y_s Q_s^H T(jω_s)^H G_12(jω_s) ]^T        (13)

where Y = (Y_1, …, Y_ν) ∈ 𝔹^ν.

At this stage, it is important to stress that expressions (11)–(13) are general and can accommodate any problem discussed in previous sections. Later, we resume and expand this list by considering more examples of practical interest.

Example 5. Dynamic Controllers: Assume now that the controller is dynamic as in (2). The subgradient set is again obtained via formula (13) by viewing the dynamic controller as the static gain

  K̃ = [A_K B_K; C_K D_K]        (14)

for the augmented plant obtained from (1) through the substitutions

  A → [A 0; 0 0_k],  B_1 → [B_1; 0],  B_2 → [0 B_2; I_k 0],
  C_1 → [C_1 0],  C_2 → [0 I_k; C_2 0],  D_12 → [0 D_12],  D_21 → [0; D_21].

The entire Clarke subdifferential ∂f(K̃) is then described by the set of subgradients Φ_Y with Y ∈ 𝔹^ν, where Φ_Y is derived through formulas (12) or (13).

Example 6. Structured Controllers: In practice, it is sometimes required that some entries in the controller gain K be put to zero, while the others may be freely assigned. This is the case in decentralized control, where the controller must enjoy a block-diagonal structure. Consider a pattern matrix E with entries E_ij ∈ {0, 1}, where E_ij = 0 means that the controller gain K_ij must be zero, whereas E_ij = 1 means that K_ij can be freely assigned. The Clarke subdifferential of f at K is then of the form

  ∂f(K) = {E ∘ Φ_Y : Y ∈ 𝔹^ν}

where Φ_Y is as in (13) and where ∘ denotes the entry-wise Hadamard or Schur product [34].

Example 7. PID Controllers: PID control is one of the most classical approaches in control system design. The controller is generally written as

  K(s) = K_P + (1/s) K_I + (s/(1 + εs)) K_D

where K_P, K_I and K_D are matrix static gains to be computed, and ε is a small positive scalar. Using the general formula (11), the subgradients with respect to (K_P, K_I, K_D) at K are obtained through

  Φ_Y^{K_P} = ‖T(K)‖_∞^{−1} Σ_s Re[ G_21(jω_s) Q_s Y_s Q_s^H T(jω_s)^H G_12(jω_s) ]^T
  Φ_Y^{K_I} = ‖T(K)‖_∞^{−1} Σ_s Re[ (jω_s)^{−1} G_21(jω_s) Q_s Y_s Q_s^H T(jω_s)^H G_12(jω_s) ]^T
  Φ_Y^{K_D} = ‖T(K)‖_∞^{−1} Σ_s Re[ (jω_s/(1 + jεω_s)) G_21(jω_s) Q_s Y_s Q_s^H T(jω_s)^H G_12(jω_s) ]^T

where as before Y ∈ 𝔹^ν.

The previous approach could be generalized by making ε an additional design parameter. Then a constraint ε > 0 should be added. Notice also that the above formula readily extends to arbitrary basis functions

  K(s) = Σ_i f_i(s) K_i

where the K_i's are the design variables.

Example 8. Matrix Fraction Representations: An alternative representation of controllers is via matrix fraction descriptions. For instance, the left matrix fraction representation is given as

  K(s) = D(s)^{−1} N(s)

with matrix factors N(s) and D(s). Now, N and D are the design variables. As before, it is immediate to show that partial subgradients with respect to N and D are given as

  Φ_Y^N = ‖T(K)‖_∞^{−1} Σ_s Re[ G_21 Q_s Y_s Q_s^H T^H G_12 D^{−1} ]^T
  Φ_Y^D = −‖T(K)‖_∞^{−1} Σ_s Re[ K G_21 Q_s Y_s Q_s^H T^H G_12 D^{−1} ]^T

respectively, all transfer functions being evaluated at s = jω_s.

Example 9. Multiple Performance Channels: Practical specifications often impose that several closed-loop channels w_k → z_k be minimized simultaneously. One way to address multiobjective optimization of this type is to solve a program of the form

  minimize max_{k=1,…,q} ‖T_k(K)‖_∞

where T_k(K) is the kth performance specification to be optimized. Since the maximum of a finite number of maximum eigenvalue functions is itself a maximum eigenvalue function of a block diagonal operator, the Clarke subgradients could be obtained directly from (13). When the usual max formula is used, the result is the same, i.e., subgradients are of the form

  Σ_k τ_k Φ_Y^{(k)},   τ_k ≥ 0,   Σ_k τ_k = 1

where the sum extends over those channels k which are active at K, i.e., ‖T_k(K)‖_∞ = max_j ‖T_j(K)‖_∞, and the Φ_Y^{(k)} are as specified in (13).

Before going further, it is worth mentioning that our methodology carries over to a wide range of controller structures of practical interest. This is in particular the case when the structural constraint is of the form K ∈ 𝒦 = {K(x) : x ∈ ℝ^n}, where K(·) is a suitable differentiable parametrization of the class 𝒦. This includes for instance observer-based controllers, feed-forward compensators, controllers defined through Youla parameterizations and much else.

V. STEEPEST DESCENT METHOD

Nonsmooth techniques have been used before in algorithms for controller synthesis. For instance, E. Polak and co-workers have proposed a variety of techniques suited for eigenvalue or singular-value optimization and for extensions to the semi-infinite case, covering in particular the H∞ norm (see [35], [36], and the citations given there). Another reference is [28], where the authors exploit the Youla parameterization via convex nondifferentiable analysis to derive the cutting plane and ellipsoid algorithms.

Let us consider the problem of minimizing f(x) = ‖F(x)‖_∞, where x regroups the controller data, referred to as K in the previous section, and where F maps smoothly into the space of stable transfer functions. We write F(x, s) or F(x, jω) when the complex argument of F needs to be specified.

A necessary condition for optimality is 0 ∈ ∂f(x). It is therefore reasonable to consider the program

  min {‖Φ_Y‖ : Φ_Y ∈ ∂f(x)}        (15)

which either shows 0 ∈ ∂f(x), or produces the direction of steepest descent at x if 0 ∉ ∂f(x), and where the Φ_Y are as in (13). If we vectorize Y, then we may represent Φ_Y by a matrix-vector product, vec(Φ_Y) = P vec(Y), with a suitable matrix P. Program (15) is then equivalent to the following SDP:

  minimize   t
  subject to [t  (P vec Y)^T; P vec Y  tI] ⪰ 0,   Y ∈ 𝔹^ν        (16)

where Y ∈ 𝔹^ν encodes the constraints Y_s ⪰ 0 and Σ_s Tr Y_s = 1. The direction of steepest descent at x is then obtained as d = −Φ̄/‖Φ̄‖, where Φ̄ is the solution of (16) with Φ̄ ≠ 0. This suggests the following.

Steepest Descent Method for the H∞-Norm:
1) If 0 ∈ ∂f(x), stop. Otherwise:
2) Solve (16) and compute the direction of steepest descent d at x.
3) Perform a line search and find a descent step x⁺ = x + td.
4) Replace x by x⁺ and go back to step 1).

The drawback of this approach is that it may fail to converge due to the nonsmoothness of f. We believe that a descent method should at least give the weak convergence certificate that accumulation points of the sequence of iterates are critical. This is not guaranteed by the above scheme. The reason is that the steepest descent direction at x does not depend continuously on x. This is why modifications of the steepest descent scheme are discussed in the next section.

Remark. Spectral Abscissa Versus H∞ Norm: Notice that the tangent program (16) is convenient because it leads to relatively small size SDPs. Indeed, the matrices Y_s, s = 1, …, ν, are of the size of the multiplicity of σ̄(F(x, jω_s)), and our experiments indicate that dim Y in (16) rarely exceeds 30. The situation is very different for the spectral abscissa (7). In [29], we have derived a tangent program for (7). The difficulty is that Lyapunov variables re-enter the scene. Indeed, x̄ is a local minimum of the composite function α ∘ A like in (7) with value ᾱ = α(A(x̄)) if and only if (x̄, P̄, ᾱ) is a local minimum of the optimization program

  minimize   α
  subject to A(x)^T P + P A(x) ⪯ 2αP
             P ⪰ δI

for some small fixed δ > 0. An optimality test for x̄ is therefore derived from an optimality test for this program, as shown in [29]. This leads to a SDP with unknown variable of dim n(n + 1)/2, which may be prohibitively large. This is one of the reasons why our present approach privileges the use of the H∞ norm (5) over (7).

VI. FIRST-ORDER DESCENT METHOD

In this section, we devise a first-order algorithm for composite functions of the H∞ norm. Along with F(x) we consider the symmetrized operators 𝔉(x, ω) = F(x, jω)^H F(x, jω), respectively 𝔉(x, ∞) for the case ω = ∞. We represent f(x) = ‖F(x)‖_∞^2 as

  f(x) = sup_{ω ∈ [0, ∞]} λ_1(𝔉(x, ω)).        (17)

Notice that for fixed ω, 𝔉(·, ω) is a smooth operator into the space of Hermitian matrices ℍ_m, while λ_1 is the maximum eigenvalue function. Similar techniques could be applied to broader classes with a structure like (17). Deriving the method will require three steps, which

we regroup into subsections. We start with the important special case f = λ_1 ∘ 𝔉, where 𝔉 maps smoothly into ℍ_m.

A. Preparation

Function (17) is subject to two sources of nonsmoothness: the nonsmooth character of the maximum eigenvalue function λ_1, and the nonsmoothness introduced by the supremum over ω, which in the case of (17) is even infinite. Each individual function x ↦ λ_1(𝔉(x, ω)) will be analytic at x if the multiplicity of λ_1(𝔉(x, ω)) is one, but nonsmoothness needs to be taken into account as soon as eigenvalues coalesce.

From a practical point of view it is reasonable to make the following additional hypothesis.

H) The maximum (17) is always attained on a finite set of frequencies. This set is denoted by Ω(x) and may contain ω = ∞.

Assumption H) is, for instance, satisfied when the multiplicity of λ_1(𝔉(x, ω)) at the active frequencies is 1, as ω ↦ λ_1(𝔉(x, ω)) is then analytic in typical control applications.

Let us introduce some more notation. For a finite set of frequencies Ω = {ω_1, …, ω_ν}, we define 𝔉(x, Ω) as the block diagonal operator

  𝔉(x, Ω) = diag(𝔉(x, ω_1), …, 𝔉(x, ω_ν)).

Notice that λ_1(𝔉(x, Ω)) = f(x) as soon as Ω ⊇ Ω(x). Next recall the definition of the ε-subdifferential of the maximum eigenvalue function [23]

  ∂_ε λ_1(X) = {Z ∈ ℍ_m : Z ⪰ 0, Tr Z = 1, ⟨X, Z⟩ ≥ λ_1(X) − ε}

which is an important analytical tool in nonsmooth analysis. Since ∂_ε λ_1(X) is difficult to compute, we follow Cullum et al. [23] and Oustry [25] and introduce a modification Δ_δ λ_1(X) of ∂_ε λ_1(X), called the δ-enlarged subdifferential for the maximum eigenvalue function. For X ∈ ℍ_m and δ > 0 let r be the index such that

  λ_1(X) ≥ ⋯ ≥ λ_r(X) > λ_1(X) − δ ≥ λ_{r+1}(X) ≥ ⋯ ≥ λ_m(X).

The index r is also called the δ-multiplicity of λ_1(X).

Let Q_r be a m × r matrix whose columns form an orthonormal basis of the invariant subspace of X associated with the first r eigenvalues. Then we define

  Δ_δ λ_1(X) = {Q_r Y Q_r^H : Y ∈ ℍ_r, Y ⪰ 0, Tr Y = 1}.

By construction ∂λ_1(X) ⊆ Δ_δ λ_1(X), so Δ_δ λ_1(X) is an enlargement of ∂λ_1(X) and an inner approximation of the ε-subdifferential (see also [25]). The gap associated with the choice δ is ε(δ) = λ_1(X) − λ_r(X) < δ. If r̄ is the multiplicity of λ_1(X), then choosing δ small enough gives r = r̄. In this case, we have Δ_δ λ_1(X) = ∂λ_1(X).

The following is an important step toward the analysis of (17). Consider a differentiable mapping 𝔉: ℝ^n → ℍ_m. We extend ∂_ε λ_1 and the enlarged subdifferential Δ_δ λ_1 to the composite function f = λ_1 ∘ 𝔉 by setting

  ∂_ε f(x) = 𝔉′(x)* ∂_ε λ_1(𝔉(x))

and, similarly

  Δ_δ f(x) = 𝔉′(x)* Δ_δ λ_1(𝔉(x)).

Here, 𝔉′(x)* is the adjoint of the linear operator 𝔉′(x). Finally, going one step further, this approach allows us to consider ∂_ε f and Δ_δ f for (17), applied to the variable x through the block diagonal operator 𝔉(x, Ω).

B. Descent Step Generator

In this section, we discuss a very simple mechanism which generates descent steps in such a way that the following weak form of convergence can be guaranteed: every accumulation point of the sequence of iterates is a stationary point.

Let f: ℝ^n → ℝ be a locally Lipschitz function, and let ∂f(x) denote its Clarke subdifferential [22]. Suppose we can exhibit a mechanism x⁺ = S(x), the descent step generator, such that the following rules are satisfied.

i) Whenever 0 ∉ ∂f(x), then f(S(x)) < f(x).
ii) When 0 ∉ ∂f(x̄), then there exists a neighborhood U of x̄ and some η > 0 such that f(S(x)) ≤ f(x) − η for every x ∈ U.

While i) simply means that S(x) is a descent step away from x, we can interpret ii) as some weak form of continuity of S. Indeed, when the mapping S describing descent is continuous, then i) implies ii) without further work. Axiom ii) is weaker than asking S to be continuous. Clearly, ii) always implies i).

Example: In order to understand the idea behind axiom ii), consider a C¹-function f and let S(x) denote the steepest descent step at x, obtained by an Armijo line search. If we define formally S(x) = x − t(x)∇f(x), where t(x) is the smallest step satisfying an Armijo rule, then S turns out continuous, but it is clear that in practice we would accept any step satisfying the Armijo condition, without insisting on a continuous dependence of t(x) on x. In that case S will not be continuous, but property ii) will still hold true.

Clearly the situation we have in mind is when f is nonsmooth, so that a steepest descent step in tandem with Armijo search would typically fail even when S was continuous. This is explained by the fact that under nonsmoothness, defining S along the lines above would miss axiom ii), because the steepest descent direction behaves very discontinuously. Indeed, examples where this happens are easily produced.

Proposition 1: Suppose the descent step generator S for f satisfies axioms i) and ii). Let (x_k) be a sequence of iterates defined as x_{k+1} = S(x_k). Then every accumulation point x̄ of (x_k) is a critical point, that is, satisfies 0 ∈ ∂f(x̄).

Proof: Let (x_k)_{k∈K} be an infinite subsequence such that x_k → x̄, k ∈ K. Then by monotonicity, f(x_k) → f(x̄). That means

  f(x_k) − f(x_{k+1}) → 0.

Now suppose, contrary to our claim, that 0 ∉ ∂f(x̄), and use axiom ii) at the limit point x̄. There exist a neighborhood U of x̄ and η > 0 such that f(S(x)) ≤ f(x) − η for every x ∈ U. Since x_k ∈ U for k ∈ K large enough, and since x_{k+1} = S(x_k), we should have f(x_{k+1}) ≤ f(x_k) − η for k ∈ K large enough, a contradiction.
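The following toy sketch (our own illustration, not the paper's algorithm) shows the kind of descent step generator the example above has in mind: plain Armijo backtracking on the maximum eigenvalue f(x) = λ_1(A0 + x A1), here equal to max(1 + x, −x). The run stalls exactly at the nonsmooth minimizer x = −0.5, which is the failure mode motivating the enlarged subdifferential; the helpers f, grad and descent_step are hypothetical names.

```python
import numpy as np

A0 = np.array([[1.0, 0.0], [0.0, 0.0]])
A1 = np.array([[1.0, 0.0], [0.0, -1.0]])

def f(x):
    return np.linalg.eigvalsh(A0 + x * A1)[-1]   # largest eigenvalue

def grad(x):
    # Valid gradient only when lambda_1 is simple: v^T A1 v for the
    # top eigenvector v; at coalescing eigenvalues it is just one
    # (arbitrary) generalized gradient.
    w, V = np.linalg.eigh(A0 + x * A1)
    v = V[:, -1]
    return float(v @ A1 @ v)

def descent_step(x, c=0.1, max_halvings=30):
    """Armijo backtracking; returns x unchanged when no sufficient
    decrease is found, signalling a (possibly nonsmooth) critical point."""
    g = grad(x)
    t = 1.0
    for _ in range(max_halvings):
        if f(x - t * g) <= f(x) - c * t * abs(g):
            return x - t * g
        t *= 0.5
    return x

x = 0.0
vals = [f(x)]
for _ in range(20):
    x = descent_step(x)
    vals.append(f(x))
print(x, vals[-1])
```

On smooth stretches this generator satisfies axiom i); it is the discontinuity of grad at the kink that makes axiom ii) fail for plain steepest descent, exactly as argued in the text.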

C. Eigenvalue Optimization

How can we define a descent step generator with properties i) and ii) for a maximum eigenvalue function ? Suppose we are at a point where . Let the eigenvalues of be arranged into groups

where and where are the group leaders. Consequently, eigenvalue gaps occur between and . Let be an orthonormal basis of the eigenspace associated with the first block, an orthonormal basis containing associated with the first two blocks, and so on. At , we compute the quantities

and keep those where . Notice that , where cuts into the th eigenvalue gap, that is . Put differently, the -multiplicity of is . Moreover, and eventually. We compute the quantity

(18)

Now, we use the following.
Lemma 1: Let and . Let be the leader of a group of eigenvalues of such that . Let be the direction of steepest enlarged descent, that is with

where the columns of are an orthonormal basis of the invariant subspace of associated with the eigenvalues up to . Then there exists a descent step away from in direction , which decreases the value of by at least

Here, depends on and in particular continuously on . The line search required to compute this step is finite.
Proof: In the case of an affine operator , Oustry [25, Th. 5] shows that may be decreased by the quantity

where is the linear part of . Here is even independent of . Moreover, is computed by a line search which terminates after a finite number of steps; see [25, Sec. 3.2].
In [37] and [38], this result is generalized to nonconvex maximum eigenvalue functions , with a constant now depending on the Lipschitz constant of on a bounded region around , like for instance with some fixed . In the nonconvex case, the line search procedure locating is more complicated than for , but finite termination is still guaranteed (cf. [37, Sec. 3.7]). The constant may be computed via the formulas (15), (19), and (20) of that reference.
Following the lead of [37], it is now clear how to obtain our step . Choose so that the maximum (18) is attained and take . Then, by Lemma 1, gives a guaranteed decrease of

where the constant depends continuously on as argued previously. What remains to be checked is the following.
Lemma 2: This choice of and guarantees property ii).
Proof: Consider a sequence . We need to compare to . Now observe that due to the continuity of the eigenvalue functions , every eigenvalue gap at is also an eigenvalue gap at as soon as is sufficiently close to . On the other hand, may (and will) have many more gaps than . But notice that the maximum (18) is over all eigenvalue gaps. So each gap in will occur in the computation of . More precisely, it will be approximated by some of the gaps considered in . Put differently, for the th eigenvalue gap of we have , where is the index of the eigenvalue gap of corresponding to the th gap of . Here, and rely on the fact that gaps at remain gaps at . Since calls for all the up to the th gap at , and since these converge to the corresponding basis regrouping the gap, the result follows.
That means , so we can assume from some index on. This in turn implies

as soon as is close enough to , proving property ii).
The method outlined in this section gives a convergence certificate because all eigenvalue gaps are included in the computation of (18). This may seem inconvenient for very large size matrices . If we decide to truncate and consider only some of the eigenvalue gaps among the largest eigenvalues, the theoretical convergence properties of the method are weaker, even though convergence may still be guaranteed, e.g., when is convex (see [37]). Notice however that even for large the quantity in (18) may be computed fairly reliably by considering for some of the first only. As rather quickly, the higher as a rule do not contribute to the computation of . Also notice that since the sequence is monotone, computing may often be avoided, for instance when or when the current best value is located at index .
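Section VI-C organizes the eigenvalues of a symmetric matrix into blocks separated by gaps, with the block leaders and multiplicities driving the maximum in (18). A minimal Python sketch of that grouping step (the threshold `delta` and the matrix are ours, purely illustrative; this is not the paper's implementation):

```python
import numpy as np

def eig_groups(A, delta):
    """Group the eigenvalues of a symmetric matrix A into blocks
    separated by gaps larger than delta, largest eigenvalues first.
    Returns a list of (leader, multiplicity) pairs; the resulting gap
    structure is what the maximum in (18) ranges over."""
    w = np.linalg.eigvalsh(A)[::-1]                  # descending eigenvalues
    groups, start = [], 0
    for j in range(1, len(w) + 1):
        # close a block at position j-1 when a gap > delta follows it
        if j == len(w) or w[j - 1] - w[j] > delta:
            groups.append((float(w[start]), j - start))  # (leader, block size)
            start = j
    return groups

A = np.diag([3.0, 3.0, 2.9, 1.0, 0.2])
print(eig_groups(A, delta=0.5))   # three gap-separated blocks, sizes 3, 1, 1
```

With `delta = 0.5`, the eigenvalues 3.0, 3.0, 2.9 fall into one block of delta-multiplicity 3, mirroring how the epsilon-enlargement merges nearly coalescent eigenvalues.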
APKARIAN AND NOLL: NONSMOOTH H∞ SYNTHESIS 79

D. Descent by a Local Model

In this section, we present an alternative way to obtain a descent step generator for the maximum eigenvalue function . We start by constructing an intermediate function , which serves as an optimality test, and second, will allow us to quantify decrease. In this section our method follows the line of [36, Th. 2.1.6].
Let . Let be the eigenvalues of without repetitions. That means in our old terminology, where we agree that there are distinct eigenvalues. For some fixed we define the criticality measure

(19)

Here, has columns which form an orthonormal basis of the invariant subspace of associated with the first eigenvalues. It is immediately clear that , because putting gives the upper bound .
Lemma 3: We have if and only if .
Proof: The easiest way to see this is to swap max and min in (19). This requires that we first replace the inner double supremum in (19) by a double supremum over the convex hull of , a manoeuvre which does not change the value . Then we use Fenchel duality to interchange the inner (double) supremum and the outer infimum, which goes again without changing the value. The inner infimum is unconstrained and may be computed explicitly. For fixed and convex coefficients it is attained at . Substituting this back into (19) leaves the dual expression

(20)

which the reader recognizes as a semidefinite program. Since for , equality is only possible when and, hence

However, the quantity on the right-hand side is only zero when , i.e., when . The latter follows readily from the representation

of the subdifferential .
As a byproduct of the proof via duality we have the following.
Corollary 1: The infimum in (19) is attained at

(21)

where is the solution of the dual program (20). As soon as , is a direction of descent of at .
In order to construct our descent step generator, we need to establish two additional properties of . First, we need to show that decrease at may be quantified with the help of . Second, we have to establish continuity of .
Lemma 4: The function is continuous.
Proof: Notice that by the dual formula (20), is the supremum of an infinite family of functions of the form for , where , and where depends continuously on . Notice that . This shows that is the supremum of a family of continuous functions, indexed by . is therefore lower semicontinuous. It remains to prove that is also upper semicontinuous. Let such that . We have to show . We use the representation (20). Let and choose and such that

where . Passing to a subsequence if necessary, we may assume that each has exactly eigenvalue gaps, which remain in the same places . That is now . Passing to yet another subsequence, we may assume and , where the limiting elements are all of the same types and dimensions as the elements at stage .
However, are no longer the distinct eigenvalues of , because some of the distinct may coalesce in the limit . Suppose for instance that , so that the blocks coalesce in the limit, forming a new larger eigenvalue block of

which is represented by a certain . Suppose there are block leaders at . For each of these , we define

then

where . This represents the limiting linear term in (20) in a new form suited for the dual representation of . We next have to treat the norm square term arising in (20) in much the same way. Notice first that the together form a nested sequence of basis vectors adding up to
an orthonormal basis of eigenvectors of . Passing to the limit gives an orthonormal basis of eigenvectors of . We regroup it according to the eigenvalue gaps of and rename the corresponding parts . All that remains to do is to rewrite the limit in the form for certain . This is done by writing

whenever have to be regrouped in the same , and where is the last among the old indices subsumed into the new index , so that . Then

is as required, because , hence , while is clear.
The argument shows that , which is to say that is now of the form required to be admitted to the supremum (20) defining . In other words, , and this is what we had to prove.
Notice that [36, Th. 2.1.6 (e)] is obtained as a special case of Lemma 4 if the operator is specialized to a diagonal matrix. The extension to multiple eigenvalues is possible because all the eigenvalues are taken into account simultaneously. For large matrices, this may again seem inconvenient since it will lead to large SDPs in (20).
Lemma 5: The mapping defined by (21) is continuous.
The proof follows along the lines of the previous Lemma and is therefore omitted.
Let us now see how may be used to quantify descent of at . Assume , so that . Using the directional derivative of at in direction , we obtain

which follows readily from the primal formula (19) for if we use the fact that . In consequence we have the following.
Lemma 6: Let be fixed. There exists such that for every and all sufficiently small.
Proof: By the definition of the directional derivative and the fact that , we have

(22)

for some and all . Since and are continuous, we can find a neighborhood of such that for all and every . This proves the claim.
In order to construct , we follow [36, 3c, p. 223] and define , where

The supremum is over a nonempty set because of (22), hence . Let be the integer where the supremum is attained. Let us check property ii) with as in Lemma 6 and . Let . Since satisfies (22), we have by Lemma 6. Therefore, is admitted for the supremum , which implies . Therefore

, when , while we assume that is chosen sufficiently small to assure for every . This proves ii) for the previous choice of . Other choices of , for instance [39], are possible.

E. Semi-Infinite Case

Our last step is now to address the semi-infinite case. We are in the situation (17). Suppose that for finite we already dispose of a descent step generator for , satisfying axioms i) and ii). This is naturally the case when the are maximum eigenvalue functions, because a finite maximum of maximum eigenvalue functions is itself a maximum eigenvalue function. So here we obtain as in Sections VI-C or D. Suppose now that we can specify a sequence of finite sets such that the following conditions are satisfied:
iii) if , then ;
iv) for every and ;
Notice that both axioms guarantee that the approximation improves with growing . Axiom iii) tells that as soon as , descent steps may be eventually generated by using approximations of . Axiom iv) is simply saying that approximations get better as increases, and that this happens uniformly on small neighborhoods of each .
Using these axioms, let us construct a descent step generator for . We proceed as follows.
Semi-Infinite Descent Step Generator:
1) If , then and return. Otherwise, put counter and continue.
2) At counter , if , then increase until .
3) At counter with , compute the descent step for at . Let be such that for every , as guaranteed by axiom ii) for .
4) Compute .
5) If , then let and stop. Otherwise increase by one and go back to step 3).
We have to make sure that this scheme is well-defined and introduces a descent step generator for the infinite maximum function .
Lemma 7: The descent step generator for is well-defined and satisfies axioms i) and ii). Moreover, each of the above loops ends after a finite number of iterations.
Proof: Notice first that if , then descent around is possible. Since as , we can also decrease around for sufficiently large. So step 2) ends with a descent step of at after a finite number of trials. Moreover, this remains so for the following counters , because .
Next, observe that by axiom , while by axiom iii). That means for sufficiently large, i.e., the procedure ends in step 4) after a finite number of updates . It remains to check that is as required.
Suppose for every . Then, , hence . This proves axiom ii).
The procedure is sufficiently flexible to accommodate a problem-oriented step generation. We may adapt the choice of to the structure of , and what is more important, to the local behavior of around the current .

F. First-Order Algorithm for the H∞ Norm

We are now ready to present our algorithm, which follows the lines of the previous sections. We start with a version based on Sections VI-C and D.
First-Order Algorithm for the H∞ Norm: Variant I:
0) Fix and choose initial point .
1) Given choose a finite set containing .
2) Compute for according to (18). If , then stop, because . Otherwise
3) Use a line search to find a step such that the predicted decrease satisfies . Compare with the actual decrease . If , accept , put and go to step 5).
4) If , then reject and add nodes to to obtain the finer mesh . Increase counter by one and go back to step 2).
5) Increase counter by one and go back to step 1).
In steps 3) and 4) of the algorithm we recognize the mechanism of the previous section, which obtains a descent step generator for the semi-infinite function by using those of the finite models . We accept the step for if it exceeds a small fraction of the descent predicted by the finite model . This is essentially the same procedure as in Section VI-E.
Next comes the version based on Section VI-D in tandem with Section VI-E. The semi-infinite case is handled in exactly the same fashion, but the descent step generators are different.
First-Order Algorithm for the H∞ Norm: Variant II:
0) Fix and and choose initial point .
1) Given choose a finite set containing .
2) Compute the value and the solution of the SDP (19). If , then stop because . Otherwise compute the descent direction for at according to (21).
3) Using a line search, find a step such that the predicted decrease satisfies . Compare with the actual decrease . If , accept , put and go to step 5).
4) If , then reject and add nodes to to obtain the finer mesh . Increase counter by one and go back to step 2).
5) Increase counter by one and go back to step 1).

G. Practical Aspects

In this section, we comment on the salient features of the nonsmooth first-order descent algorithm and address some of the practical aspects.
In our testing, we have observed that often the leading eigenvalues have multiplicity 1 at all frequencies . In this situation, step 2) of the descent algorithm variant II could be simplified. Suppose at the current iterate we have selected a finite set of frequencies containing . Suppose all have multiplicity 1. Let , where is the normalized eigenvector associated with . Then, the semidefinite program (16), respectively (20), simplifies to a convex quadratic program

and the associated direction of descent is

where is the optimal solution of the quadratic program. Observe that for , coincides with the steepest descent direction for a finite max function.
Let us now specify in which way we select the frequency set at each step. The finite set of frequencies where the H∞ norm is attained is computed via the Hamiltonian technique [10]. We then form an enriched set of frequencies by adding to the peak frequencies a collection of logarithmically spaced frequencies such that

where is a user-specified tolerance. We usually limit the set to the first 50 frequencies with largest singular values, as this appears to work well on a broad range of numerical tests. Typical values for range from 0.05 to 0.5. The algorithm requires that this set be iteratively refined when descent steps cannot be
computed, but in practice, our choice is usually satisfactory, and numerical problems due to exceedingly fine can be avoided.

VII. NUMERICAL EXPERIMENTS

In this section, we test our nonsmooth algorithms on a variety of H∞ synthesis problems from the collection by F. Leibfritz [40]. Computations were performed on a (low-level) SUN-Blade Sparc with 256-MB RAM and a 650-MHz Sparc v9 processor. LMI-related computations for search directions used the LMI Control Toolbox [41] or our homemade SDP code [6], while QP computations are based on Schittkowski's code [42].
Our algorithm is a first-order method. Not surprisingly, it may be slow in the neighborhood of a local solution. We have implemented various stopping criteria to ensure that an adequate approximation of a solution has been found and to avoid unwarranted computational efforts, as is often the case with a first-order algorithm. The first of these termination criteria is an absolute stopping test, which provides a criticality assessment

(23)

This is reasonable, as indicates a critical point. It is also mandatory to use relative stopping criteria to reduce the dependence on the problem scaling. The test

(24)

compares the progress achieved relative to the current performance, while

(25)

compares the step-length to the controller gains. The tolerances

have been used in our numerical testing. For stopping, we required that either the first two tests or the third one are satisfied. For the enriched set, the number of frequencies has been limited to 50. They are selected according to our discussion in Section VI-G. It is sometimes possible to employ fewer frequencies, but generally better steps are performed when richer sets are used. Our choice appears reasonable and has been validated on numerous experiments. It does not restrict efficiency since QP codes are very efficient up to 500 variables.

A. Stabilization

Pure stabilization may be regarded as synthesis under the special form (6). The optimization program is (5), but we stop the algorithm as soon as a stabilizing controller is obtained. Iterating until a local optimum of (5) is reached does not seem to improve any of the usual performance specifications of the stabilizing feedback controller. Such questions have to be addressed in a second procedure, where the stabilizing controller will serve as a starting point. The spectral abscissa (7) is used to check stability of , but is not used as a cost function. Previous experiments where (7) was used to find stabilizing controllers are reported in [29].

TABLE I: STATIC OUTPUT-FEEDBACK STABILIZATION

Table I displays results obtained for static stabilization problems borrowed from the literature. The triple gives the number of states, inputs, and outputs. Column "iter" corresponds to the number of iterations required to meet the combined stopping tests. Column displays the final closed-loop spectral abscissa (7). Negative indicates a stable closed-loop system. Column "cpu" gives the CPU time in seconds. Note that the reported CPU times are only indicative, as the -version of our code includes a number of extra tests and even graphics that have been exploited for fine tuning of the various algorithm parameters.
The initial shift for the norm minimization was chosen according to the following rule:

% (26)

with a zero initial static controller .
From the table, we observe that in all cases but one, the original choice of the shift was successful. As indicated, there is no need to run the algorithm until convergence. A stabilizing solution is therefore obtained very early. A single SDP or convex QP with line search suffices in most cases.
In the second example, we had to reduce the shift three times before stability was reached. This was done according to rule (26), applied to the closed-loop dynamics. In this example, the criterion is flat about a (global) optimum, which allows longer steps and renders the step-length-based termination criterion more stringent. Note that in this case the solution is globally optimal because a zero norm is reached.
For test reasons, the pure stabilization problems have been solved with the steepest descent method in Section V. As expected, this technique is less stable than algorithmic variants I and II. Specifically, the choice of is rather critical and finding a general selection rule appears difficult. Together with our analysis in Section VI, this encouraged us to use algorithmic variant II in the synthesis problems that follow.

B. H∞ Synthesis Problems

This section is devoted to H∞ synthesis problems. The synthesis procedure is based on the scheme (3) and must be initialized with a stabilizing controller. This initial phase I is described in the previous section, but alternative techniques to find an initial stabilizing controller may prove useful [29]. All examples are extracted from the collection [40]. Column "problem" now indicates the acronym attached to each example.
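The phase-I procedure of Section VII-A stops as soon as the closed-loop spectral abscissa (7) becomes negative. The following Python sketch shows that stability test for static output feedback; the plant data and gain are ours and purely illustrative, not taken from the paper's benchmarks.

```python
import numpy as np

def spectral_abscissa(M):
    """alpha(M): largest real part of the eigenvalues of M. The closed
    loop x' = M x is asymptotically stable iff alpha(M) < 0, which is
    the stability test (7) used to terminate the stabilization phase."""
    return float(np.max(np.linalg.eigvals(M).real))

def is_stabilizing(K, A, B, C):
    """Static output feedback u = K y: test the closed-loop matrix A + B K C."""
    return spectral_abscissa(A + B @ K @ C) < 0.0

# Open-loop unstable toy plant; the static gain K = -2 stabilizes it
A = np.array([[0.5, 1.0], [0.0, -1.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
K = np.array([[-2.0]])
assert spectral_abscissa(A) > 0.0        # open loop unstable
assert is_stabilizing(K, A, B, C)        # closed loop stable: stop phase I
```

As in the paper, the spectral abscissa serves only as a stopping test here, not as the quantity being minimized.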
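Section VII notes that "a single SDP or convex QP with line search suffices in most cases"; the QP in question is the multiplicity-1 simplification of Section VI-G, which for a finite max function reduces to the classical steepest-descent program min over the simplex of the squared norm of a convex combination of active gradients. A toy Python sketch using projected gradient (the solver and data are ours, not the paper's implementation):

```python
import numpy as np

def proj_simplex(v):
    """Euclidean projection of v onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

def steepest_descent_direction(G, iters=500):
    """Columns of G are gradients g_i of the active branches of a finite
    max function. Solve min_{tau in simplex} ||G tau||^2 by projected
    gradient and return the descent direction d = -G tau*."""
    m = G.shape[1]
    tau = np.full(m, 1.0 / m)
    L = np.linalg.norm(G.T @ G, 2) + 1e-12     # Lipschitz constant of the gradient
    for _ in range(iters):
        tau = proj_simplex(tau - (G.T @ (G @ tau)) / L)
    return -G @ tau

# Two active gradients pointing apart: the QP direction bisects them
G = np.array([[1.0, 1.0],
              [1.0, -1.0]])
d = steepest_descent_direction(G)
print(np.round(d, 3))      # close to [-1, 0]
```

For a single active frequency this reduces to the plain negative gradient, matching the remark that the QP direction coincides with steepest descent for a finite max function.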
Fig. 1. Max singular values (transport airplane "AC8") versus frequency—first five iterations—"*" marks selected frequencies.

Notice that for a static , our program could formally be solved as

minimize
subject to (27)

where is a constant, e.g., such that the initial stabilizing controller satisfies

and where refers to the closed-loop system matrix. Note that the extra constraint in (27) maintains asymptotic stability of the closed-loop system during the optimization of .
It often happens that stability of the performance channel alone already implies stability of the closed-loop system, so that the (internal) stability constraint in (27) is redundant. This observation is significant, because this constraint can be dropped, and the problem becomes unconstrained. In those cases where this does not hold true, we may, if we want to avoid the big- constraint in (27), introduce the composite problem
Fig. 2. Criticality measure (x) versus iterations for "BDT2."

TABLE II: H∞ SYNTHESIS WITH NONSMOOTH ALGORITHMIC VARIANT II

where the lower-right term enforces internal stability, with a small enough parameter. The new problem can now be handled by the techniques discussed so far without change. Similar modifications apply to design problems with additional structural constraints on the controller.
We compare the results of our nonsmooth algorithm variant II in columns "nonsmooth " to older results obtained with the specialized augmented Lagrangian (AL) algorithm described in [17], displayed in columns " AL," and to results obtained with the FW algorithm described in [14], column "FW." In column " full," we display the gain obtained with a full-order feedback controller, synthesized via the usual Riccati-based DGKF technique [2]. Note that it gives the best achievable performance and thus provides a lower bound for other techniques.
The results which we obtain with our nonsmooth technique are usually close to those previously obtained with the augmented Lagrangian method [17], except for problems with large state dimension such as "AC10" (55 states), "BDT2" (82 states), "HF1" (130 states), and "CM4" (240 states), where the augmented Lagrangian method fails, while the present nonsmooth method is still functional. For these large systems, "AC10," "BDT2," "HF1," and "CM4," we have observed that even Riccati or LMI solvers encounter serious difficulties or even break down. Notice that our present nonsmooth technique (NS) and the AL method are rigorous in the sense that they converge to local minima (critical points). It is therefore not surprising that NS and AL often achieve the same performance at the same , for which optimality was established in [18].
Note that in its original form, the FW method cannot solve problems where performance is optimized under constraints on the order of the controller. We have therefore encapsulated FW into a dichotomy search in order to assure the best possible performance. According to our numerical experiments, AL
TABLE III: H∞ CONTROLLERS FOR LARGE PROBLEMS WITH ALGORITHM II

and FW, which solve SDPs at every iteration, are no longer functional even for medium size systems such as the Boeing problem "AC10." Higher order problems like "BDT2," "HF1," and "CM4" are completely intractable with these techniques, while the nonsmooth method continues to produce valid solutions. We observed that FW, when functional, is outperformed both by AL and NS. We attribute this to the fact that, as opposed to AL and NS, FW is not supported by a sound convergence theory and, therefore, often stops at iterates which are not even locally optimal. Examples where this is the case are easily identified. It suffices to start AL or NS at iterates where FW stops, which almost always leads to further improvement. As an example, when we initialized NS with the FW solution in example "AC8," the performance was improved from 2.612 to 2.005, a phenomenon that we have also observed in examples "HE1" and "REA2." This is a strong argument in favor of those optimization techniques which generate steps based on local convergence theory. It means that NS and AL can be used to certify criticality of controllers obtained with alternative methods.
As an illustration of the nonsmooth technique, Fig. 1 shows the evolution of the maximum singular value of for example "AC8" during the first 5 iterations. The stars indicate the frequencies that were regrouped in the set to construct a bundle of Clarke subgradients at iterate . Also, Fig. 2 depicts the evolution of the absolute value of the criticality measure of algorithm variant II (19) and (20) versus iterations. As theoretically expected, gradually tends to zero until a local minimum is reached. Finally, controllers for large problems "AC10," "BDT2," "HF1," and "CM4" are given in Table III.

VIII. CONCLUSION

We have proposed several new algorithms to minimize the H∞ norm subject to structural constraints on the controller dynamics. The proposed method uses nonsmooth techniques suited for H∞ synthesis and for semi-infinite eigenvalue or singular value optimization programs. Variant I and variant II of our algorithm are supported by global convergence theory, a crucial parameter for the reliability of an algorithm in practice. Variant II has been shown to perform satisfactorily on a number of difficult examples. In particular, four examples with large state dimension ( , and ) have been solved.
Note that the proposed tools and techniques easily extend to multidisk problems and synthesis problems on prescribed frequency intervals [49]. More importantly, they pave the way for investigating an even larger scope of synthesis problems, characterized through frequency domain inequalities of the form , where is Hermitian-valued and stands for controller parameters and possibly multiplier variables, as is the case when IQC formulations are used [50]. This is a strong incentive for further development and research. Also, a second-order version of our technique with enhanced asymptotic convergence is currently under investigation [51].

ACKNOWLEDGMENT

The authors would like to thank F. Leibfritz, Universität Trier, for providing the collection and for fruitful discussions.

REFERENCES

[1] P. Gahinet and P. Apkarian, "A linear matrix inequality approach to H∞ control," Int. J. Robust Nonlinear Control, vol. 4, pp. 421–448, 1994.
[2] J. Doyle, K. Glover, P. Khargonekar, and B. A. Francis, "State-space solutions to standard H2 and H∞ control problems," IEEE Trans. Autom. Control, vol. 34, no. 8, pp. 831–847, Aug. 1989.
[3] A. Nemirovskii, "Several NP-hard problems arising in robust stability analysis," Math. Control, Signals, Syst., vol. 6, no. 1, pp. 99–105, 1994.
[4] V. Blondel and M. Gevers, "Simultaneous stabilizability question of three linear systems is rationally undecidable," Math. Control, Signals, Syst., vol. 6, no. 1, pp. 135–145, 1994.
[5] D. Henrion, M. Kocvara, and M. Stingl, "Solving simultaneous stabilization BMI problems with PENNON," presented at the IFIP Conf. System Modeling and Optimization, vol. 7, France, Jul. 2003.
[6] P. Apkarian, D. Noll, J. B. Thevenet, and H. D. Tuan, "A spectral quadratic-SDP method with applications to fixed-order H2 and H∞ synthesis," Eur. J. Control, vol. 10, no. 6, pp. 527–538, 2003.
[7] J. B. Thevenet, D. Noll, and P. Apkarian, "Nonlinear spectral SDP method for BMI-constrained problems: Applications to control design," in Informatics in Control, Automation and Robotics. Norwell, MA: Kluwer, 2005.
[8] F. Leibfritz and E. M. E. Mostafa, "An interior point constrained trust region method for a special class of nonlinear semi-definite programming problems," SIAM J. Control Optim., vol. 12, pp. 1048–1074, 2002.
[9] ——, "Trust region methods for solving the optimal output feedback design problem," Int. J. Control, vol. 76, pp. 501–519, 2000.
[10] S. Boyd, V. Balakrishnan, and P. Kabamba, "A bisection method for computing the H∞ norm of a transfer matrix and related problems," Math. Control, Signals, Syst., vol. 2, no. 3, pp. 207–219, 1989.
[11] S. Boyd and V. Balakrishnan, "A regularity result for the singular values of a transfer matrix and a quadratically convergent algorithm for computing its L∞-norm," Syst. Control Lett., vol. 15, pp. 1–7, 1990.
[12] P. Gahinet and P. Apkarian, "Numerical computation of the L∞ norm revisited," in Proc. IEEE Conf. Decision and Control, Tucson, AZ, 1992, pp. 2257–2258.
[13] M. C. de Oliveira and J. C. Geromel, “Numerical comparison of output L


[40] F. Leibfritz, “COMP IB, constraint matrix-optimization problem li-
feedback design methods,” in Proc. Amer. Control Conf., Albuquerque, brary—A collection of test examples for nonlinear semidefinite pro-
NM, Jun. 1997, pp. 72–76. grams, control system design and related problems,” Universität Trier,
[14] L. ElGhaoui, F. Oustry, and M. AitRami, “An algorithm for static output- Tech. Rep., 2003.
feedback and related problems,” IEEE Trans. Autom. Control, vol. 42, [41] P. Gahinet, A. Nemirovski, A. J. Laub, and M. Chilali, LMI Control
Modern Systems Theory Approach. Upper Saddle River, NJ: Prentice- Pierre Apkarian (A’94–M’00) received the Ph.D.
Hall, 1973. degree in control engineering from the Ecole Na-
[27] C. A. Desoer and M. Vidyasagar, Feedback Systems: Input-Output Prop- tionale Supérieure de l’Aéronautique et de l’Espace
erties. New York: Academic, 1975. (ENSAE), France, in 1988, and was qualified as
[28] S. Boyd and C. Barratt, Linear Controller Design: Limits of Perfor- a Professor from the Université Paul Sabatier,
mance. Upper Saddle River, NJ: Prentice-Hall, 1991. Toulouse, France, in both control engineering and
[29] P. Apkarian and D. Noll, “Controller design via nonsmooth multidirec- applied mathematics in 1999 and 2001, respectively.
tional search,” SIAM J. Control Optim., 2006, to be published. Since 1988, he has been a Research Scientist
[30] J. Burke, A. Lewis, and M. Overton, “Two numerical methods for opti- at ONERA-CERT and an Associate Professor in
mizing matrix stability,” Linear Alg. Appl., pp. 147–184, 2002. the Mathematics Department of Université Paul
[31] , “Robust stability and a criss-cross algorithm for pseudospectra,” Sabatier. His research interests include robust and
IMA J. Numer. Anal., vol. 23, pp. 1–17, 2003. gain-scheduling control theory, LMI techniques and mathematical program-
[32] , “A robust gradient sampling algorithm for nonsmooth, nonconvex ming, and applications in aeronautics.
optimization,” SIAM J. Optim., vol. 15, pp. 751–779, 2005. Dr. Apkarian has served as an Associate Editor for the IEEE TRANSACTIONS
[33] K. Glover, “All optimal Hankel-norm approximations of linear multi- ON AUTOMATIC CONTROL.
variable systems and their L -error bounds,” Int. J. Control, vol. 39,
pp. 1115–1193, 1984.
[34] R. A. Horn and C. A. Johnson, Matrix Analysis. Cambridge, U.K.:
Cambridge Univ. Press, 1985.
[35] E. Polak, “On the mathematical foundations of nondifferentiable opti-
Dominikus Noll received the Ph.D. degree and his habilitation in mathematics from the Universität Stuttgart, Germany, in 1983 and 1989, respectively. Since then, he has held temporary positions at Dalhousie University and the University of Waterloo, Canada, and since 1995, has been a Professor of applied mathematics at Université Paul Sabatier, Toulouse, France. His research interests are in continuous optimization, medical imaging, and automatic control.

Dr. Noll is an Associate Editor of the journal Convex Analysis.