Marco Laumanns. Paper 246
Abstract. This paper presents a scheme for generating the Pareto front
of multiobjective optimization problems by solving a sequence of con-
strained single-objective problems. Since the necessity of determining
the constraint value a priori can be a serious drawback of the original
epsilon-constraint method, our scheme generates appropriate constraint
values adaptively during the run. A simple example problem is presented
where the running time (measured by the number of constrained single-
objective sub-problems to be solved) of the original epsilon-constraint
method is exponential in the problem size (number of decision variables),
although the size of the Pareto set grows only linearly. For our method
we show that — independent of the problem or the problem size — the
time complexity is O(k^(m−1)), where k is the number of Pareto-optimal
solutions to be found and m the number of objectives. Using the algorithm
together with a standard ILP solver for the constrained single-objective
problems, the exact Pareto front is generated for the three-objective 0/1
knapsack problem with up to 100 decision variables. Links to problem
instances and a reference implementation of the algorithm are provided.
1 Introduction
The purpose of this paper is to briefly recap the ideas and results from [4] and
to present an alternative algorithm. Instead of using box constraints (upper and
lower bounds), the new algorithm uses only lower bounds, i.e., fewer constraints
in the single-objective subproblems. Empirical tests on the knapsack problem
have shown that this improves the solution time of the underlying ILP solver
considerably. In addition, the problem of dealing with weakly Pareto-optimal
solutions has been solved more elegantly by two subsequent calls of the single-
objective optimization algorithm. The new algorithm is presented visually in a
flow chart, which should make reimplementation even easier.
An Adaptive Scheme to Generate the Pareto Front 3
2 Problem Scenario
The problem we are dealing with is to find the Pareto front of a general multi-
objective problem with m objectives. We assume that all objective functions are
to be maximized. We assume that the Pareto front is finite.
Definition 1 (Pareto optimality). Let f : X → F where X is called decision
space and F ⊆ IRm objective space. The elements of X are called decision
vectors and the elements of F objective vectors. A decision vector x ∗ ∈ X is
Pareto optimal if there is no other x ∈ X that dominates x∗, where x dominates
x∗, denoted as x ≻ x∗, if fi (x) ≥ fi (x∗) for all i = 1, . . . , m and fi (x) > fi (x∗)
for at least one index i. The set of all Pareto-optimal decision vectors X ∗ is
called Pareto set. F ∗ = f (X ∗ ) is the set of all Pareto-optimal objective vectors
and denoted as Pareto front.
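As an illustration, the dominance relation of Definition 1 can be checked directly on finite sets of objective vectors. The following Python sketch is ours, not part of the paper; the function names are chosen for readability:

```python
def dominates(fa, fb):
    """True if objective vector fa dominates fb (maximization):
    fa is at least fb in every component and strictly greater in one."""
    return (all(a >= b for a, b in zip(fa, fb))
            and any(a > b for a, b in zip(fa, fb)))

def pareto_front(vectors):
    """All non-dominated objective vectors of a finite set."""
    return [v for v in vectors if not any(dominates(w, v) for w in vectors)]
```

For example, `pareto_front([(3, 1), (2, 2), (1, 3), (1, 1)])` keeps the first three vectors and discards (1, 1), which (2, 2) dominates.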
In cases where several Pareto-optimal decision vectors map to the same
Pareto-optimal objective vector, we are satisfied with having one representa-
tive decision vector for each Pareto-optimal objective vector. This corresponds
to finding one optimal solution in the single-objective case [3].
Several methods devoted to this task exist, usually referred to as “a posteriori”
or “non-dominated solution generation” methods [1]. They typically define
a set of differently parameterized single-objective surrogate problems and apply
multiple runs of a single-objective optimizer. The choice of the parameter values
determines which specific elements of the Pareto set are found. It is in general a
difficult and sometimes impossible task to choose a sequence of parameter values
such that the whole Pareto front is discovered. Consider for example the popular
methods based on the aggregation of the different objectives via a weighted sum.
Different weight vectors would ideally lead to finding different elements of the
Pareto front, but for an unknown problem it is not clear what weight combi-
nation to choose. Even if all possible weight combinations were used, finding
Pareto-optimal solutions in concave regions of the Pareto front cannot be
guaranteed.
Another traditional method from the field of multiobjective optimization to
generate the whole Pareto front is the epsilon-constraint method [5]. The epsilon-
constraint method works by choosing one objective function as the only objective
and the remaining objective functions as constraints. By a systematic variation
of the constraint bounds, different elements of the Pareto front can be obtained.
The method relies on the availability of a procedure to solve constrained single-
objective problems. We abstract from the details of this procedure and simply
assume that it terminates after a fixed time T and returns either the optimum
of the constrained single-objective problem (cSOP)
maximize    Φ(f(x)) ≡ Φ(f1(x), . . . , fm(x))
subject to  fi(x) > εi   ∀ i ∈ {1, . . . , m},        (1)
            x ∈ X,
where f : X −→ IRm and Φ : IRm −→ IR, or reports that the feasible region is
empty.
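For a finite decision space, the cSOP (1) can be solved by plain enumeration. The following hypothetical stand-in (our sketch, with Φ taken as projection onto one objective) illustrates the two-outcome interface the scheme assumes from the subproblem solver:

```python
def solve_csop(X, f, eps, obj=0):
    """Maximize f(x)[obj] subject to f_i(x) > eps_i for all i and x in X.
    Returns an optimal x, or None if the feasible region is empty,
    mirroring the two possible outcomes assumed for the solver."""
    feasible = [x for x in X if all(v > e for v, e in zip(f(x), eps))]
    if not feasible:
        return None
    return max(feasible, key=lambda x: f(x)[obj])
```

With f the identity on the objective vectors X = [(3, 1), (2, 2), (1, 3)], the bound vector (0, 1) excludes (3, 1) and the solver returns (2, 2).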
4 M. Laumanns, L. Thiele, E. Zitzler
[Fig. 1. Flow chart of the algorithm, operating on the sets P, Q, R with functions initializeBounds, getBounds(i, P), and updateBounds(f(x), e): for each constraint vector ε obtained from the grid, a first constrained run seeks an optimum y; if y exists and no z ∈ P with f(z) ≻ f(y) is found, a second run determines x = argmax{ Σ_{i=2}^{m} f_i(x) : x ∈ X, f(x) > ε, f_1(x) = f_1(y) } and x is added to P; otherwise the searched region is recorded in Q.]
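For the bi-objective case the scheme collapses to a simple loop in which each newly found f2-value becomes the next lower bound. The following Python sketch is our simplification of that special case, not the flow chart's full m-objective logic; it also shows the two consecutive runs used to avoid weakly Pareto-optimal output:

```python
def adaptive_eps_constraint_2d(X, f):
    """Enumerate the Pareto front of a finite bi-objective problem
    (maximization) by adaptively raising the lower bound on f2."""
    front = []
    eps = float("-inf")                     # initial lower bound on f2
    while True:
        cand = [x for x in X if f(x)[1] > eps]
        if not cand:                        # constrained region empty: done
            break
        # first run: maximize f1 subject to f2 > eps
        best_f1 = max(f(x)[0] for x in cand)
        # second run: among ties in f1, maximize f2 (avoids weak optima)
        x = max((y for y in cand if f(y)[0] == best_f1),
                key=lambda y: f(y)[1])
        front.append(f(x))
        eps = f(x)[1]                       # adapt the constraint bound
    return front
```

Each iteration either finds a new Pareto-optimal objective vector or terminates, so the loop runs exactly k + 1 times for a front of size k.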
The algorithm adaptively partitions the objective space into rectangular axis-parallel co-domains. The coordinates for this grid
are determined by the function values of the already identified Pareto-optimal
solutions. These coordinates are stored in the matrix e = (e1 , . . . , em ), where
the ei are vectors containing the grid coordinates for objective i, defined by the
fi -values of all Pareto-optimal solutions found so far. These vectors ei initially
contain only suitable lower bounds, which can be guaranteed to be smaller than
all values of the respective objective component. In this case we chose a lower
bound of zero assuming a non-negative objective space.
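The bookkeeping for these coordinate vectors might look as follows in Python (our sketch, under the paper's non-negativity assumption; `bisect` keeps each e_j sorted as new values arrive):

```python
import bisect

def initialize_bounds(m, lower=0):
    """One sorted coordinate vector per objective, seeded with a lower
    bound guaranteed smaller than any attainable objective value."""
    return [[lower] for _ in range(m)]

def update_bounds(e, fx):
    """Insert the objective values of a newly found Pareto-optimal
    point into the sorted grid-coordinate vectors, skipping duplicates."""
    for j, v in enumerate(fx):
        if v not in e[j]:
            bisect.insort(e[j], v)
```

After `update_bounds(e, (5, 2, 7))` and `update_bounds(e, (3, 4, 7))` on a freshly initialized three-objective grid, e holds [[0, 3, 5], [0, 2, 4], [0, 7]].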
In each iteration of the outer loop, one new Pareto-optimal point is sought.
This is achieved by performing two consecutive constrained single-objective
optimization runs, using each grid point in turn as a lower-bound vector, in
decreasing order of the index i. After a new Pareto-optimal point x is found, this point is added to the
set of already found solutions, P , and the grid is updated by recording the ob-
jective values of x. For each objective j ∈ {1, . . . , m}, the value fj (x) is inserted
into the sorted vector ej. If no feasible solution can be found, or if the solution is
dominated by a previously found solution, the searched objective co-domain is
marked by storing the lower bound vector in the set Q. The purpose of function
getBounds is to translate the iteration counter i into the m − 1 corresponding
indices to retrieve the right constraint values from each of the ei vectors.

Function initializeBounds
1: for j := 1 to m do
2:    ej := (0)
3: end for

The
total number of calls to the single-objective optimization algorithm can now be
bounded by (k + 1)^(m−1) + k, where k is the cardinality of the Pareto front. Using
the upper bound T on the running time of the single-objective algorithm yields
the following result.
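One plausible reading of the getBounds index translation, which the paper does not spell out in code, is a mixed-radix decoding of the counter i over the current lengths of the constraint-coordinate vectors. The following Python sketch is therefore a hypothetical reconstruction, not the authors' exact routine:

```python
def get_bounds(i, e):
    """Decode counter i into one grid coordinate per constrained
    objective (objectives 2..m), treating i as a mixed-radix number
    whose digit ranges are the lengths of the coordinate vectors."""
    bounds = []
    for ej in e[1:]:             # e[0] belongs to the unconstrained objective
        i, r = divmod(i, len(ej))
        bounds.append(ej[r])
    return bounds
```

The number of distinct decodings equals the product of the vector lengths, which is exactly the grid-point count entering the bound above.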
Theorem 1. The running time of the algorithm given in Fig. 1 to discover a
Pareto front of an m-objective problem with k elements is at most T[(k + 1)^(m−1) +
k], where T is the running time of the single-objective optimization algorithm.
Sketch of proof: To show that the image of P at the end of the run equals
the Pareto front F ∗ , it has to be shown that (i) only Pareto-optimal solutions
enter P and (ii) no Pareto-optimal objective vector is missed. For (i) we observe
that x can only be dominated by solutions from regions which have already been
fully searched, and the dominance check would then prevent x from entering P.
For (ii) it can be shown that any potentially missed Pareto-optimal objective
vector must necessarily be the image of the optimum y of some grid cell, and
was therefore already found. The bound on the number of calls to the single-
objective optimization algorithm is due to the observation that in each iteration
at least one region is marked as searched, and the maximum number of different
regions to be searched is bounded by the maximum number of grid points, which
is at most (k + 1)^(m−1).
5 Simulation Results
This section presents some simulation results of the new algorithm on the mul-
tiobjective knapsack problem. The multiobjective knapsack problem is one of
the most extensively used benchmark problem for multiobjective metaheuristics
(see, e.g., [10], [11], or [7]). Given is a set of n items, each of which has m profit
and k weight values associated with it. The goal is to select a subset of items such
that the sums over each of their k-th weight values do not exceed given bounds
and the sums over each of their m-th profit values are maximized. A represen-
tation as a pseudo-Boolean optimization problem is typically used, where the n
binary decision variables denote whether an item is selected or not. For empirical
studies, the parameters of the problem, the weight and profit values, are usually
drawn at random from a given probability distribution. Here, the weights and
profits are randomly chosen integers between 10 and 100, and the capacities are
set to half of the sum of the weights.
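The instance generation described above can be reproduced along the following lines (our Python sketch; the integer halving of the capacity is our assumption about rounding):

```python
import random

def knapsack_instance(n, m, seed=0):
    """Random multiobjective knapsack instance as described in the text:
    integer profits and weights drawn uniformly from [10, 100], each
    capacity set to half the corresponding weight sum."""
    rng = random.Random(seed)
    profits = [[rng.randint(10, 100) for _ in range(n)] for _ in range(m)]
    weights = [[rng.randint(10, 100) for _ in range(n)] for _ in range(m)]
    capacities = [sum(row) // 2 for row in weights]
    return profits, weights, capacities
```

Fixing the seed makes instances reproducible, which is useful when comparing solver runs across the problem sizes of Table 1.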
Table 1 summarizes the results obtained for different instances of the knap-
sack problem with 3 objectives.3 To the best of our knowledge, no exact Pareto
fronts have been computed so far for the three-objective knapsack problem,
probably due to the lack of an appropriate generating method besides complete
enumeration. Note that the total number of single-objective runs given in Ta-
ble 1 is considerably lower than the upper bound given in Theorem 1. Fig. 3
gives a visual impression of the obtained three-dimensional Pareto fronts.
Table 1. Results on different instances of the knapsack problem with three objectives
n                      10     20       30      40      50       100
single-objective runs  76     1622     9870    26846   128695   644689
total CPU time         4 sec  102 sec  20 min  62 min  445 min  255 h
|P|                    9      61       195     389     1048     6501
3 The source code of the algorithm, implemented in C and invoking CPLEX
as an example for an arbitrary single-objective optimization technique
is available from http://www.tik.ee.ethz.ch/~laumanns as well as the
data of the knapsack problem instances and the Pareto fronts from
http://www.tik.ee.ethz.ch/~zitzler/testdata.html.
[Fig. 3. Knapsack problem: plots of the exact Pareto fronts for m = 3 objectives, n = 50 (top) and n = 100 (bottom).]
References
1. Hwang, C.L., Masud, A.S.M.: Multiple Objectives Decision Making—Methods and
Applications. Springer, Berlin (1979)
2. Miettinen, K.: Nonlinear Multiobjective Optimization. Kluwer, Boston (1999)
3. Ehrgott, M.: Multicriteria Optimization. Springer, Berlin (2000)
4. Laumanns, M., Thiele, L., Zitzler, E.: An efficient, adaptive parameter variation
scheme for metaheuristics based on the epsilon-constraint method. European
Journal of Operational Research (2004). In press; accepted 13 August 2004;
available online 17 November 2004.
5. Haimes, Y., Lasdon, L., Wismer, D.: On a bicriterion formulation of the problems
of integrated system identification and system optimization. IEEE Transactions
on Systems, Man, and Cybernetics 1 (1971) 296–297
6. Chankong, V., Haimes, Y.: Multiobjective Decision Making Theory and Method-
ology. Elsevier (1983)
7. Ranjithan, S.R., Chetan, S.K., Dakshina, H.K.: Constraint Method-Based Evolu-
tionary Algorithm (CMEA) for Multiobjective Optimization. In Zitzler, E., et al.,
eds.: Proceedings of the First International Conference on Evolutionary Multi-
Criterion Optimization (EMO 2001). Volume 1993 of Lecture Notes in Computer
Science., Berlin, Springer-Verlag (2001) 299–313
8. Droste, S., Jansen, T., Wegener, I.: On the analysis of the (1+1) evolutionary
algorithm. Theoretical Computer Science 276 (2002) 51–81
9. Srinivasan, V., Thompson, G.L.: Algorithms for minimizing total cost, bottleneck
time and bottleneck shipment in transportation problems. Naval Research Logistics
Quarterly 23 (1976) 567–598
10. Zitzler, E., Thiele, L.: Multiobjective Evolutionary Algorithms: A Comparative
Case Study and the Strength Pareto Approach. IEEE Transactions on Evolution-
ary Computation 3 (1999) 257–271
11. Jaszkiewicz, A.: On the performance of multiple objective genetic local search
on the 0/1 knapsack problem: A comparative experiment. IEEE Transactions on
Evolutionary Computation 6 (2002) 402–412
12. Laumanns, M., Thiele, L., Deb, K., Zitzler, E.: Combining convergence and diver-
sity in evolutionary multiobjective optimization. Evolutionary Computation 10
(2002) 263–282