A Tutorial on Meta-Heuristics for Optimization
Chin-Shiuh Shieh
Spring, 2013
Abstract
Nature has inspired computing and engineering researchers in many different ways.
Natural processes have been emulated as computational models for optimization through a variety of techniques, including genetic algorithms, ant systems, and particle swarm optimization.
Introduction
Optimization problems arise from almost every field, ranging from academic research to industrial applications.
Meta-heuristics, such as genetic algorithms,
particle swarm optimization and ant colony
systems, have received increasing attention in
recent years for their interesting characteristics
and their success in solving problems in a
number of realms.
Detailed implementations in C are given.
Genetic Algorithms
Darwin's theory of natural evolution
Creatures compete with each other for limited
resources.
Those individuals that survive in the competition have
the opportunity to reproduce and generate
descendants.
Exchange of genes by mating may result in superior
or inferior descendants.
The process of natural selection eventually filters out inferior individuals and retains those best adapted to their environment.
Test Function
Representation
Select an adequate coding scheme to
represent potential solutions in the search
space in the form of chromosomes.
binary string coding for numerical optimization
expression trees for genetic programming
city index permutation for the travelling
salesperson problem
Representation (cont)
We use a typical binary string coding for the test function F1.
Each genotype has 16 bits to encode an independent variable.
A decoding function maps the 65536 possible combinations of b15b14...b0 onto the range [-5, 5) linearly.
A chromosome is then formed by cascading the genotypes for each variable.
With this coding scheme, any 32-bit binary string stands for a legal point in the problem domain.
Representation (cont)
Representation (cont)
Example chromosome: 1110101110110011 | 0010110011111010
(1110101110110011)_2 = (60339)_10, so x = 60339/2^16 * 10 - 5 = 4.207000732421875
(0010110011111010)_2 = (11514)_10, so y = 11514/2^16 * 10 - 5 = -3.24310302734375
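A minimal C sketch of this decoding, assuming each 16-bit genotype is held in an unsigned integer and the target range is [-5, 5) as above (function and variable names are illustrative):

#include <stdio.h>

/* Map a 16-bit genotype (0..65535) linearly onto [-5, 5). */
double decode_gene(unsigned int gene)
{
    return (double)gene / 65536.0 * 10.0 - 5.0;
}

int main(void)
{
    unsigned int gx = 0xEBB3;  /* 1110101110110011 = 60339 */
    unsigned int gy = 0x2CFA;  /* 0010110011111010 = 11514 */
    printf("x = %.15f\n", decode_gene(gx));  /* 4.207000732421875 */
    printf("y = %.15f\n", decode_gene(gy));  /* -3.24310302734375 */
    return 0;
}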
Population Size
The choice of population size, N, is a
tradeoff between solution quality and
computation cost.
A larger population size will maintain higher genetic diversity and therefore a higher probability of locating the global optimum, but at a higher computational cost.
Step 1 Initialization
Each bit of all N chromosomes in the population
is randomly set to 0 or 1.
This operation in effect spreads chromosomes randomly over the problem domain.
Whenever possible, it is suggested to
incorporate any a priori knowledge of the search
space into the initialization process to endow the
genetic algorithm with a better starting point.
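A minimal C sketch of this step, assuming a population of N chromosomes of 32 bits each, stored as unsigned integers (N and all names are illustrative):

#include <stdlib.h>
#include <time.h>

#define N          10   /* population size */
#define CHROM_BITS 32   /* 2 variables x 16 bits */

unsigned int population[N];

/* Set every bit of every chromosome randomly to 0 or 1. */
void initialize(void)
{
    srand((unsigned)time(NULL));
    for (int i = 0; i < N; i++) {
        population[i] = 0;
        for (int b = 0; b < CHROM_BITS; b++)
            if (rand() % 2)
                population[i] |= 1u << b;
    }
}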
Step 2 Evaluation
Step 3 Selection
Chromosomes are selected for the next generation with probability proportional to their fitness (roulette-wheel selection). For example, with five chromosomes:
f1 = 1, f2 = 2, f3 = 3, f4 = 4, f5 = 5
SF = 1
p1 = 1/(1+2+3+4+5) = 1/15 = 0.067
p2 = 2/15 = 0.133
p3 = 3/15 = 0.200
p4 = 4/15 = 0.267
p5 = 5/15 = 0.333
p1 + p2 + p3 + p4 + p5 = 1
To perform one spin of the roulette wheel, draw a uniform random number, e.g. tmpf = 0.642.
Select the first chromosome k for which p1 + p2 + ... + pk > 0.642; here chromosome 4 is selected, since 0.067 + 0.133 + 0.200 = 0.400 but 0.400 + 0.267 = 0.667 > 0.642.
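A minimal C sketch of one roulette-wheel spin, assuming the fitness values have already been normalized into probabilities p[i] that sum to 1 (names are illustrative):

#include <stdlib.h>

/* Return the index of the chromosome chosen by one roulette-wheel spin.
 * p[] holds the selection probabilities of the n chromosomes (sum = 1). */
int roulette_select(const double p[], int n)
{
    double tmpf = (double)rand() / ((double)RAND_MAX + 1.0); /* uniform in [0,1) */
    double cumulative = 0.0;
    for (int i = 0; i < n; i++) {
        cumulative += p[i];
        if (cumulative > tmpf)
            return i;
    }
    return n - 1;   /* guard against rounding error */
}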
Step 4 Crossover
Pairs of chromosomes in the newly generated population are subjected to a crossover (or swap) operation with probability PC, called the Crossover Rate.
The crossover operator generates new chromosomes by exchanging the genetic material of a pair of chromosomes across randomly selected sites, as depicted in Figure 3.
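A minimal C sketch of a one-point crossover on 32-bit chromosomes, assuming chromosomes are stored as unsigned integers and PC is the crossover rate (names and the value of PC are illustrative):

#include <stdlib.h>

#define CHROM_BITS 32
#define PC 0.8   /* crossover rate (illustrative value) */

/* With probability PC, swap all bits below a randomly chosen crossover site. */
void crossover(unsigned int *a, unsigned int *b)
{
    if ((double)rand() / RAND_MAX >= PC)
        return;                                   /* no crossover this time */
    int site = 1 + rand() % (CHROM_BITS - 1);     /* site in 1..31 */
    unsigned int mask = (1u << site) - 1;         /* bits below the site */
    unsigned int tail_a = *a & mask;
    unsigned int tail_b = *b & mask;
    *a = (*a & ~mask) | tail_b;
    *b = (*b & ~mask) | tail_a;
}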
Step 5 Mutation
After the crossover operation, each bit of every chromosome is subjected to mutation with probability PM, called the Mutation Rate.
Mutation flips bit values and introduces new genetic material into the gene pool.
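A minimal C sketch of this step, assuming 32-bit chromosomes stored as unsigned integers (names and the value of PM are illustrative):

#include <stdlib.h>

#define CHROM_BITS 32
#define PM 0.01   /* mutation rate (illustrative value) */

/* Flip each bit of the chromosome independently with probability PM. */
void mutate(unsigned int *chrom)
{
    for (int b = 0; b < CHROM_BITS; b++)
        if ((double)rand() / RAND_MAX < PM)
            *chrom ^= 1u << b;
}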
Experiment Results
The global optimum is located at approximately F1(1.9931, 1.9896) = 4.2947.
With a population of size 10, after 20 generations, the genetic algorithm was capable of locating a near-optimal solution at F1(1.9853, 1.9810) = 4.2942.
Due to the stochastic nature of genetic algorithms, the same program may produce different results on different machines.
Discussions
Important characteristics providing robustness:
They search from a population of points rather than a single point.
They use the objective function directly, not its derivatives.
They use probabilistic transition rules, not deterministic ones, to guide the search toward promising regions.
Discussions (cont)
In effect, genetic algorithms maintain a
population of candidate solutions and
conduct stochastic searches via
information selection and exchange.
It is well recognized that, with genetic algorithms, near-optimal solutions can be obtained at a justifiable computational cost.
Discussions (cont)
However, it is difficult for genetic algorithms to pinpoint the global optimum.
In practice, a hybrid approach is recommended, incorporating gradient-based or local greedy optimization techniques.
In such an integration, genetic algorithms act as coarse-grained optimizers and gradient-based methods as fine-grained ones.
Discussions (cont)
The power of genetic algorithms originates
from the chromosome coding and
associated genetic operators.
It is worth paying attention to these issues
so that genetic algorithms can explore the
search space more efficiently.
Discussions (cont)
The selection factor controls the discrimination
between superior and inferior chromosomes.
In some applications, more sophisticated
reshaping of the fitness landscape may be
required.
Other selection schemes (Whitley 1993), such as rank-based selection or tournament selection, are possible alternatives for controlling discrimination.
Variants
Parallel genetic algorithms
Island-model genetic algorithms
maintain genetic diversity by splitting a population into several sub-populations, each of which evolves independently and occasionally exchanges information with the others
Variants (cont)
Multiple-objective genetic algorithms
attempt to locate all near-optimal solutions by carefully controlling the number of copies of superior chromosomes, so that the population will not be dominated by the single best chromosome
Variants (cont)
Co-evolutionary systems
have two or more independently evolving populations. The objective function for each population is not static, but a dynamic function that depends on the current states of the other populations.
This architecture vividly models interacting systems, such as prey and predator, or virus and immune system.
Particle Swarm Optimization
Step 1 Initialization
The velocities and positions of all particles are randomly set within a pre-specified or legal range.
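A minimal C sketch of this step, together with the widely used inertia-weight form of the velocity and position update shown for reference; the slides' own formulas are not reproduced here, and the bounds, the constants w, c1, c2, and all names are illustrative assumptions:

#include <stdlib.h>

#define M 10      /* number of particles (illustrative) */
#define D 2       /* problem dimension */
#define XMIN -5.0
#define XMAX  5.0

double x[M][D], v[M][D];        /* positions and velocities  */
double pbest[M][D], gbest[D];   /* personal and global bests */

static double unif(double lo, double hi)
{
    return lo + (hi - lo) * rand() / ((double)RAND_MAX + 1.0);
}

/* Step 1: randomly set velocity and position within the legal range. */
void pso_initialize(void)
{
    for (int i = 0; i < M; i++)
        for (int d = 0; d < D; d++) {
            x[i][d] = unif(XMIN, XMAX);
            v[i][d] = unif(-(XMAX - XMIN), XMAX - XMIN);
        }
}

/* Inertia-weight velocity/position update (standard form, for reference). */
void pso_update(double w, double c1, double c2)
{
    for (int i = 0; i < M; i++)
        for (int d = 0; d < D; d++) {
            v[i][d] = w * v[i][d]
                    + c1 * unif(0, 1) * (pbest[i][d] - x[i][d])
                    + c2 * unif(0, 1) * (gbest[d] - x[i][d]);
            x[i][d] += v[i][d];
        }
}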
Test Function
Experiment Results
Distribution of Particles
Variants
A discrete binary version of the particle swarm optimization algorithm was proposed by Kennedy and Eberhart (1997).
Shi and Eberhart (2001) applied fuzzy theory to the particle swarm optimization algorithm.
Shi and Krohling (2002) successfully incorporated the concept of co-evolution in solving min-max problems.
Chu et al. (2003) proposed a parallel architecture with communication mechanisms for information exchange among independent particle groups, in which solution quality can be significantly improved.
Ant System
Inspired by the food-seeking behavior of real ants, the Ant System, attributable to Dorigo et al. (1996), has proved itself to be an efficient and effective tool for combinatorial optimization problems.
Test Problem
Travelling Salesman Problem
In the TSP, a travelling salesman looks for a route that covers all cities with minimal total distance.
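As a small illustration, a candidate tour can be scored in C as follows, assuming city coordinates in arrays cx and cy and a tour given as a permutation of city indices (all names and the city count are illustrative):

#include <math.h>

#define NCITY 10   /* number of cities (illustrative) */

double cx[NCITY], cy[NCITY];   /* city coordinates */

/* Euclidean distance between cities i and j. */
static double dist(int i, int j)
{
    return hypot(cx[i] - cx[j], cy[i] - cy[j]);
}

/* Total length of a closed tour given as a permutation of city indices. */
double tour_length(const int tour[NCITY])
{
    double len = 0.0;
    for (int k = 0; k < NCITY; k++)
        len += dist(tour[k], tour[(k + 1) % NCITY]);
    return len;
}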
Operation
Suppose there are n cities and m ants.
The entire algorithm starts with initial pheromone
intensity set to 0 on all edges.
In every subsequent ant system cycle, or
episode, each ant begins its trip from a randomly
selected starting city and is required to visit
every city exactly once (a Hamiltonian Circuit).
The experience gained in this phase is then
used to update the pheromone intensity on all
edges.
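The state-transition formula itself is not reproduced in this text, so the following is a sketch of the standard Ant System rule, in which an ant at city r moves to an unvisited city s with probability proportional to tau(r,s)^alpha * (1/d(r,s))^beta; alpha, beta, and all names here are illustrative assumptions:

#include <math.h>
#include <stdlib.h>

#define NCITY 10    /* number of cities (illustrative) */
#define ALPHA 1.0   /* pheromone weight (illustrative) */
#define BETA  2.0   /* heuristic (1/distance) weight (illustrative) */

/* Choose the next city for an ant currently at city r.
 * tau: pheromone intensities, d: edge lengths, visited: flags per city. */
int next_city(const double tau[NCITY][NCITY], const double d[NCITY][NCITY],
              int r, const int visited[NCITY])
{
    double weight[NCITY], total = 0.0;
    for (int s = 0; s < NCITY; s++) {
        weight[s] = 0.0;
        if (s != r && !visited[s])
            weight[s] = pow(tau[r][s], ALPHA) * pow(1.0 / d[r][s], BETA);
        total += weight[s];
    }
    /* Roulette-wheel choice proportional to weight. */
    double spin = total * rand() / ((double)RAND_MAX + 1.0);
    double cum = 0.0;
    for (int s = 0; s < NCITY; s++) {
        if (weight[s] == 0.0) continue;
        cum += weight[s];
        if (cum > spin)
            return s;
    }
    /* Fallback: if all weights are zero (e.g. pheromone still at its
     * initial value of 0), pick the first unvisited city. */
    for (int s = 0; s < NCITY; s++)
        if (s != r && !visited[s])
            return s;
    return -1;   /* no unvisited city left */
}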
Step 1 Initialization
Initial pheromone intensities on all edges
are set to 0.
Experiment Results
Figure 6 reports the shortest route found, of length 3.308, which is the true shortest route, as validated by exhaustive search.
Local updating
A local updating rule is applied whenever an edge from city r to city s is taken (see the sketch below).
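The rule itself is not reproduced in this text; a common form, taken from the ant colony system literature rather than from these slides, decays the pheromone on the traversed edge toward a small baseline value tau0:

#define RHO  0.1    /* local evaporation rate (illustrative) */
#define TAU0 0.01   /* baseline pheromone level (illustrative) */

/* Decay the pheromone on edge (r, s) toward TAU0 right after an ant uses it. */
void local_update(double *tau_rs)
{
    *tau_rs = (1.0 - RHO) * (*tau_rs) + RHO * TAU0;
}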
Discussion
In some respects, the ant system implements the idea of emergent computation: a global solution emerges as distributed agents perform local transactions, which is the working paradigm of real ants.
The success of ant systems in combinatorial optimization makes them a promising tool for dealing with a large set of problems in the NP-complete class (Papadimitriou and Steiglitz 1982).
Variants
In addition, the work of Wang and Wu (2001) has extended the applicability of ant systems into continuous search spaces.
Chu et al. (2003) have proposed a parallel
ant colony system, in which groups of ant
colonies explore the search space
independently and exchange their
experiences at certain time intervals.