
Heuristic Optimization Methods

A Tutorial on Meta-Heuristics for Optimization

Chin-Shiuh Shieh
EC, KUAS, Taiwan
Spring 2013

Abstract
Nature has inspired computing and
engineering researchers in many different
ways.
Natural processes have been emulated as computational models for optimization through a variety of techniques, including genetic algorithms, ant systems and particle swarm optimization.

Introduction
Optimization problems arise in almost every field, ranging from academic research to industrial applications.
Meta-heuristics, such as genetic algorithms,
particle swarm optimization and ant colony
systems, have received increasing attention in
recent years for their interesting characteristics
and their success in solving problems in a
number of realms.
Detailed implementations in C are given.


Genetic Algorithms
Darwin's theory of natural evolution
Creatures compete with each other for limited
resources.
Those individuals that survive in the competition have
the opportunity to reproduce and generate
descendants.
Exchange of genes by mating may result in superior
or inferior descendants.
The process of natural selection eventually filters out inferior individuals and retains those best adapted to their environment.

Genetic Algorithms (cont)


Genetic algorithms
were introduced by J. Holland in 1975,
work on a population of potential solutions, in the form of chromosomes, and
try to locate the best solution through the process of artificial evolution, which consists of repeated artificial genetic operations, namely
Evaluation
Selection
Crossover and mutation

Test Function



Representation
Select an adequate coding scheme to
represent potential solutions in the search
space in the form of chromosomes.
binary string coding for numerical optimization
expression trees for genetic programming
city index permutation for the travelling
salesperson problem


Representation (cont)
We use a typical binary string coding for the test function F1.
Each genotype has 16 bits to encode an independent variable.
A decoding function maps the 65536 possible combinations of b15...b0 onto the range [-5,5) linearly.
A chromosome is then formed by cascading the genotypes for each variable.
With this coding scheme, any 32-bit binary string stands for a legal point in the problem domain.


Representation (cont)

1110101110110011 | 0010110011111010
(1110101110110011)2 = (60339)10
x = 60339/2^16*10-5 = 4.205780029296875
(0010110011111010)2 = (11514)10
y = 11514/2^16*10-5 = -3.24310302734375
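A minimal C sketch of this decoding step (the function name decode_gene is illustrative; only the 16-bit coding and the [-5,5) range come from the slides):

```c
#include <stdint.h>

/* Decode a 16-bit genotype onto [-5, 5) linearly, as described above.
 * gene: integer value of bits b15...b0 (0..65535). */
double decode_gene(uint16_t gene)
{
    return (double)gene / 65536.0 * 10.0 - 5.0;
}

/* Example: decode_gene(60339) == 4.205780029296875 (the x above),
 *          decode_gene(11514) == -3.24310302734375 (the y above). */
```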


Population Size
The choice of population size, N, is a
tradeoff between solution quality and
computation cost.
A larger population size maintains higher genetic diversity and therefore a higher probability of locating the global optimum, but at a higher computational cost.

Operation of Genetic Algorithms


Step 1 Initialization
Step 2 Evaluation
Step 3 Selection
Step 4 Crossover
Step 5 Mutation
Step 6 Termination Checking
Go to Step 2 if not terminated
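A skeleton of this loop in C might look as follows; POP_SIZE, MAX_GEN and the per-step helper functions are hypothetical names used only to show the control flow, not the original implementation:

```c
#define POP_SIZE 10     /* N, the population size                  */
#define MAX_GEN  20     /* termination: number of generations      */

/* Hypothetical helpers, one per step described above. */
void initialize(void);   /* Step 1: random 0/1 bits                */
void evaluate(void);     /* Step 2: fitness of each chromosome     */
void select_pop(void);   /* Step 3: roulette-wheel reproduction    */
void crossover(void);    /* Step 4: exchange genetic material      */
void mutate(void);       /* Step 5: flip bits with probability PM  */

void genetic_algorithm(void)
{
    initialize();                           /* Step 1 */
    for (int gen = 0; gen < MAX_GEN; gen++) {
        evaluate();                         /* Step 2 */
        select_pop();                       /* Step 3 */
        crossover();                        /* Step 4 */
        mutate();                           /* Step 5 */
    }                                       /* Step 6: stop after MAX_GEN generations */
}
```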

Step 1 Initialization
Each bit of all N chromosomes in the population
is randomly set to 0 or 1.
This operation in effect spreads the chromosomes randomly over the problem domain.
Whenever possible, it is suggested to
incorporate any a priori knowledge of the search
space into the initialization process to endow the
genetic algorithm with a better starting point.


Step 2 Evaluation


Step 3 Selection


f1=1, f2=2, f3=3

For SF=1, Pr(c2 is selected) = 2/(1+2+3) = 0.33
For SF=3, Pr(c2 is selected) = 2^3/(1^3+2^3+3^3) = 8/36 = 0.22


f1=1, f2=2,f3=3,f4=4,f5=5
SF=1
p1=1/(1+2+3+4+5)=1/15=0.067
p2=2/15=0.133
p3=3/15=0.200
p4=4/15=0.267
p5=5/15=0.333
p1+p2+p3+p4+p5=1

tmpf = 0.642
Select the first chromosome k for which p1+p2+...+pk > 0.642
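A C sketch of this roulette-wheel draw, assuming the fitness values have already been raised to the power SF; the array and function names are illustrative:

```c
#include <stdlib.h>

/* Pick one chromosome index by roulette-wheel selection.
 * fitness[i] is assumed non-negative (already raised to the power SF). */
int roulette_select(const double *fitness, int n)
{
    double total = 0.0;
    for (int i = 0; i < n; i++)
        total += fitness[i];

    /* tmpf plays the role of the 0.642 in the example above,
     * but is scaled to the total fitness instead of to 1. */
    double tmpf = (double)rand() / RAND_MAX * total;

    double cumulative = 0.0;
    for (int i = 0; i < n; i++) {
        cumulative += fitness[i];
        if (cumulative >= tmpf)   /* first index whose cumulative sum exceeds tmpf */
            return i;
    }
    return n - 1;                 /* guard against rounding error */
}
```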


Step 3 Selection (cont)


As a result, better chromosomes will have
more copies in the new population,
mimicking the process of natural selection.
In some applications, the best
chromosome found is always retained in
the next generation to ensure its genetic
material remains in the gene pool.


Step 4 Crossover
Pairs of chromosomes in the newly generated population are subject to a crossover (or swap) operation with probability PC, called the Crossover Rate.
The crossover operator generates new chromosomes by exchanging the genetic material of a pair of chromosomes at randomly selected sites, as depicted in Figure 3.
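A C sketch of a crossover operator on the 32-bit chromosomes used above; a single crossover site is assumed here, and the value of PC and the function name are illustrative:

```c
#include <stdint.h>
#include <stdlib.h>

#define PC 0.8   /* crossover rate (illustrative value) */

/* With probability PC, swap the bits of two 32-bit chromosomes below a
 * randomly chosen crossover site (single-point crossover assumed). */
void crossover_pair(uint32_t *a, uint32_t *b)
{
    if ((double)rand() / RAND_MAX >= PC)
        return;                          /* this pair is left unchanged */

    int site = 1 + rand() % 31;          /* crossover site: 1..31   */
    uint32_t mask = (1u << site) - 1u;   /* bits below the site     */

    uint32_t swapped = (*a & ~mask) | (*b & mask);
    *b               = (*b & ~mask) | (*a & mask);
    *a               = swapped;
}
```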


Step 4 Crossover (cont)


Similar to the process of natural breeding,
the newly generated chromosomes can be
better or worse than their parents. They
will be tested in the subsequent selection
process, and only those which are an
improvement will thrive.


Step 5 Mutation
After the crossover operation, each bit of every chromosome is subjected to mutation with probability PM, called the Mutation Rate.
Mutation flips bit values and introduces
new genetic material into the gene pool.
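A C sketch of this bit-flip mutation; the value of PM and the function name are illustrative:

```c
#include <stdint.h>
#include <stdlib.h>

#define PM 0.01   /* mutation rate (illustrative value) */

/* Flip each of the 32 bits of a chromosome independently with probability PM. */
void mutate_chromosome(uint32_t *chrom)
{
    for (int bit = 0; bit < 32; bit++) {
        if ((double)rand() / RAND_MAX < PM)
            *chrom ^= (1u << bit);   /* flip this bit */
    }
}
```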


Step 5 Mutation (cont)


This operation is essential to avoid the
entire population converging to a single
instance of a chromosome, since
crossover becomes ineffective in such
situations.
In most applications, the mutation rate should be kept low; it acts as a background operator that prevents the genetic algorithm from degenerating into a random walk.

Step 6 Termination Checking


Genetic algorithms repeat Step 2 to Step 5 until a given termination criterion is met, such as a pre-defined number of generations being reached, or the best solution failing to improve for a given number of generations.
Once terminated, the algorithm reports the best chromosome it has found.

Experiment Results
The global optimum is located at approximately
F1(1.9931,1.9896) = 4.2947.
With a population of size 10, after 20 generations, the genetic algorithm was capable of locating a near-optimal solution at F1(1.9853,1.9810) = 4.2942.
Due to the stochastic nature of genetic algorithms, the same program may produce different results on different machines.


Discussions
Important characteristics providing
robustness
They search from a population of points rather
than a single point.
They use the objective function directly, not its derivatives.
They use probabilistic transition rules, not deterministic ones, to guide the search toward promising regions.

Discussions (cont)
In effect, genetic algorithms maintain a
population of candidate solutions and
conduct stochastic searches via
information selection and exchange.
It is well recognized that, with genetic algorithms, near-optimal solutions can be obtained at a justifiable computation cost.


Discussions (cont)
However, it is difficult for genetic algorithms to pinpoint the global optimum.
In practice, a hybrid approach is recommended, incorporating gradient-based or local greedy optimization techniques.
In such an integration, genetic algorithms act as coarse-grained optimizers and the gradient-based methods as fine-grained ones.

Discussions (cont)
The power of genetic algorithms originates
from the chromosome coding and
associated genetic operators.
It is worth paying attention to these issues
so that genetic algorithms can explore the
search space more efficiently.


Discussions (cont)
The selection factor controls the discrimination
between superior and inferior chromosomes.
In some applications, more sophisticated
reshaping of the fitness landscape may be
required.
Other selection schemes (Whitley 1993), such as rank-based selection or tournament selection, are possible alternatives for controlling this discrimination.

Variants
Parallel genetic algorithms
Island-model genetic algorithms
maintain genetic diversity by splitting a population into several sub-populations, each of which evolves independently and occasionally exchanges information with the others


Variants (cont)
Multiple-objective genetic algorithms
attempt to locate all near-optimal solutions by carefully controlling the number of copies of superior chromosomes, so that the population will not be dominated by the single best chromosome


Variants (cont)
Co-evolutionary systems
have two or more independently evolving populations. The objective function for each population is not static, but a dynamic function that depends on the current states of the other populations.
This architecture vividly models interacting systems, such as predator and prey, or virus and immune system.

Particle Swarm Optimization


Some social systems of natural species,
such as flocks of birds and schools of fish,
possess interesting collective behavior.
In these systems, globally sophisticated
behavior emerges from local, indirect
communication amongst simple agents
with only limited capabilities.


Particle Swarm Optimization (cont)


Kennedy and Eberhart (1995) realized that an optimization problem can be formulated as that of a flock of birds flying across an area seeking a location with abundant food.
This observation, together with some abstraction and modification techniques, led to the development of a novel optimization technique: particle swarm optimization.


Particle Swarm Optimization (cont)


Particle swarm optimization optimizes an objective function by conducting a population-based search.
The population consists of potential solutions, called particles, which are a metaphor for birds in a flock.
These particles are randomly initialized
and freely fly across the multi-dimensional
search space.

Particle Swarm Optimization (cont)


During flight, each particle updates its velocity and position based on the best experience of its own and of the entire population.
This updating policy drives the particle swarm toward regions with higher objective values, and eventually all particles will gather around the point with the highest objective value.

Step 1 Initialization
The velocity and position of all particles are randomly set within pre-specified or legal ranges.


Step 2 Velocity Updating


Step 2 Velocity Updating (cont)


The inclusion of random variables endows particle swarm optimization with the ability to search stochastically.
The weighting factors, c1 and c2, balance the inevitable tradeoff between exploration and exploitation.
After the update, vi should be checked and clamped to a pre-specified range to avoid violent random walks.
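The update equation itself is not reproduced in this text, so the following C sketch uses the standard velocity-update form with an inertia weight w, the factors c1 and c2, and uniform random numbers r1 and r2, followed by the clamping mentioned above; all constant values here are assumptions:

```c
#include <stdlib.h>

#define DIM   2      /* dimensionality of the search space (F1 has two variables) */
#define W     0.7    /* inertia weight      (assumed value) */
#define C1    1.5    /* cognitive factor c1 (assumed value) */
#define C2    1.5    /* social factor c2    (assumed value) */
#define VMAX  1.0    /* velocity clamp      (assumed value) */

/* Update one particle's velocity from its own best position (pbest)
 * and the population's best position (gbest), then clamp it. */
void update_velocity(double v[DIM], const double x[DIM],
                     const double pbest[DIM], const double gbest[DIM])
{
    for (int d = 0; d < DIM; d++) {
        double r1 = (double)rand() / RAND_MAX;
        double r2 = (double)rand() / RAND_MAX;
        v[d] = W * v[d]
             + C1 * r1 * (pbest[d] - x[d])
             + C2 * r2 * (gbest[d] - x[d]);
        if (v[d] >  VMAX) v[d] =  VMAX;   /* clamp to avoid violent random walks */
        if (v[d] < -VMAX) v[d] = -VMAX;
    }
}
```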

Step 3 Position Updating
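Position updating typically just adds the updated velocity to the current position; a short C sketch under the same assumptions as the velocity sketch:

```c
#define DIM 2   /* same dimensionality as in the velocity sketch */

/* Move a particle by its (already updated and clamped) velocity. */
void update_position(double x[DIM], const double v[DIM])
{
    for (int d = 0; d < DIM; d++)
        x[d] += v[d];
}
```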


Step 4 Memory Updating
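Memory updating keeps each particle's personal best and the swarm's global best up to date; a C sketch assuming a maximization problem and a caller-supplied objective function f:

```c
#include <string.h>

#define DIM 2   /* same dimensionality as in the previous sketches */

/* If the particle's current position x beats its personal best, record it;
 * if it also beats the global best, record that too.
 * f is the objective function, assumed here to be maximized. */
void update_memory(const double x[DIM], double (*f)(const double *),
                   double pbest[DIM], double *pbest_val,
                   double gbest[DIM], double *gbest_val)
{
    double val = f(x);
    if (val > *pbest_val) {
        *pbest_val = val;
        memcpy(pbest, x, sizeof(double) * DIM);
        if (val > *gbest_val) {
            *gbest_val = val;
            memcpy(gbest, x, sizeof(double) * DIM);
        }
    }
}
```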


Step 5 Termination Checking


Test Function


Experiment Results


Distribution of Particles



Variants
A discrete binary version of the particle swarm optimization algorithm was proposed by Kennedy and Eberhart (1997).
Shi and Eberhart (2001) applied fuzzy theory to the particle swarm optimization algorithm.
Shi and Krohling (2002) successfully incorporated the concept of co-evolution for solving min-max problems.
Chu et al. (2003) proposed a parallel architecture with communication mechanisms for information exchange among independent particle groups, with which solution quality can be significantly improved.


Ant System
Inspired by the food-seeking behavior of real ants, the Ant System, introduced by Dorigo et al. (1996), has proven to be an efficient and effective tool for combinatorial optimization problems.


Ant System (cont)


In nature, a real ant wandering in its surrounding
environment will leave a biological trace, called
pheromone, on its path.
The intensity of the deposited pheromone biases the path-taking decisions of subsequent ants.
A shorter path will possess higher pheromone
concentration and therefore encourage
subsequent ants to follow it.
As a result, an initially irregular path from nest to
food will eventually contract to a shorter path.

Ant System (cont)


With appropriate abstraction and
modification, this observation has led to a
number of successful computational
models for combinatorial optimization.


Test Problem
Travelling Salesman Problem
In the TSP, a travelling salesman looks for a route that visits all cities with minimal total distance.



Operation
Suppose there are n cities and m ants.
The entire algorithm starts with initial pheromone
intensity set to 0 on all edges.
In every subsequent ant system cycle, or
episode, each ant begins its trip from a randomly
selected starting city and is required to visit
every city exactly once (a Hamiltonian Circuit).
The experience gained in this phase is then
used to update the pheromone intensity on all
edges.

Step 1 Initialization
Initial pheromone intensities on all edges
are set to 0.


Step 2 Walking phase


In this phase, each ant begins its trip from
a randomly selected starting city and is
required to visit every city exactly once.
When an ant, the k-th ant for example, is
located at city r and needs to decide the
next city s, the path-taking decision is
made stochastically based on the
following probability function:
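The probability function is not reproduced in this text; the sketch below uses the standard Ant System form, in which the probability of moving from city r to an unvisited city s is proportional to tau(r,s)^alpha * (1/d(r,s))^beta, where tau is the pheromone intensity and alpha, beta are weighting parameters. All symbols and names here are assumptions:

```c
#include <math.h>
#include <stdlib.h>

#define N_CITIES 10   /* number of cities n (illustrative)          */
#define ALPHA    1.0  /* weight of pheromone intensity (assumed)    */
#define BETA     2.0  /* weight of visibility 1/distance (assumed)  */

/* Choose the next city for an ant located at city r.
 * tau[r][s]  : pheromone intensity on edge (r, s)
 * dist[r][s] : distance between cities r and s
 * visited[s] : nonzero if city s has already been visited */
int choose_next_city(int r, const double tau[N_CITIES][N_CITIES],
                     const double dist[N_CITIES][N_CITIES],
                     const int visited[N_CITIES])
{
    double weight[N_CITIES], total = 0.0;

    for (int s = 0; s < N_CITIES; s++) {
        weight[s] = 0.0;
        if (!visited[s] && s != r)
            weight[s] = pow(tau[r][s], ALPHA) * pow(1.0 / dist[r][s], BETA);
        total += weight[s];
    }

    if (total <= 0.0) {                 /* e.g. all pheromone still zero */
        for (int s = 0; s < N_CITIES; s++)
            if (!visited[s] && s != r) return s;
        return -1;
    }

    /* Roulette-wheel draw over the unvisited cities. */
    double pick = (double)rand() / RAND_MAX * total;
    double cumulative = 0.0;
    for (int s = 0; s < N_CITIES; s++) {
        cumulative += weight[s];
        if (weight[s] > 0.0 && cumulative >= pick)
            return s;
    }
    return -1;                          /* no unvisited city left */
}
```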


Step 2 Walking phase (cont)


According to this transition rule, an ant favors a nearer city and a path with higher pheromone intensity.
A weighting parameter controls the relative importance of these two factors.
During the circuit, the route taken by each ant is recorded for pheromone updating in Step 3.
The best route found so far is also tracked.

Step 3 Updating phase


Step 3 Updating phase (cont)


The updated pheromone intensities are then
used to guide the path-taking decision in the
next ant system cycle.
It can be expected that, as the ant system cycle
proceeds, the pheromone intensities on the
edges will converge to values reflecting their
potential for being components of the shortest
route.
The higher the intensity, the greater the chance of being a link in the shortest route, and vice versa.
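The updating rule itself is not shown in this text; the sketch below follows the usual Ant System form, in which pheromone first evaporates by a factor (1 - rho) and each ant then deposits Q/L_k on every edge of its tour of length L_k. The parameters rho and Q and the array names are assumptions:

```c
#define N_CITIES 10   /* number of cities n (illustrative)    */
#define N_ANTS   10   /* number of ants m (illustrative)      */
#define RHO      0.5  /* pheromone evaporation rate (assumed) */
#define Q        1.0  /* deposit constant (assumed)           */

/* tours[k][i] is the i-th city visited by ant k; length[k] is the tour length. */
void update_pheromone(double tau[N_CITIES][N_CITIES],
                      const int tours[N_ANTS][N_CITIES],
                      const double length[N_ANTS])
{
    /* Evaporation on every edge. */
    for (int r = 0; r < N_CITIES; r++)
        for (int s = 0; s < N_CITIES; s++)
            tau[r][s] *= (1.0 - RHO);

    /* Each ant deposits pheromone on the edges of its own tour. */
    for (int k = 0; k < N_ANTS; k++) {
        for (int i = 0; i < N_CITIES; i++) {
            int r = tours[k][i];
            int s = tours[k][(i + 1) % N_CITIES];   /* closing the circuit */
            tau[r][s] += Q / length[k];
            tau[s][r] += Q / length[k];             /* symmetric TSP assumed */
        }
    }
}
```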

Step 4 Termination Checking


Ant systems repeat Step 2 and Step 3 until a certain termination criterion is met:
A pre-defined number of episodes has been performed, or
the algorithm has failed to make improvements for a certain number of episodes.

Once terminated, the ant system reports the shortest route found.

Experiment Results
Figure 6 reports a shortest route found of length 3.308, which is the true shortest route, as validated by exhaustive search.


Ant Colony System


A close inspection of the ant system reveals that the heavy computation required may make it prohibitive in certain applications.
The Ant Colony System was introduced by Dorigo and Gambardella (1997) to remedy this difficulty.


Ant Colony System (cont)


Ant colony systems differ from the simpler ant system in the following ways:
Explicit control of exploration and exploitation
Local updating
Counting only the shortest route in global updating


Explicit control of exploration and exploitation
When an ant is located at city r and needs to decide the next city s, there are two modes for the path-taking decision, namely exploitation and biased exploration.
The mode to be used is governed by a random variable q drawn from (0, 1):
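A C sketch of this two-mode decision, following the usual Ant Colony System rule: with probability q0 the ant exploits (takes the most attractive edge), and otherwise it falls back to the biased-exploration draw of the basic ant system. The threshold q0 is an assumed parameter, and choose_next_city refers to the earlier walking-phase sketch:

```c
#include <math.h>
#include <stdlib.h>

#define N_CITIES 10   /* number of cities (illustrative)  */
#define BETA     2.0  /* weight of visibility (assumed)   */
#define Q0       0.9  /* exploitation threshold (assumed) */

/* Biased-exploration fallback, as in the basic ant-system sketch earlier. */
int choose_next_city(int r, const double tau[N_CITIES][N_CITIES],
                     const double dist[N_CITIES][N_CITIES],
                     const int visited[N_CITIES]);

int acs_next_city(int r, const double tau[N_CITIES][N_CITIES],
                  const double dist[N_CITIES][N_CITIES],
                  const int visited[N_CITIES])
{
    double q = (double)rand() / RAND_MAX;   /* random variable in (0, 1) */

    if (q <= Q0) {
        /* Exploitation: deterministically take the most attractive edge. */
        int best = -1;
        double best_w = -1.0;
        for (int s = 0; s < N_CITIES; s++) {
            if (visited[s] || s == r) continue;
            double w = tau[r][s] * pow(1.0 / dist[r][s], BETA);
            if (w > best_w) { best_w = w; best = s; }
        }
        return best;
    }
    /* Biased exploration: stochastic choice as in the basic ant system. */
    return choose_next_city(r, tau, dist, visited);
}
```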



Local updating
A local updating rule is applied whenever
a edge from city r to city s is taken:
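A sketch of a local updating rule of the usual ACS form, tau(r,s) <- (1 - rho) * tau(r,s) + rho * tau0, applied as soon as an ant crosses the edge; rho and tau0 are assumed parameters, not taken from the slides:

```c
#define RHO_LOCAL 0.1    /* local evaporation rate (assumed)       */
#define TAU0      0.01   /* initial/base pheromone level (assumed) */

/* Applied immediately whenever an ant moves from city r to city s;
 * tau_rs points at the pheromone value of that edge. */
void local_update(double *tau_rs)
{
    *tau_rs = (1.0 - RHO_LOCAL) * (*tau_rs) + RHO_LOCAL * TAU0;
}
```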


Count only the shortest route in global updating
Once all ants have completed their circuits, the shortest route found in the current episode is used in the global updating rule:
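A sketch of a global updating rule of the usual ACS form, which reinforces only the edges of the episode's best route by an amount proportional to 1/L_best; the parameter rho, the array names and the route representation are assumptions:

```c
#define N_CITIES 10    /* number of cities (illustrative)   */
#define RHO      0.1   /* global evaporation rate (assumed) */

/* best_route holds the city sequence of the shortest route of this
 * episode; best_len is its total length. Only its edges are reinforced. */
void global_update(double tau[N_CITIES][N_CITIES],
                   const int best_route[N_CITIES], double best_len)
{
    for (int i = 0; i < N_CITIES; i++) {
        int r = best_route[i];
        int s = best_route[(i + 1) % N_CITIES];   /* closing the circuit */
        tau[r][s] = (1.0 - RHO) * tau[r][s] + RHO * (1.0 / best_len);
        tau[s][r] = tau[r][s];                    /* symmetric TSP assumed */
    }
}
```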


Discussion
In some respects, the ant system implements the idea of emergent computation: a global solution emerges from distributed agents performing local transactions, which is the working paradigm of real ants.
The success of ant systems in combinatorial optimization makes them a promising tool for a large set of problems in the NP-complete class (Papadimitriou and Steiglitz 1982).

Variants
In addition, the work of Wang and Wu (2001) has extended the applicability of ant systems into continuous search spaces.
Chu et al. (2003) have proposed a parallel
ant colony system, in which groups of ant
colonies explore the search space
independently and exchange their
experiences at certain time intervals.
