Differential Evolution Algorithm Overview

Metaheuristics Algorithms

Uploaded by Jaweria
Differential Evolution

DE
❖ It is a stochastic, population-based optimization algorithm for
solving nonlinear optimization problems.

❖ Each solution is known as a chromosome.

❖ Each chromosome undergoes mutation followed by recombination:

Target vector → Mutation → Donor vector → Recombination → Trial vector

Selection of the better solution is performed only after all trial vectors have been generated.
Greedy selection is performed between the target and trial vectors.
Optimization problem
Consider an optimization problem:

Minimize f(X)

• where X = [x1, x2, …, xD] and D is the number of decision variables.

• The algorithm was introduced by Storn and Price in 1995.
Population

This is a population-based algorithm.

Consider a population of size Np, stored as an Np × D matrix whose entries are the variables x_{i,n}^g,

where g is the generation, n = 1, 2, …, D and i = 1, 2, …, Np.

Each row of the population is a target vector.
Initial Population
• The initial population is generated randomly between the lower and upper bounds:

x_{i,n}^0 = lb_n + rand(0, 1) · (ub_n − lb_n)

• i = 1, 2, 3, …, Np and n = 1, 2, 3, …, D

%%%%%%% Matlab commands to perform task %%%%%%%

f  = zeros(Np,1);  % Vector to store the fitness function values of the population members
fu = zeros(Np,1);  % Vector to store the fitness function values of the new population members
D  = length(lb);   % Number of decision variables in the problem
U  = zeros(Np,D);  % Matrix to store the trial solutions
P  = repmat(lb,Np,1) + repmat((ub-lb),Np,1).*rand(Np,D); % Generation of the initial population

for p = 1:Np
    f(p) = prob(P(p,:)); % Evaluating the fitness function of the initial population
end
Mutation
For each target vector, select three other vectors randomly, i.e.,

random indices r1, r2, r3 ∈ {1, 2, 3, …, Np}

Add the weighted difference of two of the vectors to the third to get the donor vector:

Vi = Xr1 + F · (Xr2 − Xr3)

The three random vectors are mutually distinct and also different from the target vector.

Here Vi is called the donor vector, and F (the scaling factor, a user-defined parameter) is generally taken between 0 and 1.
Important points in Mutation

❖ The target vector is not involved in mutation.
❖ In total, four vectors are involved in the mutation of a target vector.
❖ Therefore, Np ≥ 4.
Matlab command for Mutation
%%%%%% Matlab command %%%%%%
v = X1 + F*(X2 - X3);
Recombination/Crossover
A trial vector Ui is developed from the target vector Xi and the donor vector Vi:

u(i,j) = v(i,j)  if rand ≤ Cp or j = Irand

and

u(i,j) = x(i,j)  otherwise

where Irand is a random integer between 1 and D (ensuring that at least one variable is taken from the donor vector), and Cp is the recombination (crossover) probability, defined by the user.
Matlab commands for Crossover

%% Crossover
del = randi(D,1); % Generating the random variable delta
for j = 1:D
    if (rand <= Cp) || del == j % Check: take variable from donor or target vector
        U(i,j) = V(j);   % Accept variable from donor vector
    else
        U(i,j) = P(i,j); % Accept variable from target vector
    end
end

Selection
• The target vector is compared with the trial vector, and the one with the lower function value is selected for the next generation.
• This is known as greedy selection.
Matlab Commands
%% Bounding and Greedy Selection
for j = 1:Np
    U(j,:) = min(ub,U(j,:)); % Bounding the violating variables to their upper bound
    U(j,:) = max(lb,U(j,:)); % Bounding the violating variables to their lower bound
    fu(j) = prob(U(j,:));    % Evaluating the fitness of the trial solution
    if fu(j) < f(j)          % Greedy selection
        P(j,:) = U(j,:);     % Include the new solution in the population
        f(j) = fu(j);        % Include its fitness value in the population
    end
end
Final output of Matlab
[bestfitness,ind] = min(f);
bestsol = P(ind,:);
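The MATLAB fragments above can be assembled into a single loop. Below is a minimal Python sketch of the same DE/rand/1/bin scheme (NumPy-based; the parameter values Np = 20, F = 0.8, Cp = 0.9 and the sphere test function are illustrative assumptions, not taken from the slides). As on the slides, selection is performed only after all trial vectors of a generation have been created.

```python
import numpy as np

def de(prob, lb, ub, Np=20, F=0.8, Cp=0.9, max_gen=200, seed=0):
    """DE/rand/1/bin sketch: mutation, binomial crossover, greedy selection."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    D = lb.size
    P = lb + (ub - lb) * rng.random((Np, D))       # random initial population
    f = np.array([prob(p) for p in P])             # fitness of each target vector
    U = np.empty_like(P)                           # trial vectors
    for _ in range(max_gen):
        for i in range(Np):
            # three mutually distinct random indices, all different from i
            r1, r2, r3 = rng.choice([j for j in range(Np) if j != i], 3, replace=False)
            V = P[r1] + F * (P[r2] - P[r3])        # donor vector (mutation)
            Irand = rng.integers(D)                # ensures one donor component survives
            mask = rng.random(D) <= Cp
            mask[Irand] = True
            U[i] = np.where(mask, V, P[i])         # trial vector (binomial crossover)
        U = np.clip(U, lb, ub)                     # bound the violating variables
        fu = np.array([prob(u) for u in U])
        better = fu < f                            # greedy selection, only after all
        P[better], f[better] = U[better], fu[better]  # trial vectors are generated
    ind = np.argmin(f)
    return P[ind], f[ind]

# usage: minimize the sphere function in 5 dimensions
best, best_f = de(lambda x: float(np.sum(x**2)), lb=[-5]*5, ub=[5]*5)
```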
Flow chart of DE
Bio-Inspired Algorithms
Evolution and swarm
Background
Nature-inspired algorithms can be classified as:
1. those based on physical and chemical processes in nature, and
2. those inspired by biology (bio-inspired).

The bio-inspired algorithms can be further divided according to the behaviors they imitate:

1. Evolutionary inspiration (GA, DE)
2. Swarm behavior (PSO, ACO, CS, etc.)

Note: Physics- and chemistry-based algorithms are nature-based algorithms. For example, SA (nature-inspired but not bio-inspired).
Swarm Intelligence
Algorithms based on swarm behavior
Swarm (crowd/flock)
1. A large or dense group of flying insects.
2. A large number of honeybees that leave a hive en masse with a
newly fertilized queen in order to establish a new colony.
3. A large number of people or things.

We will discuss only the popular algorithms based on swarm behavior:

1. Ant Colony Optimization (ACO)
2. Particle Swarm Optimization (PSO)
3. Cuckoo Search (CS)
Ant Colony optimization (ACO)
Probabilistic technique for finding
optimal path
 The basic idea of ant colony optimization (ACO) algorithms is to imitate the cooperative behavior of real ants to solve optimization problems.

 The ACO metaheuristic was proposed by M. Dorigo in the early 1990s in his Ph.D. thesis.

 Traditionally, ACO has been applied to combinatorial optimization problems, and it has achieved widespread success on a range of problems (e.g., scheduling, routing, assignment).

 The main interest of real ants' behavior is that simple ants, acting collectively, perform complex tasks such as transporting food and finding the shortest paths to food sources.

 ACO algorithms mimic the principle that, using a very simple communication mechanism, an ant colony is able to find the shortest path between two points.
Example
Let us see an example of this.

Consider two paths from the colony to the food. At first, there is no pheromone on the ground, so the probability of choosing either path is equal, i.e., 50%.

Suppose two ants choose the two different paths, since the probability of choosing each is fifty-fifty. The distances of the two paths are different: the ant following the shorter path reaches the food earlier than the other.

After finding food, each ant carries some food back toward the colony, and while tracking the return path it deposits pheromone on the ground. The ant following the shorter path reaches the colony earlier.

When a third ant sets out in search of food, it follows the path with the shorter distance, guided by the pheromone level on the ground. Since the shorter path now has more pheromone than the longer one, the third ant follows the path with more pheromone.

By the time the ant following the longer path has returned to the colony, more ants have already followed the path with the higher pheromone level.

If at some moment another ant setting out from the colony toward the food finds both paths with the same pheromone level, it chooses one at random, say the upper path in the figure.

• Repeating this process again and again, after some time:
• the shorter path accumulates more pheromone than the others and has a higher probability of being followed,
• and eventually all ants follow the shorter path.
Algorithm
The following presents the template algorithm for ACO.
First, the pheromone information is initialized.
The algorithm is mainly composed of two iterated steps:
1. solution construction and
2. pheromone update.
Pseudo Code
 Algorithm: Template of the ACO
 Initialize the pheromone trails;
 Repeat
 For each ant Do
 Solution construction using the pheromone trails;
 Update the pheromone trails:
 Evaporation;
 Reinforcement;
 Until stopping criteria
 Output: best solution found, or a set of solutions
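As an illustration of this template, here is a minimal Python sketch on a toy 4-city travelling-salesman problem (the parameter values α = 1, β = 2, ρ = 0.5, Q = 1 and the TSP instance are illustrative assumptions, not from the slides):

```python
import numpy as np

def aco_tsp(dist, n_ants=20, n_iter=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0, seed=0):
    """ACO template: repeated solution construction + pheromone update."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                      # initialize the pheromone trails
    eta = 1.0 / (dist + np.eye(n))             # heuristic visibility (eye avoids /0)
    best_tour, best_len = None, np.inf
    for _ in range(n_iter):
        tours = []
        for _ in range(n_ants):                # solution construction per ant
            tour = [rng.integers(n)]
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False             # exclude visited cities
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1 - rho)                       # pheromone update: evaporation
        for tour, length in tours:             # pheromone update: reinforcement
            for k in range(n):
                i, j = tour[k], tour[(k + 1) % n]
                tau[i, j] += Q / length
                tau[j, i] += Q / length
    return best_tour, best_len

# usage: 4 cities on the corners of a unit square (perimeter tour has length 4)
pts = np.array([[0, 0], [0, 1], [1, 1], [1, 0]], float)
dist = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
tour, length = aco_tsp(dist)
```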
Particle Swarm Optimization
(PSO)
PSO

Particle swarm optimization (PSO) was developed by Kennedy and Eberhart in 1995, based on swarm behavior in nature, such as fish schooling and bird flocking.

Particles move towards promising areas of the search space in order to reach the global optimum.

 PSO is a population-based stochastic optimization technique based on the movement and intelligence of swarms.

 In PSO, the concept of social interaction is used for solving a problem.

 It uses a number of particles (agents) that constitute a swarm moving around in the search space, looking for the best solution.

 Each particle in the swarm keeps track of the coordinates in the solution space associated with the best solution it has achieved so far. This is known as pbest, or personal best.

 Another best value, known as gbest or global best, is tracked by PSO. This is the best value obtained so far by any particle in the neighborhood of that particle.
Position Modification
Each particle modifies its position according to:
 its current position
 its current velocity
 the distance between its current position and pbest
 the distance between its current position and gbest
Working of PSO
1. Create a ‘population’ of agents (particles) uniformly distributed over the search space X.
2. Evaluate each particle’s position using the objective function.
3. If a particle’s present position is better than its previous best position, update pbest.
4. Find the best particle (according to the particles’ personal bests) and update gbest.
5. Update the particles’ velocities.
6. Move the particles to their new positions.
7. Go to step 2 until the stopping criteria are satisfied.
Parameters
The velocity and position of each particle are updated as:

v_i = w·v_i + c1·r1·(pbest_i − x_i) + c2·r2·(gbest − x_i)
x_i = x_i + v_i

where w is the inertia weight (a positive constant), c1 is the personal (cognitive) influence coefficient, c2 is the social influence coefficient, and r1, r2 are uniform random numbers in [0, 1].

Role of w: balancing global search (exploration/diversification, searching new solutions) and local search (exploitation/intensification).
Matlab code
• Will be shared later
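Until the MATLAB code is shared, the seven steps above can be sketched in Python as follows (a minimal global-best PSO; the parameter values w = 0.7, c1 = c2 = 1.5 and the sphere objective are illustrative assumptions):

```python
import numpy as np

def pso(prob, lb, ub, n_particles=30, w=0.7, c1=1.5, c2=1.5, max_iter=200, seed=0):
    """Minimal global-best PSO following the seven steps listed above."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    D = lb.size
    X = lb + (ub - lb) * rng.random((n_particles, D))   # step 1: uniform population
    V = np.zeros((n_particles, D))
    pbest = X.copy()
    pbest_f = np.array([prob(x) for x in X])
    g = np.argmin(pbest_f)
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for _ in range(max_iter):
        f = np.array([prob(x) for x in X])              # step 2: evaluate positions
        better = f < pbest_f                            # step 3: update pbest
        pbest[better], pbest_f[better] = X[better], f[better]
        g = np.argmin(pbest_f)                          # step 4: update gbest
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
        r1 = rng.random((n_particles, D))
        r2 = rng.random((n_particles, D))
        V = w*V + c1*r1*(pbest - X) + c2*r2*(gbest - X) # step 5: update velocities
        X = np.clip(X + V, lb, ub)                      # step 6: move particles
    return gbest, gbest_f                               # step 7 loops until max_iter

best, best_f = pso(lambda x: float(np.sum(x**2)), lb=[-5]*3, ub=[5]*3)
```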
Cuckoo Search
(CS)
Cuckoos
Cuckoos are fascinating birds, not only because of the
beautiful sounds they make but also because of their
aggressive reproduction strategy.

Important points:
1. Cuckoos do not build their own nests.
2. They search for the nests of other species in which to lay their eggs.
3. The host bird may discover the cuckoo's egg.
4. If the host bird discovers the cuckoo's egg, there are two possibilities:
   a) the host bird can throw the egg away, or
   b) it can abandon the nest and build a new one.
CS
CS is one of the latest nature-inspired metaheuristic
algorithms, developed in 2009 by Xin-She Yang and Suash
Deb.

CS is based on the brood parasitism of some cuckoo


species.

This algorithm is enhanced by the so-called Lévy flights


rather than by simple isotropic random walks.

CS is potentially far more efficient than PSO and GA.


Rules of CS
For simplicity in describing the standard CS, here we use the
following three idealized rules:

1. Each cuckoo lays one egg at a time and dumps it in a


randomly chosen nest.
2. The best nests with high-quality eggs will be carried over to
the next generations.
3. The number of available host nests is fixed, and the egg laid by a cuckoo is discovered by the host bird with a probability pa ∈ (0, 1). In this case, the host bird can either get rid of the egg or simply abandon the nest and build a completely new nest.

As a further approximation, this last assumption can be implemented by replacing a fraction pa of the n host nests with new nests (with new random solutions).
Implementation of CS
From the implementation point of view, we can use the
following simple representations:

1. Each egg in a nest represents a solution, and each


cuckoo can lay only one egg (thus representing one
solution).
2. The aim is to use the new and potentially better
solutions (cuckoos) to replace a not-so-good solution
in the nests.
3. This algorithm uses a balanced combination of a local
random walk and the global explorative random walk,
controlled by a switching parameter pa.

Local random walk

The local random walk can be written as

x_i^(t+1) = x_i^t + s ⊗ H(pa − ε) ⊗ (x_j^t − x_k^t)

where x_j^t and x_k^t are two different solutions at the t-th iteration, selected randomly by random permutation, H(u) is a Heaviside step function, ε is a random number drawn from a uniform distribution, and s is the step size.
Global random walk

The global random walk is carried out using Lévy flights:

x_i^(t+1) = x_i^t + α L(s, λ)

where α (> 0) is the step-size scaling factor and L(s, λ) is the Lévy-flight step.
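The Lévy step L(s, λ) is commonly sampled with Mantegna's algorithm; this is one standard choice rather than something stated on the slides. A Python sketch with λ = 1.5:

```python
import math
import numpy as np

def levy_step(size, lam=1.5, rng=None):
    """Mantegna's algorithm: heavy-tailed Lévy-stable steps with exponent lam."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2) /
             (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = rng.normal(0.0, sigma, size)     # numerator sample: N(0, sigma^2)
    v = rng.normal(0.0, 1.0, size)       # denominator sample: N(0, 1)
    return u / np.abs(v) ** (1 / lam)    # step with heavy Lévy tail

# usage: one global-walk move would be x_new = x + alpha * levy_step(x.size)
steps = levy_step(10000, rng=np.random.default_rng(0))
```

Most draws are small, but occasional very large jumps occur, which is exactly what makes Lévy flights more explorative than simple isotropic random walks.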
Pseudo code

Based on the three rules, the basic steps of CS can be summarized in the following pseudo code.

Cuckoo Search via Lévy Flights
Begin
• Objective function f(x), x = (x1, ..., xd)^T
• Generate an initial population of n host nests xi
• while (t < MaxGeneration) or (stop criterion)
    Get a cuckoo randomly
    Generate a solution by Lévy flights
    Evaluate its solution quality or objective value fi
    Choose a nest among n (say, j) randomly
    if (fi < fj)
        Replace j by the new solution i
    end
• A fraction (pa) of the worse nests is abandoned
• New nests/solutions are built/generated
• Keep the best solutions (or nests with quality solutions)
• Rank the solutions and find the current best
• Update t ← t + 1
• end while
• Post-process results and visualization
End
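A compact Python sketch of this loop (a simplified variant, not the authors' reference code: here the global walk scales each Lévy step by the distance to the current best nest, a common implementation choice, and all parameter values are illustrative):

```python
import math
import numpy as np

def cuckoo_search(f, lb, ub, n=15, pa=0.25, alpha=0.01, lam=1.5, max_gen=500, seed=0):
    """Simplified CS: Lévy-flight global walk plus pa-fraction nest abandonment."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    d = lb.size
    nests = lb + (ub - lb) * rng.random((n, d))
    fit = np.array([f(x) for x in nests])
    # Mantegna constant for Lévy steps with exponent lam
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2) /
             (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    for _ in range(max_gen):
        best = nests[np.argmin(fit)]
        # global walk via Lévy flights, scaled by distance to the current best
        u = rng.normal(0, sigma, (n, d))
        v = rng.normal(0, 1, (n, d))
        step = u / np.abs(v) ** (1 / lam)
        new = np.clip(nests + alpha * step * (nests - best), lb, ub)
        fnew = np.array([f(x) for x in new])
        imp = fnew < fit                      # greedy replacement
        nests[imp], fit[imp] = new[imp], fnew[imp]
        # abandonment: a fraction pa of nest components is rebuilt
        K = rng.random((n, d)) < pa
        diff = nests[rng.permutation(n)] - nests[rng.permutation(n)]
        new = np.clip(nests + rng.random() * diff * K, lb, ub)
        fnew = np.array([f(x) for x in new])
        imp = fnew < fit
        nests[imp], fit[imp] = new[imp], fnew[imp]
    ind = np.argmin(fit)
    return nests[ind], fit[ind]

# usage: minimize the sphere function in 3 dimensions
best_nest, best_f = cuckoo_search(lambda x: float(np.sum(x**2)), [-5]*3, [5]*3)
```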
Lecture#26
Topics to be covered

1. Hybrid Metaheuristics
2. Multi-objective Optimization
Hybrid Metaheuristics
The concept of hybrid metaheuristics has been commonly
accepted only in recent years, even if the idea of combining
different metaheuristic strategies and algorithms dates back
to the 1980s.

Today, we can observe a generalized common agreement


on the advantage of combining components from
different search techniques and the tendency of
designing hybrid techniques is widespread in the fields of
operations research and artificial intelligence.

The consolidated interest around hybrid metaheuristics is


also demonstrated by publications on classifications,
taxonomies and overviews on the subject
Here, we adopt the definition of hybrid metaheuristic in the
broad sense

Two ways:

1. The first consists of designing a solver that incorporates components from one metaheuristic into another.
2. The second combines metaheuristics with techniques from other fields, such as operations research and artificial intelligence (e.g., machine learning).
Multi-Objective Optimization
Problems
A multi-objective optimization problem has a number of
objective functions which are to be minimized or
maximized.

As in the single-objective optimization problem, here too


the problem usually has a number of constraints which any
feasible solution (including the optimal solution) must
satisfy. In the following, we state the multiobjective
optimization problem (MOOP) in its general form:
Definition

Minimize/Maximize  fm(x),  m = 1, 2, …, M
subject to         gj(x) ≥ 0,  j = 1, 2, …, J
                   hk(x) = 0,  k = 1, 2, …, K
                   xi(L) ≤ xi ≤ xi(U),  i = 1, 2, …, n

A solution x is a vector of n decision variables: x = (x1, …, xn).
The last set of constraints are called variable bounds, restricting each decision variable xi to take a value between its lower bound xi(L) and upper bound xi(U).
Examples
1. Minimizing cost and maximizing comfort while
buying a car (Two objectives)

2. Maximizing performance while minimizing fuel


consumption and emission of pollutants of a
vehicle (Three objectives)

are examples of multi-objective optimization problems


involving two and three objectives, respectively. In practical
problems, there can be more than three objectives.
Multi-objective optimization
Without loss of generality, let us discuss the fundamental difference between single- and multi-objective optimization using a two-objective optimization problem.
For two conflicting objectives, each objective corresponds
to a different optimal solution.

In the above-mentioned decision-making problem of buying


a car (shown graphically) solutions 1 and 2 are these
optimal solutions.
If a buyer is willing to sacrifice cost to some extent from
solution 1, the buyer can probably find another car with a
better comfort level than this solution. Ideally, the extent of
sacrifice in cost is related to the gain in comfort.

Thus, we can visualize a set of optimal solutions (such as


solutions 1, 2, A, B and C)
Multi-Objective Optimization
The figure indicates that the cheapest car (solution 1) has a hypothetical comfort level of 40%. For buyers for whom comfort is the only objective of this decision-making, the choice is solution 2 (with a hypothetical maximum comfort level of 90%, as shown in the figure).

Between these two extreme solutions, there exist many


other solutions, where a trade-off between cost and comfort
exists.
A number of such solutions (solutions A, B, and C) with
differing costs and comfort levels are shown in the figure.
Thus, between any two such solutions, one is better in
terms of one objective, but this betterment comes only from
a sacrifice on the other objective.
Dominance
In the single-objective optimization problem, the superiority
of a solution over other solutions is easily determined by
comparing their objective function values.

In a multi-objective optimization problem, the goodness of a solution is determined by dominance.
Dominance Test
Definition
A solution x1 dominates x2 if:
1. x1 is no worse than x2 in all objectives, and
2. x1 is strictly better than x2 in at least one objective.
• We then say x1 dominates x2, or equivalently, x2 is dominated by x1.
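For minimization objectives, this test is a few lines of Python (a sketch; for a maximized objective such as comfort, negate it first so that lower is better):

```python
def dominates(f1, f2):
    """True if objective vector f1 dominates f2 (all objectives minimized)."""
    no_worse = all(a <= b for a, b in zip(f1, f2))         # condition 1
    strictly_better = any(a < b for a, b in zip(f1, f2))   # condition 2
    return no_worse and strictly_better

# usage with (cost, -comfort) pairs from the car example: lower is better in both
print(dominates((10, -0.9), (12, -0.4)))  # True: cheaper and more comfortable
print(dominates((10, -0.4), (12, -0.9)))  # False: a trade-off, neither dominates
```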
