Particle Swarm Optimization
http://msdn.microsoft.com/en-us/magazine/msdnmag0811.aspx
James McCaffrey
Download the Code Sample
Particle swarm optimization (PSO) is an artificial intelligence (AI) technique that can be used to find approximate solutions to extremely difficult or impossible numeric maximization and minimization problems. The version of PSO I describe in this article was first presented in a 1995 research paper by J. Kennedy and R. Eberhart. PSO is loosely modeled on group behavior, such as bird flocking and fish schooling. The best way for you to get a feel for what PSO is and to see where I'm heading here is to examine Figure 1.
The first part of the figure describes a dummy problem being solved by a demonstration PSO program. The goal is to find values x0 and x1 so the value of the function f = 3 + x0^2 + x1^2 is minimized. In the print edition of the article, I use the notation ^2 to indicate the squaring operation. Notice that I've deliberately chosen an unrealistically simple problem to keep the ideas of PSO clear. It's obvious that the solution to this problem is x0 = 0.0 and x1 = 0.0, which yields a minimum function value of 3.0, so using PSO isn't really necessary. I discuss more realistic problems that can be solved by PSO later in this article. In this example, the dimension of the function to minimize is 2 because we need to solve for 2 numeric values. In general, PSO is well-suited to numeric problems with dimensions of 2 or larger. In most situations, PSO must have some constraints on the range of possible x values. Here x0 and x1 are arbitrarily limited to the range -100.0 to 100.0.
Figure 1 Particle Swarm Optimization Demo Run
The next part of Figure 1 indicates that the PSO program is using 10 particles and that the program will iterate 1,000 times. As you'll see shortly, each particle represents a possible solution to the PSO problem being solved. PSO is an iterative technique and in most cases it's not possible to know when an optimal solution has been found. Therefore, PSO algorithms usually must have some limit on the number of iterations to perform.
The next lines in Figure 1 indicate that each of the 10 particles in the swarm is initialized to a random position. A particle's position represents a possible solution to the optimization problem to be solved. The best randomly generated initial position is x0 = 26.53 and x1 = -6.09, which corresponds to a fitness (the measure of solution quality) of 3 + 26.53^2 + (-6.09)^2 = 744.12. The PSO algorithm then enters a main processing loop where each particle's position is updated on each pass through the loop. The update procedure is the heart of PSO and I'll explain it in detail later in this article. After 1,000 iterations, the PSO algorithm did in fact find the optimal solution of x0 = 0.0 and x1 = 0.0, but let me emphasize that in most situations you won't know whether a PSO program has found an optimal solution.
In this article, I'll explain in detail the PSO algorithm and walk you line by line through the program shown running in Figure 1. I coded the demo program in C#, but you should be able to easily adapt the code presented here to another language, such as Visual Basic .NET or Python. The complete source code for the program presented in this article is available at msdn.microsoft.com/magazine/msdnmag0811. This article assumes you have intermediate coding skills with a modern procedural language but does not assume you know anything about PSO or related AI techniques.
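Before walking through the C# code, it may help to see the whole algorithm end to end. The following is a compact Python sketch of the same PSO (this is not the article's demo program; the function and parameter names are mine), using the inertia weight and cognitive/social weights discussed later in the article:

```python
import random

def pso_minimize(objective, dim, min_x, max_x,
                 num_particles=10, num_iterations=1000, seed=0):
    """Minimize objective over [min_x, max_x]^dim with a basic PSO."""
    rng = random.Random(seed)
    w, c1, c2 = 0.729, 1.49445, 1.49445   # inertia, cognitive, social weights
    max_v = abs(max_x - min_x)            # velocity component limit

    # Each particle: position, velocity, best-known position and fitness.
    positions = [[rng.uniform(min_x, max_x) for _ in range(dim)]
                 for _ in range(num_particles)]
    velocities = [[rng.uniform(-max_v, max_v) for _ in range(dim)]
                  for _ in range(num_particles)]
    best_positions = [p[:] for p in positions]
    best_fitnesses = [objective(p) for p in positions]

    g = min(range(num_particles), key=lambda i: best_fitnesses[i])
    global_best_pos = best_positions[g][:]
    global_best_fit = best_fitnesses[g]

    for _ in range(num_iterations):
        for i in range(num_particles):
            for j in range(dim):
                r1, r2 = rng.random(), rng.random()
                v = (w * velocities[i][j]
                     + c1 * r1 * (best_positions[i][j] - positions[i][j])
                     + c2 * r2 * (global_best_pos[j] - positions[i][j]))
                velocities[i][j] = max(-max_v, min(max_v, v))  # clamp velocity
                x = positions[i][j] + velocities[i][j]
                positions[i][j] = max(min_x, min(max_x, x))    # clamp position
            fit = objective(positions[i])
            if fit < best_fitnesses[i]:          # new personal best
                best_fitnesses[i] = fit
                best_positions[i] = positions[i][:]
            if fit < global_best_fit:            # new global best
                global_best_fit = fit
                global_best_pos = positions[i][:]
    return global_best_pos, global_best_fit

best_pos, best_fit = pso_minimize(
    lambda x: 3.0 + x[0] * x[0] + x[1] * x[1], dim=2,
    min_x=-100.0, max_x=100.0)
print(best_pos, best_fit)
```

On the demo function, 10 particles and 1,000 iterations reliably land very close to x0 = 0, x1 = 0 with fitness approaching 3.0.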
Particles
When using PSO, a possible solution to the numeric optimization problem under investigation is represented by the position of a particle. Additionally, each particle has a current velocity, which represents a magnitude and direction toward a new, presumably better, solution/position. A particle also has a measure of the quality of its current position, the particle's best known position (that is, a previous position with the best known quality), and the quality of the best known position. I coded a Particle class as shown in Figure 2.
Figure 2 Particle Definition
public class Particle
{
  public double[] position;
  public double fitness;
  public double[] velocity;
  public double[] bestPosition;
  public double bestFitness;

  public Particle(double[] position, double fitness,
    double[] velocity, double[] bestPosition, double bestFitness)
  {
    this.position = new double[position.Length];
    position.CopyTo(this.position, 0);
    this.fitness = fitness;
    this.velocity = new double[velocity.Length];
    velocity.CopyTo(this.velocity, 0);
    this.bestPosition = new double[bestPosition.Length];
    bestPosition.CopyTo(this.bestPosition, 0);
    this.bestFitness = bestFitness;
  }

  public override string ToString()
  {
    string s = "";
    s += "==========================\n";
    s += "Position: ";
    for (int i = 0; i < this.position.Length; ++i)
      s += this.position[i].ToString("F2") + " ";
    s += "\n";
    s += "Fitness = " + this.fitness.ToString("F4") + "\n";
    s += "Velocity: ";
    for (int i = 0; i < this.velocity.Length; ++i)
      s += this.velocity[i].ToString("F2") + " ";
    s += "\n";
    s += "Best Position: ";
    for (int i = 0; i < this.bestPosition.Length; ++i)
      s += this.bestPosition[i].ToString("F2") + " ";
    s += "\n";
    s += "Best Fitness = " + this.bestFitness.ToString("F4") + "\n";
    s += "==========================\n";
    return s;
  }
} // class Particle
The Particle class has five public data members: position, fitness, velocity, bestPosition and bestFitness. When using PSO, for simplicity I prefer using public scope fields, but you may want to use private fields along with get and set properties instead. The field named position is an array of type double and represents a possible solution to the optimization problem under investigation. Although PSO can be used to solve non-numeric problems, it's generally best-suited for solving numeric problems. Field fitness is a measure of how good the solution represented by position is. For minimization problems, which are the most common types of problems solved by PSO, smaller values of the fitness field are better than larger values; for maximization problems, larger values of fitness are better.
6ield "elocity is an array of type double and represents the information necessary
to update a particle,s current position>solution I,ll explain particle "elocity in
detail shortly !he fourth and fifth fields in the Particle type are bestPosition and
best6itness !hese fields hold the best position>solution found by the Particle
obCect and the associated fitness of the best position
The Particle class has a single constructor that accepts five parameters that correspond to each of the Particle's five data fields. The constructor simply copies each parameter value to its corresponding data field. Because all five Particle fields have public scope, I could have omitted the constructor and then just used field assignment statements in the PSO code, but I think the constructor leads to cleaner code.
The Particle class definition contains a ToString method that echoes the values of the five data fields. As with the constructor, because I declared the position, fitness, velocity, bestPosition and bestFitness fields with public scope, I don't really need a ToString method to view a Particle object's values, but including it simplifies viewing the fields and it's useful for WriteLine-style debugging during development. In the ToString method I use string concatenation rather than the more efficient StringBuilder class to make it easier for you to refactor my code to a non-Microsoft .NET Framework-based language if you wish.
The PSO Algorithm
Although the heart of the PSO algorithm is rather simple, you'll need to understand it thoroughly in order to modify the code in this article to meet your own needs. PSO is an iterative process. On each iteration in the PSO main processing loop, each particle's current velocity is first updated based on the particle's current velocity, the particle's local information and global swarm information. Then, each particle's position is updated using the particle's new velocity. In math terms the two update equations are:

v(t+1) = (w * v(t)) + (c1 * r1 * (p(t) - x(t))) + (c2 * r2 * (g(t) - x(t)))
x(t+1) = x(t) + v(t+1)
Bear with me here; the position update process is actually much simpler than these equations suggest. The first equation updates a particle's velocity. The term v(t+1) means the velocity at time t+1. Notice that v is in bold, indicating that velocity is a vector value and has multiple components such as [1.55, -0.33], rather than being a single scalar value. The new velocity depends on three terms. The first term is w * v(t). The w factor is called the inertia weight and is simply a constant like 0.73 (more on this shortly); v(t) is the current velocity at time t. The second term is c1 * r1 * (p(t) - x(t)). The c1 factor is a constant called the cognitive (or personal or local) weight. The r1 factor is a random variable in the range [0, 1), which is greater than or equal to 0 and strictly less than 1. The p(t) vector value is the particle's best position found so far. The x(t) vector value is the particle's current position. The third term in the velocity update equation is c2 * r2 * (g(t) - x(t)). The c2 factor is a constant called the social (or global) weight. The r2 factor is a random variable in the range [0, 1). The g(t) vector value is the best known position found by any particle in the swarm so far. Once the new velocity, v(t+1), has been determined, it's used to compute the new particle position x(t+1).
A concrete example will help make the update process clear. Suppose you're trying to minimize 3 + x0^2 + x1^2 as described in the introductory section of this article. The function is plotted in Figure 3. The base of the containing cube in Figure 3 represents x0 and x1 values and the vertical axis represents the function value. Note that the plot surface is minimized with f = 3 when x0 = 0 and x1 = 0.
Figure 3 Plot of f = 3 + x0^2 + x1^2
Let's say that a particle's current position, x(t), is [x0, x1] = [3.0, 4.0], and that the particle's current velocity, v(t), is [-1.0, -1.5]. Let's also assume that constant w = 0.7, constant c1 = 1.4, constant c2 = 1.4, and that random numbers r1 and r2 are 0.5 and 0.6 respectively. Finally, suppose the particle's best known position is p(t) = [2.5, 3.6] and the global best known position by any particle in the swarm is g(t) = [2.3, 3.4]. Then the new velocity and position values are:

v(t+1) = (0.7 * [-1.0, -1.5]) +
  (1.4 * 0.5 * ([2.5, 3.6] - [3.0, 4.0])) +
  (1.4 * 0.6 * ([2.3, 3.4] - [3.0, 4.0]))
= [-0.70, -1.05] + [-0.35, -0.28] + [-0.59, -0.50]
= [-1.64, -1.83]

x(t+1) = [3.0, 4.0] + [-1.64, -1.83]
= [1.36, 2.17]
Recall that the optimal solution is [x0, x1] = [0.0, 0.0]. Observe that the update process has improved the old position/solution from [3.0, 4.0] to [1.36, 2.17]. If you mull over the update process a bit, you'll see that the new velocity is the old velocity (times a weight) plus a factor that depends on a particle's best known position, plus another factor that depends on the best known position from all particles in the swarm. Therefore, a particle's new position tends to move toward a better position based on the particle's best known position and the best known position of all particles. The graph in Figure 4 shows the movement of one of the particles during the first eight iterations of the demo PSO run. The particle starts at x0 = 100.0, x1 = 80.4 and tends to move toward the optimal solution of x0 = 0, x1 = 0. The spiral motion is typical of PSO.
Figure 4 Particle Motion Toward Optimal Solution
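The arithmetic in the worked example above can be checked with a few lines of Python (a direct transcription of the two update equations, using the example's constants):

```python
# One velocity/position update for a single particle (worked example values).
w, c1, c2 = 0.7, 1.4, 1.4
r1, r2 = 0.5, 0.6
x = [3.0, 4.0]      # current position x(t)
v = [-1.0, -1.5]    # current velocity v(t)
p = [2.5, 3.6]      # particle's best known position p(t)
g = [2.3, 3.4]      # swarm's best known position g(t)

new_v = [w * v[j] + c1 * r1 * (p[j] - x[j]) + c2 * r2 * (g[j] - x[j])
         for j in range(2)]
new_x = [x[j] + new_v[j] for j in range(2)]

print([round(c, 2) for c in new_v])  # [-1.64, -1.83]
print([round(c, 2) for c in new_x])  # [1.36, 2.17]
```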
Implementing the PSO Algorithm
Figure 5 presents the overall structure of the PSO program that produced the program run shown in Figure 1. I used Visual Studio to create a C# console application project named ParticleSwarmOptimization. PSO code is fairly basic, so any version of the .NET Framework (1.1 through 4) will work well. I removed all Visual Studio-generated using statements except for the reference to the core System namespace. I declared a class-scope object of type Random to generate the cognitive and social random numbers described in the previous section. I also used the Random object to generate random initial velocities and positions for each Particle object. Inside the Main method I wrap all my code in a single, high-level try statement to catch any exceptions.
Figure 5 PSO Program Structure
using System;
namespace ParticleSwarmOptimization
{
  class Program
  {
    static Random ran = null;
    static void Main(string[] args)
    {
      try
      {
        Console.WriteLine("\nBegin PSO demo\n");
        ran = new Random(0);
        int numberParticles = 10;
        int numberIterations = 1000;
        int iteration = 0;
        int Dim = 2; // dimensions
        double minX = -100.0;
        double maxX = 100.0;
        Particle[] swarm = new Particle[numberParticles];
        double[] bestGlobalPosition = new double[Dim];
        double bestGlobalFitness = double.MaxValue;
        double minV = -1.0 * maxX;
        double maxV = maxX;
        // Initialize all Particle objects
        double w = 0.729; // inertia weight
        double c1 = 1.49445; // cognitive weight
        double c2 = 1.49445; // social weight
        double r1, r2; // randomizations
        // Main processing loop
        // Display results
        Console.WriteLine("\nEnd PSO demo\n");
      }
      catch (Exception ex)
      {
        Console.WriteLine("Fatal error: " + ex.Message);
      }
    } // Main()

    static double ObjectiveFunction(double[] x)
    {
      return 3.0 + (x[0] * x[0]) + (x[1] * x[1]);
    }
  } // class Program

  public class Particle
  {
    // Definition here
  }
} // ns
After instantiating the Random object with an arbitrary seed value of 0, I initialize some key PSO variables:
int numberParticles = 10;
int numberIterations = 1000;
int iteration = 0;
int Dim = 2;
double minX = -100.0;
double maxX = 100.0;
I use 10 Particle objects. As a rule of thumb, more Particle objects are better than fewer, but more can significantly slow program performance. I set the number of main processing loop iterations to 1,000. The number of iterations you'll want to use will depend on the complexity of the problem you're trying to optimize and the processing power of your host machine. Typically, PSO programs use a value between 1,000 and 100,000. The variable named iteration is a counter to keep track of the number of main loop iterations. The Dim variable holds the number of x values in a solution/position. Because my example problem needs to find the values of x0 and x1 that minimize 3 + x0^2 + x1^2, I set Dim to 2. As I mentioned earlier, in most PSO situations you'll want to limit the x values that make up the position/solution vector to some problem-dependent range. Without some limits, you're effectively searching from double.MinValue to double.MaxValue. Here I arbitrarily limit x0 and x1 to [-100.0, +100.0].
Next, I prepare to instantiate the particle swarm:
Particle[] swarm = new Particle[numberParticles];
double[] bestGlobalPosition = new double[Dim];
double bestGlobalFitness = double.MaxValue;
double minV = -1.0 * maxX;
double maxV = maxX;
I create an array of Particle objects named swarm. I also set up an array to hold the global best known position determined by any Particle—denoted by g(t) in the algorithm—and the corresponding fitness of that position array. I set constraints for the maximum and minimum values for a new velocity. The idea here is that because a new velocity determines a particle's new position, I don't want the magnitude of any of the velocity components to be huge.
The code to initialize the swarm is as follows:
for (int i = 0; i < swarm.Length; ++i)
{
  double[] randomPosition = new double[Dim];
  for (int j = 0; j < randomPosition.Length; ++j) {
    double lo = minX;
    double hi = maxX;
    randomPosition[j] = (hi - lo) * ran.NextDouble() + lo;
  }
...
I iterate through each Particle object in the array named swarm. I declare an array of size Dim to hold a random position for the current Particle. Then for each x-value of the position I generate a random value between minX (-100.0) and maxX (+100.0). In many realistic PSO problems, the range for each x-value will be different, so you'll have to add code to deal with each x-value in the position array separately.
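One simple way to handle per-dimension ranges (a hypothetical extension; the demo uses a single minX/maxX pair) is to replace the scalar bounds with parallel arrays, shown here as a Python sketch:

```python
import random

rng = random.Random(0)

# Hypothetical per-dimension bounds: x0 in [-100, 100], x1 in [0, 50].
min_x = [-100.0, 0.0]
max_x = [100.0, 50.0]
dim = len(min_x)

# Same (hi - lo) * rand + lo formula as the demo, applied per dimension.
random_position = [(max_x[j] - min_x[j]) * rng.random() + min_x[j]
                   for j in range(dim)]
print(random_position)
```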
Now I continue the initialization process:
double fitness = ObjectiveFunction(randomPosition);
double[] randomVelocity = new double[Dim];
for (int j = 0; j < randomVelocity.Length; ++j) {
  double lo = -1.0 * Math.Abs(maxX - minX);
  double hi = Math.Abs(maxX - minX);
  randomVelocity[j] = (hi - lo) * ran.NextDouble() + lo;
}
swarm[i] = new Particle(randomPosition, fitness, randomVelocity,
  randomPosition, fitness);
...
First I compute the quality of the current random position array by passing that array to the method ObjectiveFunction. If you refer back to Figure 5, you'll see that the ObjectiveFunction method simply computes the value of the function I'm trying to minimize, namely 3 + x0^2 + x1^2. Next I compute a random velocity for the current Particle object. After I have a random position, the fitness of the random position and a random velocity, I pass those values to the Particle constructor. Recall that the fourth and fifth parameters are the particle's best known position and its associated fitness, so when initializing a Particle the initial random position and fitness are the best known values.
The swarm initialization code finishes with:
...
if (swarm[i].fitness < bestGlobalFitness) {
  bestGlobalFitness = swarm[i].fitness;
  swarm[i].position.CopyTo(bestGlobalPosition, 0);
}
} // End initialization loop
I check to see if the fitness of the current Particle is the best (smallest, in the case of a minimization problem) fitness found so far. If so, I update array bestGlobalPosition and the corresponding variable bestGlobalFitness.
Next, I prepare to enter the main PSO processing loop:
double w = 0.729; // inertia weight
double c1 = 1.49445; // cognitive weight
double c2 = 1.49445; // social weight
double r1, r2; // randomizers
I set the "alue for w* the inertia weight* to -81$ !his "alue was recommended by
a research paper that in"estigated the effects of "arious PSO parameter "alues on a
set of benchmar+ minimization problems Instead of a single* constant "alue for
w* an alternati"e approach is to "ary the "alue of w 6or example* if your PSO
algorithm is set to iterate #-*--- times* you could initially set w to -$- and
gradually decrease w to -9- by reducing w by -#- after e"ery 1*--- iterations
!he idea of a dynamic w is that early in the algorithm you want to explore larger
changes in position* but later on you want smaller particle mo"ements I set the
"alues for both c# and c1* the cogniti"e and social weights* to #9$99% Again* this
"alue was recommended by a research study If you set the "alue of c# to be larger
than the "alue of c1* you place more weight on a particle,s best +nown position
than on the swarm,s global best +nown position* and "ice "ersa !he random
"ariables r# and r1 add a random component to the PSO algorithm and help pre"ent
the algorithm from getting stuc+ at a non4optimal local minimum or maximum
solution
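That stepped schedule for a dynamic w can be expressed as a small helper (a Python sketch; the helper name and the exact stepping are my own reading of the description above, and the demo itself uses a constant w):

```python
def inertia_weight(iteration, block_size=2000,
                   w_start=0.90, w_end=0.40, step=0.10):
    """Reduce w by `step` after every `block_size` iterations,
    never dropping below w_end."""
    w = w_start - step * (iteration // block_size)
    return max(w, w_end)

print(inertia_weight(0))     # 0.9
print(inertia_weight(9999))  # late in the run, near the floor
```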
Next, I begin the main PSO processing loop:
for (int i = 0; i < swarm.Length; ++i)
{
  Particle currP = swarm[i];
  for (int j = 0; j < currP.velocity.Length; ++j)
  {
    r1 = ran.NextDouble();
    r2 = ran.NextDouble();
    newVelocity[j] = (w * currP.velocity[j]) +
      (c1 * r1 * (currP.bestPosition[j] - currP.position[j])) +
      (c2 * r2 * (bestGlobalPosition[j] - currP.position[j]));
...
I iterate through each Particle object in the swarm array using i as an index variable. I create a reference to the current Particle object named currP to simplify my code, but I could have used swarm[i] directly. As explained in the previous section, the first step is to update each particle's velocity vector. For the current Particle object, I walk through each one of the values in the object's velocity array, generate random variables r1 and r2, and then update each velocity component as explained in the previous section.
After I compute a new velocity component for the current Particle object, I check to see if that component is between the minimum and maximum values for a velocity component:
if (newVelocity[j] < minV)
  newVelocity[j] = minV;
else if (newVelocity[j] > maxV)
  newVelocity[j] = maxV;
} // each j
newVelocity.CopyTo(currP.velocity, 0);
...
If the component is out of range, I bring it back in range. The idea here is that I don't want extreme values for the velocity component because extreme values could cause my new position to spin out of bounds. After all velocity components have been computed, I update the current Particle object's velocity array using the handy .NET CopyTo method.
Once the "elocity of the current Particle has been determined* I can use the new
"elocity to compute and update the current Particle,s positionA
for (int j = 0; j < currP.position.Length; ++j)
{
  newPosition[j] = currP.position[j] + newVelocity[j];
  if (newPosition[j] < minX)
    newPosition[j] = minX;
  else if (newPosition[j] > maxX)
    newPosition[j] = maxX;
}
newPosition.CopyTo(currP.position, 0);
...
Again I perform a range check, this time on each of the current particle's new position components. In a sense, this is a redundant check because I've already constrained the value of each velocity component, but in my opinion the extra check is warranted here.
3ow that I ha"e the current Particle obCect,s new position* I compute the new
fitness "alue and update the obCect,s fitness fieldA
newFitness = ObjectiveFunction(newPosition);
currP.fitness = newFitness;
if (newFitness < currP.bestFitness) {
  newPosition.CopyTo(currP.bestPosition, 0);
  currP.bestFitness = newFitness;
}
if (newFitness < bestGlobalFitness) {
  newPosition.CopyTo(bestGlobalPosition, 0);
  bestGlobalFitness = newFitness;
}
} // each Particle
} // main PSO loop
...
After updating the current particle, I check to see if the new position is the best known position of the particle; I also check to see if the new position is the best global swarm position. Notice that logically, there can be a new global best position only if there's a new local best position, so I could have nested the global best check inside the check for a local best position.
At this point my main PSO algorithm loop is finished and I can display my results:
"onsole.6riteine(')nProcessin! co1plete');
"onsole.6rite('Final best fitness = ');
"onsole.6riteine(best=lobalFitness.#o&trin!('F-'));
"onsole.6riteine('/est position0solution*');
for (int i = $; i + best=lobalPosition.en!th; ((i){
"onsole.6rite('<' ( i ( ' = ' );
"onsole.6riteine(best=lobalPosition[i].#o&trin!('F-') ( '
');
%
"onsole.6riteine('');
"onsole.6riteine(')nCnd P&2 de1onstration)n');
%
catch (C<ception e<)
{
"onsole.6riteine('Fatal error* ' ( e<.5essa!e);
%
% 00 5ain()
Extending and Modifying
3ow that you,"e seen how to write a basic PSO* let,s discuss how you can extend
and modify the code I,"e presented !he example problem I sol"ed is artificial in
the sense that there,s no need to use PSO to find an approximate solution because
the problem can be sol"ed exactly @here PSO is really useful is when the
numeric problem under in"estigation is extremely difficult or impossible to sol"e
using standard techniques :onsider the following problem Pou want to predict
the score of an (American) football game between teams A and = Pou ha"e
historical data consisting of the pre"ious results of A and = against other teams
Pou mathematically model the historical rating of a team N in such a way that if
the team wins a game* the team,s rating goes up by some fixed "alue (say #7
points) plus another "alue that depends on the difference between the teams,
ratings (say --9 times the difference if the team N rating is less than the opposing
team,s) 6urthermore* you model the predicted margin of "ictory of a team as
some function of the difference in team ratingsB for example* if team N is rated
#*81- and team P is rated #*71-* your model predicts a margin of "ictory for N of
/% points In short* you ha"e a large amount of data and need to determine se"eral
numeric "alues (such as the #7 and the --9) that minimize your prediction errors
!his data4dri"en parameter estimation is the type of problem that,s right up PSO,s
alley
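To make this concrete, the error function handed to PSO might look something like the following Python sketch. The games data, the single scale parameter and the model form are all made up for illustration; a real model would include the rating-update parameters (the 16 and the 0.04) as well:

```python
# Hypothetical historical results: (team rating, opponent rating, actual margin).
games = [(1720.0, 1620.0, 4.0),
         (1500.0, 1480.0, 1.0),
         (1650.0, 1600.0, 3.0)]

def prediction_error(params):
    """Sum of squared prediction errors for a margin-of-victory model
    that scales the rating difference by params[0]."""
    scale = params[0]
    err = 0.0
    for rating, opp_rating, actual_margin in games:
        predicted_margin = scale * (rating - opp_rating)
        err += (predicted_margin - actual_margin) ** 2
    return err

# A PSO would then minimize prediction_error over, say, scale in [0, 1],
# exactly as the demo minimized its toy objective function.
print(prediction_error([0.035]))
```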
PSO is just one of several AI techniques based on the behavior of natural systems. Perhaps the technique closest to PSO algorithms is Genetic Algorithms (GAs). Both techniques are well-suited to difficult numeric problems. GAs have been extensively studied for decades. An advantage of PSOs over GAs is that PSO algorithms are significantly simpler to implement than GAs. It's not clear at this time whether PSOs are more or less effective than GAs, or roughly equal to them.
!he "ersion of PSO I,"e presented here can be modified in many ways One
particularly interesting modification is to use se"eral sub4swarms of particles
rather than one global swarm @ith such a design* each particle belongs to a sub4
swarm and the new "elocity of a particle could depend on four terms rather than
threeA the old "elocity* the particle,s best +nown position* the best +nown position
of any particle in the sub4swarm* and the best +nown position of any particle !he
idea of this sub4swarm design is to reduce the chances of the PSO algorithm
getting stuc+ in a non4optimal solution !o the best of my +nowledge such a
design has not yet been thoroughly in"estigated
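A four-term velocity update for that sub-swarm design might look like this Python sketch (the third weight c3, and the reuse of 1.49445 for all three attraction weights, are assumptions of mine, not values from a published study):

```python
import random

rng = random.Random(0)

def sub_swarm_velocity(v, x, p_best, s_best, g_best,
                       w=0.729, c1=1.49445, c2=1.49445, c3=1.49445):
    """Four-term velocity update: inertia, pull toward the particle's
    best (p_best), the sub-swarm's best (s_best) and the global best
    (g_best). The weight values are assumed, not prescribed."""
    new_v = []
    for j in range(len(v)):
        r1, r2, r3 = rng.random(), rng.random(), rng.random()
        new_v.append(w * v[j]
                     + c1 * r1 * (p_best[j] - x[j])
                     + c2 * r2 * (s_best[j] - x[j])
                     + c3 * r3 * (g_best[j] - x[j]))
    return new_v
```

When the three best-known positions coincide with the current position, the update reduces to pure inertia (w * v), just as in the three-term version.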
$r. ames !c"affre# works for Volt Information Sciences Inc., where he
manages technical training for software engineers working at the Microsoft
Redmond, Wash., campus. e!s worked on se"eral Microsoft products, including
Internet #xplorer and MS$ Search. %r. Mc&affre' is the author of (.$#) )est
*utomation Recipes+ ,*press, -00./, and can 0e reached
at 1ammc2microsoft.com.
Thanks to the following Microsoft technical experts for reviewing this article: Paul Koch, Dan Liebling, Anne Loomis Thompson and Shane Williams.