
Natural Science, Vol.1, No.2, 151-155 (2009)
http://dx.doi.org/10.4236/ns.2009.12019

A Modified Particle Swarm Optimization Algorithm

Ai-Qin Mu 1,2, De-Xin Cao 1, Xiao-Hua Wang 2

1 College of Science, China University of Mining & Technology, XuZhou, China; [email protected], [email protected]
2 Foundation Departments, Xuzhou Air Force Academy, XuZhou, China

Received 17 August 2009; revised 28 August 2009; accepted 30 August 2009.

ABSTRACT

Particle Swarm Optimization (PSO) is an optimization algorithm that is widely applied in many fields. However, the original PSO is prone to premature convergence to a local optimum. Using the idea of the simulated annealing algorithm, we propose a modified algorithm in which the best particle of each iteration keeps evolving continuously, while the worst particle is assigned a new value to increase its disturbance. Tests on three classic benchmark functions show that the modified PSO algorithm has better convergence and global searching performance than the original PSO.

Keywords: PSO; Simulated Annealing Algorithm; Global Searching

1. INTRODUCTION

The PSO algorithm is an intelligent optimization algorithm imitating the behavior of bird swarms, proposed by psychologist Kennedy and Dr. Eberhart in 1995 [1]. Compared with other optimization algorithms, PSO is simple and tends to perform well; it is applied in many fields such as function optimization, neural network training, and fuzzy system control.

In the PSO algorithm, each individual is called a "particle", which represents a potential solution. The algorithm reaches the best solution through the variation of the particles in the search space. The particles search the solution space by following the best particle, continually changing their positions and fitness; the flying direction and velocity are determined by the objective function.

To improve the convergence performance of PSO, Shi and Eberhart [2] introduced the inertia factor w to control the impact of a particle's former velocity on its current velocity. The PSO algorithm has better global searching ability when w is relatively large; conversely, its local searching ability becomes better when w is smaller. The PSO algorithm with an inertia weight factor is now called standard PSO.

However, in the PSO algorithm, particles can lose the ability to explore new domains while searching the solution space; that is, the algorithm can become trapped in a local optimum, causing the premature-convergence phenomenon. It is therefore very important to guarantee that PSO converges to the global optimal solution, and many modified PSO algorithms have been studied in the last ten years. For example, the linearly decreasing inertia weight technique was studied in [3].

To overcome premature convergence, many modified algorithms based on the simulated annealing algorithm have been proposed. For example, the new location of all particles is selected according to a probability [4,5]; PSO and simulated annealing are iterated alternately [6,7]; Gao Ying and Xie Shengli [8] add hybridization and Gaussian mutation to the alternating iterations; in [9] the particles are divided into two groups, PSO and simulated annealing are applied to them respectively, and the two algorithms are then mixed. This paper proposes a new modified PSO algorithm. The arrangement of this paper is as follows. In Section 2, the principle of standard PSO is introduced. In Section 3, the modified PSO algorithm is described. In Section 4, three benchmark functions are used to evaluate the performance of the algorithm, and the conclusions are given in Section 5.

2. STANDARD PSO ALGORITHM

Assume X_i = (x_{i1}, x_{i2}, ..., x_{iD}) is the position of the i-th particle in D dimensions and V_i = (v_{i1}, v_{i2}, ..., v_{iD}) is its velocity, which represents its direction of searching. During the iteration process, each particle keeps the best position pbest found by itself; besides, it also knows the best position gbest found by the group, and changes its velocity according to the two best positions.

Copyright © 2009 SciRes. OPEN ACCESS


152 A. Q. Mu et al. / Natural Science 1 (2009) 151-155

The standard update formulas of PSO are as follows:

v_{id}^{k+1} = w v_{id}^{k} + c_1 r_1 (p_{id} - x_{id}^{k}) + c_2 r_2 (p_{gd} - x_{id}^{k})    (1)

x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}    (2)

in which: i = 1, 2, ..., N, where N is the population size of the group; d = 1, 2, ..., D; k is the current iteration number; r_1, r_2 are random values in [0,1], used to keep the diversity of the group; c_1, c_2 are the learning coefficients, also called acceleration coefficients; v_{id}^{k} is the d-th component of the velocity of particle i in the k-th iteration; x_{id}^{k} is the d-th component of the position of particle i in the k-th iteration; p_{id} is the d-th component of the best position particle i has ever found; p_{gd} is the d-th component of the best position the group has ever found.

The procedure of standard PSO is as follows:
1) Initialize the position and velocity of each particle;
2) Calculate the fitness value of each particle;
3) For each particle, compare its current fitness value with the fitness value of its pbest; if the current value is better, renew pbest with the current position and update its fitness value simultaneously;
4) Determine the best particle of the group by fitness value; if its fitness value is better than that of gbest, update gbest and its fitness value with this position;
5) Check the stopping criterion; if it is satisfied, quit the iteration; otherwise, return to step 2).

3. THE MODIFIED PSO

In standard PSO, because every particle knows the best position the group has found, only one particle is needed to locate the global best position; the other particles should search more domains to make sure that the best position found is the global optimum rather than a local one. Based on these ideas, we propose some modifications to the standard PSO algorithm. Firstly, in each iteration the modified algorithm chooses the particle with the maximum (worst) fitness and reinitializes its position randomly to increase the disturbance of the particles. By this means, the particles can search more domains. Secondly, referring to the ideas of the simulated annealing algorithm and of using neighborhoods to achieve the guaranteed-convergence PSO of [10], the fitness of the particle that had the best value in the last iteration is expected to keep decreasing, and a worse fitness is accepted only within a limited extent ε. We calculate the change Δf between the fitness values of the two positions and accept the new position if Δf is smaller than ε; otherwise, a new position is assigned to the particle randomly from its neighborhood with radius r.

The procedure of the modified PSO is as follows:
1) Initialize the position and velocity of each particle;
2) Calculate the fitness of each particle;
3) Reinitialize the position of the particle with the biggest (worst) fitness value; evaluate whether the new position of the particle with the smallest (best) fitness value is acceptable: if yes, update its position, otherwise assign it a new position randomly in its neighborhood with radius r; then renew the positions and velocities of the other particles according to Formulas (1) and (2);
4) For each particle, compare its current fitness value with the fitness of its pbest; if the current value is better, update pbest and its fitness value;
5) Determine the best particle of the group by fitness value; if its current fitness value is better than that of gbest, update gbest and its fitness value with this position;
6) Check the stopping criterion; if it is satisfied, quit the iteration; otherwise, return to step 3).

4. NUMERICAL SIMULATION

To investigate the convergence and searching performance of the modified PSO, three benchmark functions are used in this section to compare it with standard PSO. The basic information of the three functions is given in Table 1.

Benchmark function 1 is a non-linear single-peak function. It is relatively simple and is mainly used to test the accuracy of searching for the optimum.
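The update rules (1)-(2) and the modified procedure above can be sketched in code. This is a minimal sketch, not the authors' implementation: minimization is assumed, the "biggest fitness" particle is treated as the worst, the neighborhood resampling is taken as uniform, the neighborhood radius follows the dynamic choice r = w described later in this section, and the helper names (modified_pso, rand_pos) are ours.

```python
import random

def modified_pso(f, dim, bounds, n=40, max_iter=300, c1=2.0, c2=2.0,
                 w_max=0.9, w_min=0.05, eps=0.5):
    """Minimization sketch of the modified PSO procedure (steps 1-6)."""
    lo, hi = bounds
    rand_pos = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    X = [rand_pos() for _ in range(n)]                 # step 1: positions
    V = [[0.0] * dim for _ in range(n)]                # step 1: velocities
    pbest = [x[:] for x in X]
    pbest_val = [f(x) for x in X]
    g = min(range(n), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]

    for k in range(max_iter):
        w = w_max - (w_max - w_min) * k / max_iter     # linearly decreasing inertia
        r = w                                          # dynamic neighborhood radius (assumed)
        fit = [f(x) for x in X]                        # step 2: fitness of each particle
        worst = max(range(n), key=lambda i: fit[i])    # biggest fitness = worst
        best = min(range(n), key=lambda i: fit[i])     # smallest fitness = best
        for i in range(n):
            # Eqs. (1)-(2): velocity update and candidate position
            V[i] = [w * V[i][d]
                    + c1 * random.random() * (pbest[i][d] - X[i][d])
                    + c2 * random.random() * (gbest[d] - X[i][d])
                    for d in range(dim)]
            cand = [X[i][d] + V[i][d] for d in range(dim)]
            if i == worst:
                X[i] = rand_pos()                      # step 3: reinitialize worst particle
            elif i == best and f(cand) - fit[i] >= eps:
                # step 3: SA-style acceptance failed -> resample in neighborhood
                X[i] = [X[i][d] + random.uniform(-r, r) for d in range(dim)]
            else:
                X[i] = cand
        for i in range(n):                             # steps 4-5: update pbest and gbest
            v = f(X[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = X[i][:], v
    return gbest, gbest_val                            # step 6: stop after max_iter
```

On a simple objective such as the sphere function f(x) = Σ x_i^2, the returned best value shrinks toward 0 as the iterations proceed.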

Table 1. Benchmark functions used in the experiment.

expression                                              minimum point    optimal solution
F_1 = (x_1 - x_2)^2 + ((x_1 + x_2 - 10)/3)^2            (5, 5)           0
F_2 = 100(x_2 - x_1^2)^2 + (1 - x_1)^2                  (1, 1)           0
F_3 = sum_{i=1}^{2} [x_i^2 - 10 cos(2 pi x_i) + 10]     (0, 0)           0


Table 2. Results of the experiment.

Benchmark    Algorithm       Total number     Mean of optimal    Minimum of optimal    Minimum times
function                     of iterations    solution           solution              of iteration
F1           standard PSO    20795            4.3708e-6          6.9446e-9             130
             modified PSO    10595            2.1498e-6          1.26e-9               68
F2           standard PSO    23836            1.7674e-4          1.9403e-8             205
             modified PSO    21989            8.8366e-6          2.52012e-8            350
F3           standard PSO    24990            0.0667             8.9467e-8             237
             modified PSO    29611            7.7294e-6          9.5000e-9             853
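The three benchmark functions of Table 1 can be written directly. A sketch in Python; the function names f1, f2, f3 are ours:

```python
import math

def f1(x):
    # non-linear single-peak quadratic; minimum 0 at (5, 5)
    return (x[0] - x[1]) ** 2 + ((x[0] + x[1] - 10) / 3) ** 2

def f2(x):
    # Rosenbrock valley; minimum 0 at (1, 1)
    return 100 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

def f3(x):
    # 2-D Rastrigin: sphere function plus a cosine ripple; minimum 0 at the origin
    return sum(xi ** 2 - 10 * math.cos(2 * math.pi * xi) + 10 for xi in x)
```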

Benchmark function 2 is a typical pathological quadratic function which is difficult to minimize. There is a narrow valley between its global optimum and the reachable local optimum, so the chance of finding the global optimal point is small. It is typically used to evaluate the performance of an optimization implementation.

Based on the sphere function, benchmark function 3 uses a cosine function to produce a large number of local minima; it is a typical complex multi-peak function with massive numbers of local optimal points. This function very easily drives an algorithm into a local optimum instead of the global optimal solution.

In the experiment, the population of the particle group is 40; c_1 and c_2 are set to 2; the maximum number of iterations is 10000. A run is considered successful if the difference between the best solution obtained by the optimization algorithm and the true solution is less than 1e-6. In both standard PSO and modified PSO, the inertia weight decreases linearly, determined by the following equation:

w = w_max - ((w_max - w_min) / iter_max) * k

where w_max, the starting inertia weight, is set to 0.9, and w_min, the final inertia weight, is set to 0.05; iter_max is the maximum number of iterations and k is the current iteration number. To keep the experiment fair, both algorithms use the same randomly generated original positions and velocities.

The parameter ε, which represents the acceptable extent of worsening of the fitness value, is set to 0.5 in the modified PSO. To use fewer parameters, a dynamic neighborhood is used and its radius is set to w. Each experiment is executed 30 times, and the total iteration times and the mean optimal solution are taken for comparison. Table 2 presents the results of the experiment.

From Table 2, it is easy to see that the modified PSO takes half the time of standard PSO to achieve the best solution of function 1. Although the modified PSO shows no remarkable improvement in convergence rate on function 2, its mean optimal solution is better than that of standard PSO, which implies that the modified PSO has better global searching performance; this conclusion is confirmed on function 3. Although the total number of iterations of standard PSO is less than that of the modified PSO there, its mean optimal solution is 0.0667, which indicates that the rapid convergence of standard PSO is built on running into a local optimum. On the contrary, the modified PSO can jump out of local optima successfully, which greatly enhances the stability of the algorithm. Observing the results of the modified PSO in detail, the worst optimal solution over all runs is 8.5442e-5, which indicates that the success rate is 100%. The details of the 30 runs are not listed here.

To observe the movement of the particles on benchmark function 3, the standard PSO and the modified PSO are run again with the same randomly initialized position and velocity for each particle. As a result, standard PSO iterated 800 times to reach an optimal solution of 1.3781e-6, and the modified PSO iterated 1216 times to reach an optimal solution of 6.3346e-7. A particle is randomly chosen to observe its trace. Figure 1 and Figure 2 present the result.

Figure 1. Path of standard PSO's particle.
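The linearly decreasing inertia weight used in the experiments is simple to compute. A sketch with the paper's settings as defaults; the function name inertia is ours:

```python
def inertia(k, iter_max, w_max=0.9, w_min=0.05):
    # linear decrease from w_max at k = 0 toward w_min at k = iter_max
    return w_max - (w_max - w_min) * k / iter_max
```

At k = 0 this gives 0.9, and it falls to 0.05 at the final iteration.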


Figure 2. Path of modified PSO's particle.

From Figure 1 and Figure 2, it is easy to see that the particle of standard PSO vibrated near the optimal position until converging at it, whereas the particle of the modified PSO searched more domains and then jumped away from the local optimal solution, which ensured that the algorithm converged to the global optimal solution stably.

Generally, an improvement based on the simulated annealing algorithm is applied to all particles. In order to compare the performance of the algorithms, we constructed a second improvement applied to all particles, based on the ideas mentioned earlier in this article. Its main idea is that for every particle it is acceptable for the fitness to become worse within a limited extent ε at the next iteration; otherwise, new positions are assigned to the particles randomly from their neighborhoods with radius r.

Next, the total iteration times and the mean optimal solution of the modified PSO and the second improvement are compared on the three benchmark functions. The parameters are set as follows: the original velocity is 0; the other parameters are the same as in the former case. The experiment is executed 100 times, each time with the same randomly assigned original settings for both algorithms. The results are shown in Table 3. If the maximum and minimum velocities of the particles are limited to 1 and -1, Table 4 shows the results.

From Table 3 and Table 4, it is obvious that both the modified PSO and the second improvement can jump out of local optimal convergence, which means they both have good global searching performance. For function 2 and function 3, the convergence rate of the modified PSO is faster than that of the second improvement. This implies that although the modified PSO only modifies the movement of two particles, the results are no worse than modifying all particles; sometimes it even converges better. For function 1, when the velocity of the particles is not limited, the performance of the modified PSO cannot match the second improvement, but they have the same performance when V_max and V_min are limited. Comparing vertically, both the modified PSO and the second improvement have a better convergence rate when the velocity of the particles is limited; in particular, the second improvement then needs only half the iterations of the modified

Table 3. Performance comparison between modified PSO and the second improvement without limited velocity.

benchmark    total iteration times                   mean optimal solution
function     modified PSO    second improvement     modified PSO    second improvement
F1           33036           13063                  4.3640e-6       1.5878e-6
F2           72438           86025                  1.3661e-5       2.3398e-5
F3           93506           155573                 1.1128e-5       2.7690e-4

Table 4. Performance comparison between modified PSO and the second improvement with limited velocity.

benchmark    total iteration times                   mean optimal solution
function     modified PSO    second improvement     modified PSO    second improvement
F1           12381           12034                  1.7625e-6       9.1580e-7
F2           37917           60139                  8.1302e-6       9.1149e-5
F3           53453           131291                 1.3804e-5       2.4989e-4
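The velocity limits used for Table 4 amount to clamping each velocity component after the update of Eq. (1). A sketch; the helper name clamp_velocity is ours:

```python
def clamp_velocity(v, v_min=-1.0, v_max=1.0):
    # keep every component of the velocity vector inside [v_min, v_max]
    return [max(v_min, min(v_max, vd)) for vd in v]
```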


PSO. This proves that it is important for PSO to limit the velocity of the particles.

5. CONCLUSIONS

In this paper, a modified PSO based on the simulated annealing algorithm is proposed. From the results of the experiments, we can draw the following conclusions:
1) The modified PSO has better performance in stability and global convergence; this is the most important conclusion.
2) Although the modified PSO only modifies the position and velocity of two particles, its convergence rate for the multi-peak function is much faster than that of the second improvement.
3) In the modified PSO, the maximum and minimum velocities of the particles have an obvious impact on the convergence rate. How to choose an appropriate velocity limitation is the next step of our research.

REFERENCES

[1] Kennedy, J. and Eberhart, R.C. (1995) Particle swarm optimization. IEEE International Conference on Neural Networks, 1942-1948.
[2] Shi, Y. and Eberhart, R.C. (1998) A modified particle swarm optimizer. Proceedings of the IEEE Congress on Evolutionary Computation, 69-73.
[3] Shi, Y. and Eberhart, R.C. (1999) Empirical study of particle swarm optimization. Proceedings of the 1999 Congress on Evolutionary Computation, 1945-1950.
[4] Wang, L.Z. (2006) Optimization of the solution to the problems simulated annealing to improve particle swarm algorithm. Journal of Liuzhou Teachers College, 21(3), 101-103.
[5] Gao, S., Yang, J.Y., Wu, X.J. and Liu, T.M. (2005) Particle swarm optimization based on the ideal of simulated annealing algorithm. Computer Applications and Software, 22(1), 103-104.
[6] Wang, Z.S., Li, L.C. and Li, B. (2008) Reactive power optimization based on particle swarm optimization and simulated annealing cooperative algorithm. Journal of Shandong University (Engineering Science), 38(6), 15-20.
[7] Wang, L.G., Hong, Y., Zhao, F.Q. and Yu, D.M. (2008) A hybrid algorithm of simulated annealing and particle swarm optimization. Computer Simulation, 25(11), 179-182.
[8] Gao, Y. and Xie, S.L. (2004) Particle swarm optimization algorithms based on simulated annealing. Computer Engineering and Applications, 40(1), 47-50.
[9] Pan, Q.K., Wang, W.H. and Zhu, J.Y. (2006) Effective hybrid heuristics based on particle swarm optimization and simulated annealing algorithm for job shop scheduling. Chinese Journal of Mechanical Engineering, 17(10), 1044-1046.
[10] Peer, E.S., Van den Bergh, F. and Engelbrecht, A.P. (2003) Using neighbourhoods with the guaranteed convergence PSO. Proceedings of the IEEE Swarm Intelligence Symposium, 235-242.
