Cohort Intelligence: A Socio-inspired Optimization Method

Intelligent Systems Reference Library, Volume 114

Series editors:
Janusz Kacprzyk, Polish Academy of Sciences, Warsaw, Poland
e-mail: [email protected]
Lakhmi C. Jain, University of Canberra, Canberra, Australia; Bournemouth University, UK; KES International, UK
e-mail: [email protected]; [email protected]
URL: http://www.kesinternational.org/organisation.php
About this Series
The aim of this series is to publish a Reference Library, including novel advances and developments in all aspects of Intelligent Systems, in an easily accessible and well-structured form. The series includes reference works, handbooks, compendia, textbooks, well-structured monographs, dictionaries, and encyclopedias. It contains well-integrated knowledge and current information in the field of Intelligent Systems. The series covers the theory, applications, and design methods of Intelligent Systems. Virtually all disciplines, such as engineering, computer science, avionics, business, e-commerce, environment, healthcare, physics, and life science, are included.
Anand Jayant Kulkarni, Ganesh Krishnasamy, Ajith Abraham

Cohort Intelligence: A Socio-inspired Optimization Method

Anand Jayant Kulkarni
Odette School of Business, University of Windsor, Windsor, ON, Canada
and
Department of Mechanical Engineering, Symbiosis Institute of Technology, Symbiosis International University, Pune, Maharashtra, India

Ganesh Krishnasamy
Department of Electrical Engineering, Faculty of Engineering, Universiti Malaya, Kuala Lumpur, Malaysia

Ajith Abraham
Machine Intelligence Research Labs (MIR Labs), Scientific Network for Innovation and Research Excellence, Auburn, WA, USA
Preface

This book is written for engineers, scientists, and students studying or working in the optimization, artificial intelligence (AI), or computational intelligence arena. It discusses the core and underlying principles and analysis of the different concepts associated with an emerging socio-inspired AI optimization tool referred to as cohort intelligence (CI).

The book discusses the CI methodology in detail, as well as several modifications for solving a variety of problems. The methodology is validated by solving several unconstrained test problems. In order to make CI applicable to real-world problems, which are inherently constrained, the CI method with a penalty function approach is tested on several constrained test problems, and a comparison of its performance is also discussed. The book also demonstrates the ability of the CI methodology to solve several cases of combinatorial problems, such as the traveling salesman problem (TSP) and the knapsack problem (KP). In addition, real-world applications of the CI methodology, obtained by solving complex and large-sized combinatorial problems from the healthcare, inventory, supply chain optimization, and cross-border transportation domains, are also discussed. The inherent ability of CI to handle constraints based on the probability distribution is also revealed and demonstrated using these problems. A detailed mathematical formulation, solutions, and comparisons are provided in every chapter. Moreover, a detailed discussion of modifications of the CI methodology for solving several problems from the machine learning domain is also provided.

The mathematical level in all chapters is well within the grasp of scientists as well as undergraduate and graduate students from the engineering and computer science streams. The reader is encouraged to have a basic knowledge of probability and mathematical analysis. In presenting CI and the associated modifications and contributions, the emphasis is placed on the development of the fundamental results from basic concepts. Numerous examples and problems are worked out in the text to illustrate the discussion and allow the reader to gain further insight into the associated concepts. The algorithms have been coded in MATLAB, and all the executable codes are available online at www.sites.google.com/site/oatresearch/cohort-intelligence.
Contents
1 Introduction to Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 What Is Optimization? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1.1 General Problem Statement . . . . . . . . . . . . . . . . . . . . . . . . 2
1.1.2 Active/Inactive/Violated Constraints . . . . . . . . . . . . . . . . . 3
1.1.3 Global and Local Minimum Points . . . . . . . . . . . . . . . . . . 3
1.2 Contemporary Optimization Approaches. . . . . . . . . . . . . . . . . . . . 4
1.3 Socio-Inspired Optimization Domain . . . . . . . . . . . . . . . . . . . . . . 6
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Socio-Inspired Optimization Using Cohort Intelligence . . . . . . . . . . 9
2.1 Framework of Cohort Intelligence . . . . . . . . . . . . . . . . . . . . . . . . 10
2.2 Theoretical Comparison with Contemporary Techniques . . . . . . . 13
2.3 Validation of Cohort Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . 14
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3 Cohort Intelligence for Constrained Test Problems . . . . . . . . . . . . . 25
3.1 Constraint Handling Using Penalty Function Approach . . . . . . . . 25
3.2 Numerical Experiments and Discussion . . . . . . . . . . . . . . . . . . . . 26
3.3 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
4 Modified Cohort Intelligence for Solving Machine Learning Problems . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2 The Clustering Problem and K-Means Algorithm . . . . . . . . . . . . . 41
4.3 Modified Cohort Intelligence . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.4 Hybrid K-MCI and Its Application for Clustering . . . . . . . . . . . . 43
4.5 Experiment Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
5 Solution to 0–1 Knapsack Problem Using Cohort Intelligence Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 55
5.1 Knapsack Problem Using CI Method . . . . . . . . . . . . . . . . . . . . . . 55
5.1.1 Illustration of CI Solving 0–1 KP . . . . . . . . . . . . . . . . . . . 56
5.2 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.3 Conclusions and Future Directions . . . . . . . . . . . . . . . . . . . . . . . . 70
5.4 Test Cases . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
6 Cohort Intelligence for Solving Travelling Salesman Problems . . . . 75
6.1 Traveling Salesman Problem (TSP) . . . . . . . . . . . . . . . . . . . . . . . 76
6.1.1 Solution to TSP Using CI . . . . . . . . . . . . . . . . . . . . . . . . . 76
6.2 Results and Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
6.3 Concluding Remarks and Future Directions . . . . . . . . . . . . . . . . . 85
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
7 Solution to a New Variant of the Assignment Problem Using Cohort Intelligence Algorithm . . . . . . . . . . . . . . . . . 87
7.1 New Variant of the Assignment Problem . . . . . . . . . . . . . . . . . . . 87
7.2 Probable Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.2.1 Application in Healthcare . . . . . . . . . . . . . . . . . . . . . . . . . 90
7.2.2 Application in Supply Chain Management . . . . . . . . . . . . 91
7.3 Cohort Intelligence (CI) Algorithm for Solving the CBAP . . . . . . 91
7.3.1 A Sample Illustration of the CI Algorithm for Solving the CBAP . . . . . . . . . . . . . . . . . . . . . 92
7.3.2 Numerical Experiments and Results . . . . . . . . . . . . . . . . . 94
7.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
8 Solution to Sea Cargo Mix (SCM) Problem Using Cohort Intelligence Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . 101
8.1 Sea Cargo Mix Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
8.2 Cohort Intelligence for Solving Sea Cargo Mix (SCM) Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
8.3 Numerical Experiments and Results . . . . . . . . . . . . . . . . . . . . . . . 106
8.4 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
9 Solution to the Selection of Cross-Border Shippers (SCBS) Problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
9.1 Selection of Cross-Border Shippers (SCBS) Problem . . . . . . . . . . 118
9.1.1 Single Period Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
9.1.2 Multi Period Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Chapter 1
Introduction to Optimization

In almost all human activities there is a desire to deliver the most with the least. For example, from a business point of view, maximum profit is desired from the least investment; maximum crop yield is desired with minimum investment in fertilizers; and maximum strength, longevity, efficiency, and utilization are desired with minimum initial investment and operational cost for various household and industrial equipment and machinery. To set a record in a race, for example, the aim is to run the fastest, i.e. in the shortest time.

The concept of optimization has great significance in both human affairs and the laws of nature: it is the inherent characteristic of achieving the best or most favorable outcome (minimum or maximum) from a given situation [1]. In addition, as the element of design is present in all fields of human activity, all aspects of optimization can be viewed and studied as design optimization without any loss of generality. This makes it clear that the study of design optimization can help not only in the human activity of creating optimal designs of products, processes, and systems, but also in the understanding and analysis of mathematical and physical phenomena and in the solution of mathematical problems. Constraints are an inherent part of real-world problems, and they have to be satisfied to ensure the acceptability of the solution. There are always numerous requirements and constraints imposed on the designs of components, products, processes, or systems in real-life engineering practice, just as in all other fields of design activity. Therefore, creating a feasible design under all these diverse requirements and constraints is already a difficult task, and ensuring that the feasible design created is also 'the best' is even more difficult.
All optimal design problems can be expressed in a standard general form stated as follows:

$$\begin{aligned}
\text{Minimize } \; & f(X), \quad X = (x_1, \ldots, x_N) \\
\text{subject to } \; & g_j(X) \le 0, \quad j = 1, \ldots, p \quad \text{(inequality constraints)} \\
& h_j(X) = 0, \quad j = 1, \ldots, q \quad \text{(equality constraints)} \\
& x_i^l \le x_i \le x_i^u, \quad i = 1, \ldots, N \quad \text{(side constraints)}
\end{aligned}$$

• where $x_i^l$ and $x_i^u$ are the lower and upper limits of $x_i$, respectively. These side constraints can be easily converted into normal inequality constraints by splitting each of them into two inequality constraints.
• Although all optimal design problems can be expressed in the above standard
form, some categories of problems may be expressed in alternative specialized
forms for greater convenience and efficiency.
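As a small illustration of the standard form (an assumed example, not one of the book's test problems), consider the two-variable problem

$$\begin{aligned}
\text{Minimize } \; & f(X) = x_1^2 + x_2^2 \\
\text{subject to } \; & g_1(X) = 1 - x_1 - x_2 \le 0, \\
& 0 \le x_1 \le 2, \quad 0 \le x_2 \le 2.
\end{aligned}$$

Here the side constraint $0 \le x_1 \le 2$ splits into the two inequality constraints $-x_1 \le 0$ and $x_1 - 2 \le 0$, illustrating the conversion noted above. The optimum is $X^* = (0.5, 0.5)$ with $f(X^*) = 0.5$, at which $g_1$ is active.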
The constraints in an optimal design problem restrict the entire design space to a smaller subset known as the feasible region, i.e. not every point in the design space is feasible (see Fig. 1.1).

• An inequality constraint $g_j(X)$ is said to be violated at the point $x$ if it is not satisfied there, i.e. $g_j(X) > 0$.
• If $g_j(X)$ is strictly satisfied, i.e. $g_j(X) < 0$, then it is said to be inactive at $x$.
• If $g_j(X)$ is satisfied at equality, i.e. $g_j(X) = 0$, then it is said to be active at $x$.
• The set of points at which an inequality constraint is active forms a constraint boundary which separates the feasible region from the infeasible region.
• Based on the above definitions, equality constraints can only be either violated ($h_j(X) \ne 0$) or active ($h_j(X) = 0$) at any point $x$.
• The set of points where an equality constraint is active forms a sort of boundary, both sides of which are infeasible. (A minimal sketch of these constraint-status checks follows this list.)
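To make the violated/inactive/active distinction concrete, the short Python sketch below classifies an inequality constraint at a point. The function name and the numerical tolerance tol are illustrative assumptions, not part of the text:

```python
def constraint_status(g_value, tol=1e-9):
    """Classify an inequality constraint g_j(X) <= 0 at a point,
    given its value g_value = g_j(X). tol is an assumed numerical
    tolerance for 'equal to zero'."""
    if g_value > tol:
        return "violated"   # g_j(X) > 0: the point is infeasible
    if abs(g_value) <= tol:
        return "active"     # g_j(X) = 0: the point lies on the constraint boundary
    return "inactive"       # g_j(X) < 0: the constraint is strictly satisfied

# Example: g1(X) = 1 - x1 - x2 at X = (0.5, 0.5) is active.
print(constraint_status(1 - 0.5 - 0.5))  # -> "active"
```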
Let the set of design variables that gives rise to a minimum of the objective function $f(X)$ be denoted by $X^*$ (the asterisk is used to indicate quantities and terms referring to an optimum point). The objective $f(X)$ is at its global (or absolute) minimum at the point $X^*$ if

$$f(X^*) \le f(X) \quad \text{for all feasible points } X.$$
[Fig. 1.1 Feasible region of a design space in variables $x_1$, $x_2$, bounded by constraint boundaries such as $g_1(x) = 0$, $g_2(x) = 0$, $g_3(x) = 0$ and $h_1(x) = 0$, with feasible points $x^a$, $x^b$ and an infeasible point $x^c$.]

[Fig. 1.2 Global and local minima and maxima of a single-variable function $f(x)$ over a closed feasible region $a \le x \le b$, showing the global minimum, local minima, a local maximum, the global maximum, and the constraint boundaries at $x = a$ and $x = b$.]
The objective has a local (or relative) minimum at the point $X^*$ if

$$f(X^*) \le f(X) \quad \text{for all feasible } X \text{ within a small neighborhood of } X^*.$$

A graphical representation of these concepts is shown in Fig. 1.2 for the case of a single variable $x$ over a closed feasible region $a \le x \le b$.
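As an illustrative worked example (the numbers are assumptions chosen to mirror the situation in Fig. 1.2, not taken from the text), consider

$$f(x) = x^3 - 3x, \qquad -2.5 \le x \le 2.5.$$

Setting $f'(x) = 3x^2 - 3 = 0$ gives the stationary points $x = \pm 1$: $x = 1$ is a local minimum with $f(1) = -2$, and $x = -1$ is a local maximum with $f(-1) = 2$. On the boundary, $f(-2.5) = -8.125$ and $f(2.5) = 8.125$, so the global minimum lies at the constraint boundary $x = -2.5$ rather than at the interior local minimum, while the global maximum lies at $x = 2.5$.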
Several mathematical optimization techniques are in practice, for example gradient methods, integer programming, branch and bound, the Simplex algorithm, dynamic programming, etc. These techniques can efficiently solve problems of limited size, and they are most applicable to linear problems. As the number of variables and constraints increases, the computational time to solve the problem may increase exponentially, which may limit their applicability. Furthermore, as the complexity of problem domains grows, solving such complex problems using mathematical optimization techniques becomes more and more cumbersome. In addition, certain heuristics have been developed to solve specific problems of a certain size; such heuristics have very limited flexibility for solving different classes of problems.
In the past few years a number of nature- and bio-inspired optimization techniques (also referred to as metaheuristics), such as Evolutionary Algorithms (EAs) and Swarm Intelligence (SI), have been developed. An EA such as the Genetic Algorithm (GA) works on the principle of the Darwinian theory of survival of the fittest individual
in the population. The population is evolved using operators such as selection, crossover, and mutation. According to Deb [2] and Ray et al. [3], GA can often reach very close to the global optimal solution and necessitates the incorporation of local improvement techniques. Similar to GA, the mutation-driven approach of Differential Evolution (DE) was proposed by Storn and Price [4]; it helps explore the solution space and further exploit it locally to reach the global optimum. Although easy to implement, it has several problem-dependent parameters that need to be tuned, and it may also require several associated trials to be performed.
Inspired by the social behavior of living organisms (such as insects and fishes) which can communicate with one another either directly or indirectly, the paradigm of SI is a decentralized, self-organizing optimization approach. These algorithms work on the cooperating behavior of the organisms rather than competition amongst them. In SI, every individual evolves by sharing information with others in the society. A technique such as Particle Swarm Optimization (PSO) is inspired by the social behavior of bird flocking and fish schooling in the search for food [5]. The fishes or birds are considered as particles in the solution space searching for the local as well as global optimum points; the directions of movement of these particles are decided by the best particle in the neighborhood and the best particle in the entire swarm (a sketch of this update rule is given below). Ant Colony Optimization (ACO) works on the ants' social behavior of foraging for food along the shortest path [6]. An ant is considered an agent of the colony. It searches for a better solution in its close neighborhood and iteratively updates its solution. The ants also update their pheromone trails at the end of every iteration. This helps every ant decide its direction, which may further self-organize the colony to reach the global optimum.
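The PSO movement rule referenced above can be sketched as follows. This is a minimal illustration of the canonical velocity/position update attributed to Kennedy and Eberhart [5]; the inertia weight w, the acceleration coefficients c1 and c2, and their values are standard PSO parameters assumed here for demonstration:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle.
    x, v, pbest, gbest are lists of equal length (one entry per variable)."""
    new_v, new_x = [], []
    for i in range(len(x)):
        r1, r2 = random.random(), random.random()
        vi = (w * v[i]                        # inertia: keep moving in the current direction
              + c1 * r1 * (pbest[i] - x[i])   # pull toward the particle's own best position
              + c2 * r2 * (gbest[i] - x[i]))  # pull toward the swarm's best position
        new_v.append(vi)
        new_x.append(x[i] + vi)
    return new_x, new_v
```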
Similar to ACO, the Bees Algorithm (BA) also works on the social behavior of honey bees finding food; however, the bee colony tends to optimize the number of members involved in particular pre-decided tasks [7]. The Bees Algorithm is a population-based search algorithm proposed by Pham et al. [7] in a technical report presented at Cardiff University, UK. It mimics the food-foraging behavior of honey bees. According to Pham and Castellani [8] and Pham et al. [7], the Bees Algorithm mimics the foraging strategy of honey bees looking for the best solution. Each candidate solution is thought of as a flower or a food source, and a population or colony of n bees is used to search the problem solution space. Each time an artificial bee visits a solution, it evaluates its objective value. Even though it has been proven effective for solving continuous as well as combinatorial problems [8, 9], some measure of the topological distance between solutions is required. The Firefly Algorithm (FA) is an emerging metaheuristic swarm optimization technique based on the natural behavior of fireflies, which rests on the bioluminescence phenomenon [10, 11]. Fireflies produce short, rhythmic flashes to communicate with other fireflies and attract potential prey. The light intensity (brightness) $I$ of a flash at a distance $r$ obeys the inverse square law, i.e. $I \propto 1/r^2$, in addition to the light absorption by the surrounding air. This makes most of
the fireflies visible only up to a limited distance, usually several hundred meters at night, which is enough to communicate. The flashing light of fireflies can be formulated in such a way that it is associated with the objective function to be optimized, which makes it possible to formulate optimization algorithms [10, 11].
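For reference, Yang's formulation [10] commonly combines the inverse square law with the air absorption (with an absorption coefficient $\gamma$) into a single expression for the perceived intensity; writing out that standard form (with $I_0$ the intensity at the source):

$$I(r) = \frac{I_0}{1 + \gamma r^2} \;\approx\; I_0 \, e^{-\gamma r^2},$$

which avoids the singularity of $1/r^2$ at $r = 0$ and decreases monotonically with distance.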
As with the other metaheuristic algorithms, constraint handling is one of the crucial issues being addressed by researchers [12].
References
1. Kulkarni, A.J., Tai, K., Abraham, A.: Probability collectives: a distributed multi-agent system
approach for optimization. In: Intelligent Systems Reference Library, vol. 86. Springer, Berlin
(2015) (doi:10.1007/978-3-319-16000-9, ISBN: 978-3-319-15999-7)
2. Deb, K.: An efficient constraint handling method for genetic algorithms. Comput. Methods
Appl. Mech. Eng. 186, 311–338 (2000)
3. Ray, T., Tai, K., Seow, K.C.: Multiobjective design optimization by an evolutionary
algorithm. Eng. Optim. 33(4), 399–424 (2001)
4. Storn, R., Price, K.: Differential evolution—a simple and efficient heuristic for global
optimization over continuous spaces. J. Global Optim. 11, 341–359 (1997)
5. Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proceedings of IEEE International
Conference on Neural Networks, pp. 1942–1948 (1995)
6. Dorigo, M., Birattari, M., Stützle, T.: Ant colony optimization: artificial ants as a
computational intelligence technique. IEEE Comput. Intell. Mag., 28–39 (2006)
7. Pham, D.T., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., Zaidi, M.: The bees algorithm.
Technical Note, Manufacturing Engineering Centre, Cardiff University, UK (2005)
8. Pham, D.T., Castellani, M.: The bees algorithm—modelling foraging behaviour to solve
continuous optimisation problems. Proc. ImechE, Part C, 223(12), 2919–2938 (2009)
9. Pham, D.T., Castellani, M.: Benchmarking and comparison of nature-inspired
population-based continuous optimisation algorithms. Soft Comput. 1–33 (2013)
10. Yang, X.S.: Firefly algorithms for multimodal optimization. In: Stochastic Algorithms:
Foundations and Applications. Lecture Notes in Computer Sciences 5792, pp. 169–178.
Springer, Berlin (2009)
11. Yang, X.S., Hosseini, S.S.S., Gandomi, A.H.: Firefly algorithm for solving non-convex
economic dispatch problems with valve loading effect. Appl. Soft Comput. 12(3), 1180–1186
(2012)
12. Deshpande, A.M., Phatnani, G.M., Kulkarni, A.J.: Constraint handling in firefly algorithm. In:
Proceedings of IEEE International Conference on Cybernetics, pp. 186–190 (2013)
Chapter 2
Socio-Inspired Optimization Using Cohort Intelligence

After a considerable number of learning attempts, the individual behavior of the candidates does not improve appreciably, and the candidates' behaviors become hard to distinguish. The cohort could be assumed to have become successful when the cohort behavior saturates to the same behavior for a considerable number of learning attempts.

This chapter discusses the CI methodology framework in detail and further validates its ability by solving a variety of unconstrained test problems. This demonstrates its strong potential for solving unimodal as well as multimodal problems.
[Fig. 2.1 Flowchart of the Cohort Intelligence algorithm: the learning loop repeats from START until the cohort behavior saturates and the convergence criterion is satisfied, at which point the algorithm STOPs.]
$$p^c = \frac{1/f(x^c)}{\sum_{c=1}^{C} 1/f(x^c)}, \quad (c = 1, \ldots, C) \qquad (2.2)$$
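Equation (2.2) assigns every candidate a selection probability inversely proportional to its objective value (for minimization). A minimal Python sketch of computing these probabilities and performing the roulette-wheel selection described later in this chapter might look as follows; the function names are illustrative assumptions, and positive objective values are assumed, as Eq. (2.2) requires:

```python
import random

def follow_probabilities(f_values):
    """p^c from Eq. (2.2): inverse-fitness weights for a minimization problem.
    f_values are the candidates' (positive) objective values f(x^c)."""
    inv = [1.0 / f for f in f_values]
    total = sum(inv)
    return [w / total for w in inv]

def roulette_wheel(probabilities):
    """Pick the index of the candidate to follow, proportionally to p^c."""
    r, cum = random.random(), 0.0
    for c, p in enumerate(probabilities):
        cum += p
        if r <= cum:
            return c
    return len(probabilities) - 1  # guard against floating-point round-off
```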
where $\lVert \Psi_i \rVert = (\lVert \Psi_i \rVert) \times r$, i.e. the width of the sampling interval is scaled by the sampling-interval reduction factor $r$.
Step 4. Each candidate $c$ $(c = 1, \ldots, C)$ samples $t$ qualities from within the updated sampling interval $\Psi_i^{c[?]}$, $i = 1, \ldots, N$ associated with every variable $x_i^{c[?]}$, $i = 1, \ldots, N$, and computes a set of associated $t$ behaviors, i.e. $F^{c,t} = \left\{ f(x^c)^1, \ldots, f(x^c)^j, \ldots, f(x^c)^t \right\}$, and selects the best function value $f^*(x^c)$ from within. This makes the cohort available with $C$ updated behaviors, represented as $F^C = \left\{ f^*(x^1), \ldots, f^*(x^c), \ldots, f^*(x^C) \right\}$.
Step 5. The cohort behavior could be considered saturated if there is no significant improvement in the behavior $f^*(x^c)$ of every candidate $c$ $(c = 1, \ldots, C)$ in the cohort, and the difference between the individual behaviors is not very significant for a considerable number of successive learning attempts, i.e. if

1. $\left| \max\left(F^C\right)^n - \max\left(F^C\right)^{n-1} \right| \le \epsilon$, and
2. $\left| \min\left(F^C\right)^n - \min\left(F^C\right)^{n-1} \right| \le \epsilon$, and
3. $\left| \max\left(F^C\right)^n - \min\left(F^C\right)^n \right| \le \epsilon$,

then every candidate $c$ $(c = 1, \ldots, C)$ expands the sampling interval $\Psi_i^{c[?]}$, $i = 1, \ldots, N$ associated with every quality $x_i^{c[?]}$, $i = 1, \ldots, N$ to its original one: $\Psi_i^{lower} \le x_i \le \Psi_i^{upper}$, $i = 1, \ldots, N$.
Step 6. If either of the two criteria listed below is valid, accept any of the $C$ behaviors from the current set of behaviors in the cohort as the final objective function value $f^*(x)$, i.e. the final solution, and stop; else continue to Step 1:
(a) the maximum number of learning attempts is exceeded, or
(b) the cohort saturates to the same behavior (satisfying the conditions in Step 5) $s_{max}$ times.
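Putting the steps together, a compact Python sketch of the CI loop might look as follows. This is a minimal illustration under stated assumptions, not the book's MATLAB code: the parameter values (C, t, r, epsilon, max_attempts, s_max) and helper names are assumed for demonstration, the interval shrinking is simplified to a geometric schedule, and the interval re-expansion on saturation from Step 5 is omitted for brevity.

```python
import random

def cohort_intelligence(f, lower, upper, C=5, t=7, r=0.95,
                        epsilon=1e-6, max_attempts=500, s_max=20):
    """Minimal CI sketch minimizing f over box bounds (per-variable lists).
    Assumes positive objective values, as Eq. (2.2) requires."""
    N = len(lower)
    # Initialization: random candidates and full-width sampling intervals.
    X = [[random.uniform(lower[i], upper[i]) for i in range(N)] for _ in range(C)]
    width = [upper[i] - lower[i] for i in range(N)]
    F = [f(x) for x in X]
    saturation_count, prev = 0, None

    for attempt in range(max_attempts):
        # Steps 1-2: selection probabilities (Eq. 2.2) and roulette-wheel follow.
        inv = [1.0 / v for v in F]
        p = [w / sum(inv) for w in inv]
        for c in range(C):
            followed = random.choices(range(C), weights=p)[0]
            # Step 3: shrink the sampling interval around the followed candidate.
            best_x, best_f = None, float("inf")
            for _ in range(t):  # Step 4: sample t behaviors, keep the best.
                cand = []
                for i in range(N):
                    half = width[i] * (r ** (attempt + 1)) / 2.0
                    lo = max(lower[i], X[followed][i] - half)
                    hi = min(upper[i], X[followed][i] + half)
                    cand.append(random.uniform(lo, hi))
                fc = f(cand)
                if fc < best_f:
                    best_x, best_f = cand, fc
            X[c], F[c] = best_x, best_f
        # Step 5: saturation test on the cohort's best/worst behaviors.
        cur = (max(F), min(F))
        if prev is not None and (abs(cur[0] - prev[0]) <= epsilon and
                                 abs(cur[1] - prev[1]) <= epsilon and
                                 abs(cur[0] - cur[1]) <= epsilon):
            saturation_count += 1
        prev = cur
        # Step 6: stop once the cohort has saturated s_max times.
        if saturation_count >= s_max:
            break
    best = min(range(C), key=lambda c: F[c])
    return X[best], F[best]

# Usage example on the Sphere function mentioned later in this chapter:
x_star, f_star = cohort_intelligence(lambda x: sum(v * v for v in x),
                                     lower=[-5.0] * 5, upper=[5.0] * 5)
```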
In ACO, the probability of an agent choosing a particular path increases with the number of agents that previously chose the same path [14]. However, it is difficult to solve continuous optimization problems using ACO directly, as there are limitations on the number of choices available to the ants at each stage. Some recent research has extended classical ACO to solve continuous optimization problems.
In CI, however, the collective effort of the swarm is replaced by the competitive nature of a cohort. Every candidate tries to follow the behavior of a candidate that has shown better results in that particular iteration. In following this behavior, it tries to incorporate some of the qualities that made that behavior successful. This competitive behavior motivates each candidate to perform better, and leads to an eventual improvement in the behaviors of all the candidates. This technique differs from PSO and bare-bones PSO in that it does not merely track the best solution, but is fully informed of the activities of its fellow candidates and follows a candidate selected using the roulette-wheel approach. However, it is also different from FIPSO, which keeps track of the entire swarm, in that it follows the behavior of only one candidate, not a resultant of the results presented by the entire swarm. CI also differs from GA, as there is no direct exchange of properties, nor any mutation. Rather, candidates decide to follow a fellow candidate, and try to imbibe the qualities that led that candidate to its solution. The values of these qualities are not replicated exactly; they are instead taken from a close neighborhood of the values of the qualities of the candidate being followed. This produces variation in the solutions obtained, and this is how the cohort can avoid getting trapped in local minima. CI differs from ACO in that the autocatalytic nature of the ants is replaced by the competitive nature of the cohort. Instead of having a tendency to follow the most-followed behavior, candidates in CI try to incorporate the best behavior in every iteration. This prevents the algorithm from getting caught in local minima by not relying on behavior that is only locally optimal. The CI algorithm has shown itself to be comparable to the best results obtained from the various techniques. The sharing of the best solution among candidates in CI gives all the candidates a direction to move towards, but the independent search of each candidate ensures that the candidates come out of local minima to reach the best solutions.
The CI parameters chosen for every unconstrained test problem are listed in Table 2.6. These parameters were derived empirically over numerous experiments.

The CI performance in solving a variety of unconstrained test problems is presented in Tables 2.1, 2.2, 2.3, 2.4 and 2.5, with increasing numbers of variables
associated with the individual problems. It is observed that with an increase in the number of variables, the computational cost, i.e. function evaluations and computational time, increased. However, the standard deviation values remained small for all the
functions. The effect of the parameters is visible in Fig. 2.3, where their effect on the Sphere function and the Ackley function is presented as representatives of unimodal and multimodal functions, respectively. The visualization of the convergence of the representative Ackley function is presented in Fig. 2.2 for learning attempts 1, 10, 15 and 30. For both types of functions, the computational cost, i.e. function evaluations and computational time, was observed to increase linearly with the number of candidates C (refer to Fig. 2.3a, b) as well as with the number of variations in behavior t (refer to Fig. 2.3e, f, k and l). This was because, with an increase in the number of candidates, the number of behavior choices, i.e. function evaluations, also increased. Moreover, with a smaller number of candidates C, the quality of the solution at the end of the first learning attempt (referred to as the initial solution) and the converged solution were quite close to each other, and importantly, the converged solution was suboptimal. The quality of both solutions improved with an increase in the number of candidates C. This was because fewer behavior choices were available with fewer candidates, whereas with an increase in the number of candidates the total choice of behavior also increased; as a result, the initial solution worsened whereas the converged solution improved, as sufficient time was provided for saturation. Due to this, a widening gap between the initial solution and the converged solution was observed.