
Scientific Reports | (2024) 14:5032 | https://doi.org/10.1038/s41598-024-54910-3
www.nature.com/scientificreports
Hippopotamus optimization algorithm: a novel nature-inspired optimization algorithm

Mohammad Hussein Amiri^1, Nastaran Mehrabi Hashjin^1*, Mohsen Montazeri^1, Seyedali Mirjalili^2,4 & Nima Khodadadi^3
The novelty of this article lies in introducing a novel stochastic technique named the Hippopotamus Optimization (HO) algorithm. The HO is conceived by drawing inspiration from the inherent behaviors observed in hippopotamuses, showcasing an innovative approach in metaheuristic methodology. The HO is conceptually defined using a trinary-phase model that incorporates their position updating in rivers or ponds, defensive strategies against predators, and evasion methods, which are mathematically formulated. It attained the top rank in finding the optimal value in 115 out of 161 benchmark functions, encompassing unimodal and high-dimensional multimodal functions, fixed-dimensional multimodal functions, the CEC 2019 test suite, the CEC 2014 test suite in dimensions of 10, 30, 50, and 100, and Zigzag Pattern benchmark functions. This suggests that the HO demonstrates noteworthy proficiency in both exploitation and exploration. Moreover, it effectively balances exploration and exploitation, supporting the search process. In light of the results from addressing four distinct engineering design challenges, the HO has effectively achieved the most efficient resolution while concurrently upholding adherence to the designated constraints. The performance evaluation of the HO algorithm encompasses various aspects, including a comparison with WOA, GWO, SSA, PSO, SCA, FA, GOA, TLBO, MFO, and IWO, recognized as among the most extensively researched metaheuristics; AOA, as a recently developed algorithm; and CMA-ES, a high-performance optimizer acknowledged for its success in the IEEE CEC competitions. According to the statistical post hoc analysis, the HO algorithm is determined to be significantly superior to the investigated algorithms. The source code of the HO algorithm is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/160088-hippopotamus-optimization-algorithm-ho.
Abbreviations
MaxIter Max number of iterations
BF Benchmark Function
UM Unimodal
MM Multimodal
FM Fixed-dimension Multimodal
HM High-dimensional Multimodal
ZP Zigzag Pattern benchmark test
TCS Tension/Compression Spring
WB Welded Beam
PV Pressure Vessel
WFLO Wind Farm Layout Optimization
F Function
CEC IEEE Congress on Evolutionary Computation
D Dimension
C19 CEC2019
C14 CEC2014
CMA-ES Evolution Strategy with Covariance Matrix Adaptation
1 Faculty of Electrical Engineering, Shahid Beheshti University, Tehran, Iran. 2 Centre for Artificial Intelligence Research and Optimization, Torrens University Australia, Adelaide, Australia. 3 Department of Civil and Architectural Engineering, University of Miami, Coral Gables, FL, USA. 4 Research and Innovation Center, Obuda University, Budapest 1034, Hungary. *email: [email protected]

MFO Moth-flame Optimization
AOA Arithmetic Optimization Algorithm
TLBO Teaching-Learning-Based Optimization
IWO Invasive Weed Optimization
GOA Grasshopper Optimization Algorithm
FA Firefly Algorithm
PSO Particle Swarm Optimization
SSA Salp Swarm Algorithm
CD Critical Difference
GWO Grey Wolf Optimization
SCA Sine Cosine Algorithm
WOA Whale Optimization Algorithm
Best The best result
Worst The worst result
Std. Standard Deviation
Mean Average best result
Numerous issues and challenges in today's science, industry, and technology can be defined as optimization problems. All optimization problems have three parts: an objective function, constraints, and decision variables^1. Optimization algorithms can be categorized in diverse manners for addressing such problems. Nonetheless, one prevalent classification method is based on the inherent approach to optimizing problems, distinguishing between stochastic and deterministic algorithms^2. Unlike stochastic methods, deterministic methods require more extensive information about the problem^3. However, stochastic methods do not guarantee finding a globally optimal solution. In today's context, the optimization problems we often encounter are nonlinear, complex, non-differentiable, piecewise, non-convex, and involve many decision variables^4. For such problems, employing stochastic methods tends to be more straightforward and more suitable, especially when we have limited information about the problem or intend to treat it as a black box^5.
One of the most important and widely used classes of stochastic approaches is metaheuristic algorithms. In metaheuristic algorithms, feasible initial candidate solutions are randomly generated. Then, iteratively, these initial solutions are updated according to the relationships specified by the metaheuristic algorithm. In each step, feasible solutions with better costs are retained based on the number of search agents. This updating continues until the stopping criterion is satisfied, typically reaching MaxIter, a limit on the Number of Function Evaluations (NFE), or a predefined value of the cost function set by the user. Because of the advantages of metaheuristic algorithms, they are used in various applications, and the results show that these algorithms can improve efficiency in these applications. A good optimization algorithm is able to create a balance between exploration and exploitation, in the sense that exploration attends to global search, while exploitation attends to local search around the obtained answers^6.
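The generic loop described above (random feasible initialization, iterative position updates, greedy retention of improving solutions, stop at MaxIter) can be sketched as follows. This is an illustrative skeleton only: the position-update rule here (a random step toward the current best) is a placeholder, not the HO update, and all names are ours.

```python
import random

def metaheuristic_minimize(cost, lb, ub, n_agents=30, max_iter=200, seed=0):
    """Generic population-based metaheuristic loop: random feasible
    initialization, iterative updates, and greedy retention of better
    solutions until MaxIter is reached. The update rule is a placeholder."""
    rng = random.Random(seed)
    m = len(lb)
    # Random feasible initial candidate solutions within [lb, ub]
    pop = [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(m)]
           for _ in range(n_agents)]
    costs = [cost(x) for x in pop]
    for _ in range(max_iter):
        best = pop[min(range(n_agents), key=costs.__getitem__)]
        for i in range(n_agents):
            # Placeholder update: move toward the best-known solution,
            # clamped to the bounds so candidates stay feasible
            cand = [min(ub[j], max(lb[j],
                        pop[i][j] + rng.random() * (best[j] - pop[i][j])))
                    for j in range(m)]
            c = cost(cand)
            if c < costs[i]:  # retain only solutions with better cost
                pop[i], costs[i] = cand, c
    k = min(range(n_agents), key=costs.__getitem__)
    return pop[k], costs[k]

# Usage: minimize the sphere function on [-5, 5]^2
x_best, f_best = metaheuristic_minimize(lambda x: sum(v * v for v in x),
                                        [-5.0, -5.0], [5.0, 5.0])
```

Swapping in a different update rule (e.g. the HO's trinary-phase model) changes only the inner loop; the surrounding initialize/update/retain scaffold is common to most metaheuristics.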
Numerous optimization algorithms have been introduced; however, introducing and developing new, highly innovative algorithms is still deemed necessary, as per the No Free Lunch (NFL) theorem^7. The NFL theorem asserts that the superior performance of a metaheuristic algorithm in solving specific optimization problems does not guarantee similar success in solving different problems. Therefore, the need for an algorithm that demonstrates improved speed of convergence and the ability to find the optimal solution compared to other algorithms is highlighted. The broad scope of utilizing metaheuristic optimization algorithms has garnered attention from researchers across multiple disciplines and domains. Metaheuristic optimization algorithms find applications in a wide range of engineering disciplines, including medical engineering problems, such as improving classification accuracy by adjusting hyperparameters using metaheuristic optimization algorithms and adjusting weights in neural networks^8 or fuzzy systems^9.
Similarly, these algorithms contribute to intelligent fault diagnosis and tuning controller coefficients^10 in control and mechanical engineering. In telecommunication engineering, they aid in identifying digital filters^11, while in energy engineering, they assist in tasks such as modeling solar panels^12, optimizing their placement, and even wind turbine placement^13. In civil engineering, metaheuristic optimization algorithms are utilized for structural optimization^14, while in the field of economics, they enhance stock portfolio optimization^15. Additionally, metaheuristic optimization algorithms play a role in optimizing thermal systems in chemical engineering^16, among other applications.
The distinctive contributions of this research lie in developing a novel metaheuristic algorithm termed the HO, rooted in the emulation of hippopotamuses' behaviors in the natural environment. The primary achievements of this study can be outlined as follows:
• The design of HO is influenced by the intrinsic behaviors observed in hippopotamuses, such as their position update in the river or pond, defence tactics against predators, and methods of evading predators.
• HO is mathematically formulated through a three-phase model comprising their position update, defence, and evasion of predators.
• To evaluate the effectiveness of the HO in solving optimization problems, it undergoes testing on a set of 161 standard BFs of various types: UM, MM, the ZP benchmark test, the CEC 2019 suite, and the CEC 2014 suite in dimensions of 10, 30, 50, and 100, to investigate the effect of the dimensions of the problem on the performance of the HO algorithm.
• The performance of the HO is evaluated by comparing it with the performance of twelve widely known metaheuristic algorithms.

• The effectiveness of the HO in real-world applications is tested through its application to tackle four engineering design challenges.
The article is structured into five sections. The "Literature review" section focuses on related work, while the "Hippopotamus Optimization Algorithm" section covers how the HO approach is introduced and modelled, along with HO's limitations. The "Simulation results and comparison" section presents simulation results and compares the performance of the different algorithms. The performance of HO in solving classical engineering problems is studied in the "Hippopotamus optimization algorithm for engineering problems" section, and the "Conclusions and future works" section provides conclusions based on the article's findings.
Literature review
As mentioned in the introduction, optimization algorithms are not confined to a singular discipline or specialized research area. This is primarily because numerous real-world problems possess intricate attributes, including nonlinearity, non-differentiability, discontinuity, and non-convexity. Given these complexities and uncertainties, stochastic optimization algorithms demonstrate enhanced versatility and a heightened capacity to address such challenges effectively. Consequently, they exhibit a more remarkable ability to accommodate and navigate the intricacies and uncertainties inherent in these problems. Optimization algorithms often draw inspiration from natural phenomena, aiming to model and simulate natural processes. Physical laws, chemical reactions, animal behavior patterns, social behavior of animals, biological evolution, game theory principles, and human behavior have received significant attention in this regard. These natural phenomena serve as valuable sources of inspiration for developing optimization algorithms, offering insights into efficient and practical problem-solving strategies.
Optimization algorithms can be classified from multiple perspectives. In terms of objectives, they can be grouped into three categories: single-objective, multi-objective, and many-objective algorithms^17. From the standpoint of decision variables, algorithms can be characterized as either continuous or discrete (or binary). Furthermore, they can be subdivided into constrained and unconstrained optimization algorithms, depending on whether constraints are imposed on the decision variables. Such classifications provide a framework for understanding and categorizing optimization algorithms based on different criteria. From another perspective, optimization algorithms can be categorized based on their sources of inspiration. These sources can be classified into six main categories: evolutionary algorithms, physics- or chemistry-based algorithms, swarm-based algorithms, human-inspired algorithms, mathematics-based algorithms, and game theory-inspired algorithms. While the first four categories are well-established and widely recognized, the mathematics-based and game theory-inspired categories may be less well known.
Swarm-based optimization algorithms are commonly utilized to model the collective behavior observed in animals, plants, and insects. For instance, the American Zebra Optimization Algorithm (ZOA)^18 draws its inspiration from the foraging behavior of zebras and their defensive behavior against predators during foraging. Similarly, the inspiration for Northern Goshawk Optimization (NGO)^19 comes from the hunting behavior of the northern goshawk. Among the notable algorithms in this category are Particle Swarm Optimization (PSO)^20, Ant Colony Optimization (ACO)^21, the Artificial Bee Colony (ABC) algorithm^22, Tunicate Swarm Algorithm (TSA)^23, Beluga Whale Optimization (BWO)^24, Aphid–Ant Mutualism (AAM)^25, artificial Jellyfish Search (JS)^26, Spotted Hyena Optimizer (SHO)^27, Honey Badger Algorithm (HBA)^28, Mantis Search Algorithm (MSA)^29, Nutcracker Optimization Algorithm (NOA)^30, Manta Ray Foraging Optimization (MRFO)^31, Orca Predation Algorithm (OPA)^32, Yellow Saddle Goatfish (YSG)^33, Hermit Crab Optimization Algorithm (HCOA)^34, Cheetah Optimizer (CO)^35, Walrus Optimization Algorithm (WaOA)^36, Red-Tailed Hawk algorithm (RTH)^37, Barnacles Mating Optimizer (BMO)^38, Meerkat Optimization Algorithm (MOA)^39, Snake Optimizer (SO)^40, Grasshopper Optimization Algorithm (GOA)^41, Social Spider Optimization (SSO)^42, Whale Optimization Algorithm (WOA)^43, Ant Lion Optimizer (ALO)^44, Grey Wolf Optimizer (GWO)^45, Marine Predators Algorithm (MPA)^46, Aquila Optimizer (AO)^47, Mountain Gazelle Optimizer (MGO)^48, Artificial Hummingbird Algorithm (AHA)^49, African Vultures Optimization Algorithm (AVOA)^50, Bonobo Optimizer (BO)^51, Salp Swarm Algorithm (SSA)^52, Harris Hawks Optimizer (HHO)^53, Colony Predation Algorithm (CPA)^54, Adaptive Fox Optimization (AFO)^55, Slime Mould Algorithm (SMA)^3, Spider Wasp Optimization (SWO)^56, Artificial Gorilla Troops Optimizer (GTO)^57, Krill Herd Optimization (KH)^58, Alpine Skiing Optimization (ASO)^59, Shuffled Frog-Leaping Algorithm (SFLA)^60, Firefly Algorithm (FA)^61, Komodo Mlipir Algorithm (KMA)^62, Prairie Dog Optimization (PDO)^63, Tasmanian Devil Optimization (TDO)^64, Reptile Search Algorithm (RSA)^65, Border Collie Optimization (BCO)^66, Cuckoo Optimization Algorithm (COA)^67, and the Moth-flame optimization algorithm (MFO)^68, all novel optimization algorithms introduced in recent years. They belong to the category of swarm-based optimization algorithms. These algorithms encapsulate the principles of swarm intelligence, offering effective strategies for solving optimization problems by emulating the cooperative and adaptive behaviors found in natural swarms.
Another category of optimization algorithms draws its inspiration from biological evolution, genetics, and natural selection. The Genetic Algorithm (GA)^69 is one of the most well-known algorithms in this category. Among the other notable algorithms in this category are the Memetic Algorithm (MA)^70, Differential Evolution (DE)^71, Evolution Strategies (ES)^72, Biogeography-Based Optimization (BBO)^73, Liver Cancer Algorithm (LCA)^74, Genetic Programming (GP)^75, the Invasive Weed Optimization algorithm (IWO)^76, Electric Eel Foraging Optimization (EEFO)^77, Greylag Goose Optimization (GGO)^78, and Puma Optimizer (PO)^79. The Competitive Swarm Optimizer (CSO)^80 is crafted explicitly for handling large-scale optimization challenges, taking inspiration from PSO while introducing a unique conceptual approach. In CSO, the adjustment of particle positions deviates from the inclusion of personal best positions or global best positions. Instead, it employs a

pairwise competition mechanism, allowing the losing particle to learn from the winner and adjust its position accordingly. The Falcon Optimization Algorithm (FOA)^81 is inspired by the hunting behavior of falcons. The Barnacles Mating Optimizer (BMO)^82 algorithm takes inspiration from the mating behavior observed in barnacles in their natural habitat. The Pathfinder Algorithm (PFA)^83 is tailored to address optimization problems with diverse structures. Drawing inspiration from the collective movement observed in animal groups and the hierarchical leadership within swarms, PFA seeks to discover optimal solutions akin to identifying food areas or prey.
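The pairwise competition mechanism attributed to CSO above can be sketched as follows. This is a simplified illustration under our own assumptions (the coefficient `phi` and the exact loser-update form follow commonly cited CSO descriptions, not the authors' code): particles are randomly paired, each pair competes on cost, the winner survives unchanged, and the loser learns from the winner.

```python
import random

def cso_step(pop, vel, cost, phi=0.1, rng=random.Random(1)):
    """One simplified Competitive Swarm Optimizer iteration: random pairing,
    pairwise competition on cost, winner unchanged, loser pulled toward the
    winner and (weakly, via phi) toward the swarm mean position."""
    n, m = len(pop), len(pop[0])
    mean = [sum(x[j] for x in pop) / n for j in range(m)]
    idx = list(range(n))
    rng.shuffle(idx)                      # random pairing of particles
    for a, b in zip(idx[::2], idx[1::2]):
        w, l = (a, b) if cost(pop[a]) <= cost(pop[b]) else (b, a)
        for j in range(m):
            r1, r2, r3 = rng.random(), rng.random(), rng.random()
            vel[l][j] = (r1 * vel[l][j]
                         + r2 * (pop[w][j] - pop[l][j])       # learn from winner
                         + phi * r3 * (mean[j] - pop[l][j]))  # drift to swarm mean
            pop[l][j] += vel[l][j]
    return pop, vel
```

Because winners pass to the next iteration unmodified, the best cost in the population can never deteriorate, which is the property that lets CSO dispense with explicit personal-best and global-best memories.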
Another category of optimization algorithms is based on physical or chemical laws. As the name of this category suggests, the concepts are inspired by physical laws, chemical reactions, or chemical laws. Some of the algorithms in this category include Simulated Annealing (SA)^84, Snow Ablation Optimizer (SAO)^85, Electromagnetic Field Optimization (EFO)^86, Light Spectrum Optimization (LSO)^87, String Theory Algorithm (STA)^88, Harmony Search (HS)^89, Multi-Verse Optimizer (MVO)^90, Black Hole Algorithm (BH)^91, and the Gravitational Search Algorithm (GSA)^92. The Artificial Electric Field Algorithm (AEFA)^93 draws inspiration from the principles of Coulomb's law governing electrostatic force. Further examples are the Magnetic Optimization Algorithm (MOA)^94, Chemical Reaction Optimization (CRO)^95, Atom Search Optimization (ASO)^96, Henry Gas Solubility Optimization (HGSO)^97, Nuclear Reaction Optimization (NRO)^98, Chernobyl Disaster Optimizer (CDO)^99, Thermal Exchange Optimization (TEO)^100, Turbulent Flow of Water-based Optimization (TFWO)^101, Water Cycle Algorithm (WCA)^102, Equilibrium Optimizer (EO)^103, Lévy Flight Distribution (LFD)^104, and the Crystal Structure Algorithm (CryStAl)^105, which takes inspiration from the symmetric arrangement of constituents in crystalline minerals like quartz.
Human-inspired algorithms derive inspiration from the social behavior, learning processes, and communication patterns found within human society. Some of the algorithms in this category include Driving Training-Based Optimization (DTBO)^106, Fans Optimization (FO)^107, Mother Optimization Algorithm (MOA)^108, Mountaineering Team-Based Optimization (MTBO)^109, and Human Behavior-Based Optimization (HBBO)^110. The Chef-Based Optimization Algorithm (CBOA)^111 models the process of acquiring culinary expertise through training programs. Other examples are Teaching–Learning-Based Optimization (TLBO)^112 and the Political Optimizer (PO)^113. In the War Strategy Optimization (WSO)^114 algorithm, two human strategies during war, attack and defence, are modelled. Further members of this category include EVolutive Election Based Optimization (EVEBO)^115, Distance-Fitness Learning (DFL)^116, and Cultural Algorithms (CA)^117. Supply–Demand-Based Optimization (SDO)^118 is inspired by the economic supply–demand mechanism and is crafted to emulate the dynamic interplay between consumers' demand and producers' supply. The Search and Rescue Optimization Algorithm (SAR)^119 takes inspiration from the exploration behavior observed during search and rescue operations conducted by humans. The Student Psychology Based Optimization (SPBO)^120 algorithm draws inspiration from the psychology of students who aim to enhance their exam performance and achieve the top position in their class. The Poor and Rich Optimization (PRO)^121 algorithm is inspired by the dynamics between the efforts of poor and rich individuals to improve their economic situations. The algorithm mirrors the behavior of both the rich, who seek to widen the wealth gap, and the poor, who endeavor to accumulate wealth and narrow the gap with the affluent.
Game-based optimization algorithms often model the rules of a game. Some of the algorithms in this category include the Squid Game Optimizer (SGO)^122, Puzzle Optimization Algorithm (POA)^123, and Darts Game Optimizer (DGO)^124.
Mathematical theories inspire mathematics-based algorithms. For example, in addition to the Arithmetic Optimization Algorithm (AOA)^125, the Chaos Game Optimization (CGO)^126 is inspired by chaos theory and fractal configuration principles. Other known algorithms in this category are the Sine Cosine Algorithm (SCA)^127, Evolution Strategy with Covariance Matrix Adaptation (CMA-ES)^128, and Quadratic Interpolation Optimization (QIO).
Hippopotamus optimization algorithm
In this section, we articulate the foundational inspiration and theoretical underpinnings of the proposed HO
Algorithm.
Hippopotamus
The hippopotamus is one of the fascinating creatures residing in Africa^129. This animal falls under the classification of vertebrates and specifically belongs to the group of mammals within the vertebrate category^130. Hippopotamuses are semi-aquatic organisms that predominantly occupy their time in aquatic environments, specifically rivers and ponds, as part of their habitat^131,132. Hippopotamuses exhibit a social behavior wherein they reside in collective units referred to as pods or bloats, typically comprising a population ranging from 10 to 30 individuals^133. Determining the gender of hippopotamuses is not easily accomplished, as their sexual organs are not external, and the only distinguishing factor lies in the difference in their weight. Adult hippopotamuses can stay submerged underwater for up to 5 min. This species, in terms of appearance, bears resemblance to venomous mammals such as the shrew, but its closest relatives are whales and dolphins, with whom they shared a common ancestor around 55 million years ago^134.
Despite their herbivorous nature and reliance on a diet consisting mainly of grass, branches, leaves, reeds, flowers, stems, and plant husks^135, hippopotamuses display inquisitiveness and actively explore alternative food sources. Biologists believe that consuming meat can cause digestive issues in hippopotamuses. These animals possess extremely powerful jaws, an aggressive temperament, and territorial behavior, which has classified them as one of the most dangerous mammals in the world^136. The weight of male hippopotamuses can reach up to 9,920 pounds, while females typically weigh around 3,000 pounds. They consume approximately 75 pounds of food daily. Hippopotamuses engage in frequent conflicts with one another, and occasionally, during these confrontations, one or multiple hippopotamus calves may sustain injuries or even perish. Due to their large size and formidable strength, predators generally do not attempt to hunt or attack adult hippopotamuses. However,

young hippopotamuses or weakened adult individuals become vulnerable prey for Nile crocodiles, lions, and spotted hyenas^134.
When attacked by predators, hippopotamuses exhibit a defensive behavior by rotating towards the assailant and opening their powerful jaws. This is accompanied by emitting a loud vocalization, reaching approximately 115 decibels, which instils fear and intimidation in the predator, often deterring them from pursuing such a risky prey. When the defensive approach of a hippopotamus proves ineffective, or when the hippopotamus is not yet sufficiently strong, it retreats rapidly at speeds of approximately 30 km/h to distance itself from the threat. In most cases, it moves towards nearby water bodies such as ponds or rivers^136.
Inspiration
The HO draws inspiration from three prominent behavioral patterns observed in the life of hippopotamuses. Hippopotamus groups are comprised of several female hippopotamuses, hippopotamus calves, multiple adult male hippopotamuses, and a dominant male hippopotamus (the leader of the herd)^136. Due to their inherent curiosity, young hippopotamuses and calves often display a tendency to wander away from the group. As a consequence, they may become isolated and become targets for predators.
The secondary behavioral pattern of hippopotamuses is defensive in nature, triggered when they are under attack by predators or when other creatures intrude into their territory. Hippopotamuses exhibit a defensive response by rotating themselves toward the predator and employing their formidable jaws and vocalizations to deter and repel the attacker (Fig. 1). Predators such as lions and spotted hyenas possess an awareness of this phenomenon and actively seek to avoid direct exposure to the formidable jaws of a hippopotamus as a precautionary measure against potential injuries. The final behavioral pattern encompasses the hippopotamus' instinctual response of fleeing from predators and actively seeking to distance itself from areas of potential danger. In such circumstances, the hippopotamus strives to navigate toward the closest body of water, such as a river or pond, as lions and spotted hyenas frequently exhibit aversion to entering aquatic environments.
Mathematical modelling of HO
The HO is a population-based optimization algorithm in which the search agents are hippopotamuses. In the HO algorithm, hippopotamuses are candidate solutions for the optimization problem, meaning that the position of each hippopotamus in the search space represents values for the decision variables. Thus, each hippopotamus is represented as a vector, and the population of hippopotamuses is mathematically characterized by a matrix. Similar to conventional optimization algorithms, the initialization stage of the HO involves the generation of randomized initial solutions. During this step, the vector of decision variables is generated using the following formula:

χ_i : x_{i,j} = lb_j + r · (ub_j − lb_j),   i = 1, 2, ..., N,   j = 1, 2, ..., m    (1)

where χ_i represents the position of the i-th candidate solution, r is a random number in the range 0 to 1, and lb_j and ub_j denote the lower and upper bounds of the j-th decision variable, respectively. Given that N denotes the population size of hippopotamuses within the herd and m represents the number of decision variables in the problem, the population matrix is formed by Eq. (2).
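Equation (1) amounts to sampling each decision variable uniformly within its box constraints. A minimal sketch (the names lb, ub, N, and m follow the text; everything else is ours):

```python
import random

def initialize_population(lb, ub, N, seed=None):
    """Generate the HO initial population per Eq. (1):
    x_{i,j} = lb_j + r * (ub_j - lb_j), with r uniform in [0, 1).
    Returns an N x m matrix (list of lists), one row per hippopotamus."""
    rng = random.Random(seed)
    m = len(lb)
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(m)]
            for _ in range(N)]

# Usage: 4 candidate solutions for a 3-variable problem on [-10, 10]^3
pop = initialize_population([-10.0] * 3, [10.0] * 3, N=4, seed=42)
```

Each row of the returned matrix is one hippopotamus (candidate solution), matching the population matrix of Eq. (2).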
Figure 1. (a–d) The defensive behavior of the hippopotamus against the predator^136.