
Resource Management in Cloud and Cloud-Influenced Technologies

for Internet of Things Applications


RATHINARAJA JEYARAJ, Kyungpook National University, Daegu, South Korea
ANANDKUMAR BALASUBRAMANIAM, Kyungpook National University, Daegu, South Korea
AJAY KUMARA M.A., Lenoir-Rhyne University, NC, USA
NADRA GUIZANI, University of Texas, Texas, USA
ANAND PAUL, Kyungpook National University, Daegu, South Korea
The trend of adopting Internet of Things (IoT) in healthcare, smart cities, Industry 4.0, etc. is increasing by means of cloud
computing, which provides on-demand storage and computation facilities over the Internet. To meet specific requirements of
IoT applications, the cloud has also shifted its service offering platform to its next-generation models, such as fog, mist, and
dew computing. As a result, the cloud and IoT have become part and parcel of smart applications that play significant roles
in improving the quality of human life. In addition to the inherent advantages of advanced cloud models, to improve the
performance of IoT applications further, it is essential to understand how the resources in the cloud and cloud-influenced
platforms are managed to support various phases in the end-to-end IoT deployment. Considering this importance, in this article,
we provide a brief description, a systematic review, and possible research directions on every aspect of resource management
tasks, such as workload modeling, resource provisioning, workload scheduling, resource allocation, load balancing, energy
management, and resource heterogeneity in such advanced platforms, from a cloud perspective. The primary objective of this
article is to help early researchers gain insight into the underlying concepts of resource management tasks in the cloud for
IoT applications.
CCS Concepts: · Computer systems organization → Cloud computing; · Networks → IoT.
Additional Key Words and Phrases: Cloud computing, Dew computing, Edge computing, Fog computing, Internet of Things,
Load balancing, Mist computing, Resource heterogeneity, Resource provisioning, Resource scheduling, Resource allocation

1 INTRODUCTION
The Internet of Things (IoT) [1] has gained significant popularity in the past decade owing to its widespread
deployment in various applications [2] such as smart cities, smart healthcare, smart farming, and smart industry.
It is rapidly making human life smarter and better than ever before by providing intelligent services with the
help of cloud technology [3], which provides storage and computation facilities (such as virtual data centers) as a
metered service and eliminates the need for an on-premise infrastructure. Owing to its flexible resource provisioning
and pricing model, cloud technology has become the backbone for the successful implementation of end-to-end
Authors’ addresses: Rathinaraja Jeyaraj, [email protected], Kyungpook National University, School of Computer Science and
Engineering, Daegu, South Korea, 41566; Anandkumar Balasubramaniam, [email protected], Kyungpook National University,
School of Computer Science and Engineering, Daegu, South Korea, 41566; Ajay Kumara M.A., [email protected], Lenoir-Rhyne University,
D&H Schort School of Computing Sciences & Mathematics, NC, USA, 28601; Nadra Guizani, University of Texas, School of Computer Science
and Engineering, Austin, Texas, USA, 78712; Anand Paul, [email protected], Kyungpook National University, School of Computer
Science and Engineering, Daegu, Daegu, South Korea, 41566.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that
copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first
page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy
otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from
[email protected].
© 2022 Association for Computing Machinery.
0360-0300/2022/12-ART $15.00
https://doi.org/10.1145/3571729

ACM Comput. Surv.



IoT applications. In general, the IoT logically interconnects and allows interoperations [4] between physical
nodes, also called end devices (mobiles, sensors, wired/wireless communication devices, such as radio-frequency
identification systems, actuators, wearable devices, closed-circuit television systems, medical equipment) and
virtual nodes (processes, applications, virtual machines (VMs), containers) over the existing Internet infrastructure.
Objects deployed in IoT environments continuously collect heterogeneous data (numeric, document, image, video,
and audio) [5] from heterogeneous nodes, which cannot be easily processed at the end devices, because they
are power and resource-constrained. Therefore, the collected data are transferred to a cloud data center (CDC)
over the Internet, where they are stored and processed using computing and resource-intensive algorithms (such
as machine and deep learning) for decision-making processes, without human intervention in the end-to-end
application pipeline. Currently, there are several cloud-IoT platforms offered by cloud service providers (CSPs),
such as Microsoft Azure IoT Hub [6], Amazon Web Services IoT [7], Cisco IoT Control Center [8], IBM Watson IoT
Platform [9], and Google Cloud IoT [10], to build and/or host end-to-end IoT applications.
Although the CDC offers elastic services, when there is a plethora of end devices in the IoT environment that
generate large amounts of data, it takes considerable time to transmit the data to the CDC over the Internet for
further processing. Moreover, by the time the result is obtained at the end devices, it may be meaningless/ineffective,
as the observable environment is highly dynamic for applications such as unmanned aerial vehicles (UAVs),
healthcare, and intelligent transportation systems. Therefore, to improve the quality of services (QoS) parameters
(such as latency, bandwidth consumption rate, and response time) for IoT applications, cloud technology has
transformed its shape into models, such as fog, mist, and dew computing, as shown in Figure 1, to host cloud
services in close proximity to the applications.
Fog computing - The objective of fog computing [11] is to extend cloud services and their features from a
centralized CDC to devices such as routers and switches (through the Internet) that are located near the IoT
environment. Currently, all networking devices are manufactured with sufficient storage and processing power
(possibly, with multi-core processors), capable of processing data and responding to IoT applications quickly
without moving the data into the cloud. This ensures that fog computing minimizes network bandwidth consumption
and response time, which are the primary QoS requirements for time-sensitive IoT applications. However, the
data are periodically synchronized to the CDC, because they are costly to generate and may prove valuable in the
future. Thus, fog computing has become a successful platform for many IoT applications that host
a plethora of end devices that generate large amounts of data.
Mist computing - To further minimize the response time and network bandwidth consumption rate for IoT
applications, mist computing [12] was introduced. Typical mist computing devices include switches,
routers, specialized computers (cloudlets and servers), graphics processing units (GPUs), gateways, firewalls, and
multiplexers, located in personal, local, and wide area networks. It is highly preferred for real-time applications
that require instant data processing. Some of the cloud services are hosted by these devices (instead of fog devices)
to improve certain QoS parameters for IoT applications such as smart cities, smart vehicles, and UAVs, in which
the end devices are mobile. Typically, services hosted in mist computing extend cloud features and advantages to
the end users/applications. So, the mist computing devices are configured to be compatible and comply with fog
computing and cloud platforms. After the data is processed at the mist layer, the IoT application decides which
data needs to be stored locally and which data needs to be sent to the cloud for analysis.
Edge computing - It is a decentralized and distributed computing platform/framework that brings applications/software
closer to the devices in the IoT environment. The devices used in mist computing are also used for
edge computing; however, the service (storage and computation) hosted on these devices need not
be a cloud service. For example, certain security features/applications and content (web, file, video) delivery
services for mobile customers are hosted at the network base station, which need not be connected to the cloud.


Fig. 1. Cloud-influenced resource offering models for IoT applications

Unsurprisingly, edge computing has been in practice since the late 1990s, even before cloud computing gained
popularity. Also, edge computing supports little interoperability, which might make IoT devices incompatible
with certain cloud services and operating systems, as edge computing does not support resource pooling, which
is one of the primary characteristics of cloud computing. Hence, edge computing is less scalable compared to
cloud/fog/mist computing.
In short, when the devices used in mist computing host standalone applications, the architecture is called edge
computing [13], and these devices are referred to as edge devices. Hence, although mist and edge computing
conceptually sound similar and use the same devices, they differ [14] based on the application (standalone or
cloud service) hosted in them. However, both mist and edge computing services ensure that data from the end
devices are processed at the devices located on the periphery of the IoT network, as close to the data source as
possible. Naturally, edge computing is not a replacement for the cloud. These two technologies work with each
other to add value through data.
Dew computing - Another version of cloud computing is dew computing [15], which is a miniature version of
mist computing. In this type of service, the cloud service is hosted on the end devices. Typical dew computing
services include Dropbox, Google Drive, iCloud, and Microsoft Teams, which are installed on smartphones and
computers. For instance, the Microsoft Teams application records online lectures/meetings on a smartphone.
These videos can be watched multiple times but cannot be copied to any other device. Therefore, some undefined


accesses to such videos are restricted. Moreover, the lectures need not be downloaded from the Microsoft
Teams cloud to the smartphone. This helps in minimizing bandwidth consumption and security vulnerabilities
while handling confidential data. This form of computing is widely applied in healthcare, education, military
surveillance, and many other IoT applications that focus on data privacy and security. Ultimately, dew computing
ensures the availability of cloud services and improves the user experience.
All these distributed processing platforms have their own set of strengths and weaknesses. Leveraging these
paradigms correctly will help in ensuring that the growing number of IoT devices can work efficiently. Also,
businesses can combine fog, mist, and cloud computing to exploit their strengths and minimize their
limitations. As these networking architectures complement each other, businesses can use them to design secure,
reliable, and highly functional IoT solutions. Understanding the essence of these emerging platforms, the authors
in [16] presented a comprehensive study of different network computing paradigms and research challenges,
including computation offloading, caching, security, and privacy under this hierarchical computing architecture.
In summary [17], fog computing extends cloud services to regions near the IoT environment, whereas mist
computing extends cloud services close to the network edge of the IoT environment. Moreover, dew computing
extends cloud services to the end devices. These cloud-influenced service models primarily minimize network
bandwidth consumption, response time, and latency, by leveraging the power of various devices, such as switches,
routers, gateways, cloudlets, and mini-servers, that are available between the IoT environment and the centralized
CDC. Nonetheless, cloud and fog computing are the primary targets for heavy IoT applications that employ
machine and deep learning algorithms for decision-making. Ultimately, they have become an inevitable platform
for the successful implementation of IoT applications. Hence, the architectural advantages of these resource-
offering models for IoT applications are quite evident. Notably, big data analytics for IoT applications has become
the most prominent application of cloud computing.
Although these cloud-powered platforms inherently improve certain QoS parameters for IoT applications,
adopting these technologies has increased the potential open research challenges that must be overcome to further
improve the performance of IoT applications. Consequently, many researchers and practitioners have focused on
solving a wide range of research problems related to resource management in the cloud and cloud-influenced
service models. This necessitates an understanding of how the resources in these platforms are managed to support
end-to-end IoT application deployment. Considering this importance, this article presents a brief description,
a systematic review, and possible research directions on the different aspects of resource management in the
cloud and cloud-influenced platforms, such as workload modeling, resource provisioning, workload scheduling,
resource allocation, load balancing, energy management, and resource heterogeneity, from a cloud perspective.
The complete flow of this review article varies from that presented in [2], which discusses the research problems
and directions in the cloud-IoT platform from an IoT perspective, and that in [1], which surveys the literature on
various research efforts from a data-centric perspective. The ultimate objective of this study is to target early
researchers in the field of cloud and IoT to aid them in gaining insight into various resource management tasks
in advanced platforms for IoT applications.
The remainder of this paper is organized as follows, as shown in Figure 2. Section 2 briefly provides the major
steps followed by CSPs while offering resources to IoT applications. Section 3 presents a detailed description of
workload characterization and modeling for a resource request. Then, a comprehensive discussion of existing
literature related to various resource management tasks in cloud and cloud-influenced models is provided in
Section 4. A list of various cloud-IoT simulators is presented in Section 5. Finally, Section 6 concludes the article.
A list of abbreviations used throughout this paper is presented in Appendix A.


Fig. 2. Structure of the article

2 MAJOR STEPS IN CLOUD RESOURCE PROVISIONING


Application and resource management in the cloud and cloud-influenced technologies play a vital role in improving
the QoS of IoT applications. In this section, we briefly describe the entire service life cycle from IoT workload
submission to the resource offering, which involves a sequence of steps that must be performed. As shown
in Figure 3, a typical CDC contains a set of racks (R_1, ..., R_r), each containing a set of physical machines
(PMs) (PM_1, ..., PM_m) connected to the top-of-rack switch, which is connected to other switches in the local
data center network. Fog and mist environments include a set of heterogeneous networking devices, such as
routers, switches, and gateways, which are registered and authorized by CSPs to be a part of the active resource
offering environment. The primary role of these resources is to perform networking functions and execute
user-defined jobs when resources are available. However, these physical resources cannot be offered as a service,
unlike the PMs in the CDC. Therefore, resources in fog and mist environments for IoT jobs are shared as virtual
resources, such as containers [18] (C_1, ..., C_n) or VMs [19] (VM_1, ..., VM_v). Generally, container-based resource
offerings are preferred in fog/mist environments because they are significantly lighter than VMs in terms of size
and performance. Recently, serverless computing [20], popularly known as function-as-a-service, has received
considerable attention for executing IoT workloads by sharing the resources available in containers/VMs with
multiple users.
CSPs roughly adhere to the following four major steps to offer these resources as a service for jobs/workloads
(W_1, ..., W_k) in an IoT environment, as shown in Figure 3.
Step 1 - A set/batch of workloads is defined for an IoT application and submitted to the CSP that is responsible
for managing the resources across the cloud environment. Based on the QoS parameters mentioned in the


service-level agreement (SLA), the amount and type of resources are determined. Sometimes, periodic jobs are
modeled to dynamically predict the number of virtual resources that are optimal for a given workload. It begins
with workload characterization using its past execution history and the current resource requirements. This
could potentially avoid resource over/under-provisioning.
Step 2 - Once the demanded/predicted virtual resources (VMs/containers) are discovered, they are provisioned,
scheduled, and allocated to designated physical resources in the CDC/fog/mist/dew environment.
Step 3 - When virtual resources are successfully allocated in their respective locations, the workloads are
scheduled to be executed in a parallel and/or distributed fashion. Then, the data from the IoT environment are
streamed through the designated physical devices to crunch and extract useful information for decision-making
at the end devices.

Fig. 3. Workload and resource (physical/virtual) management in cloud-IoT


Step 4 - Finally, the provisioned resources are monitored and scaled in/out elastically based on the demand and
current load. Accordingly, resources (VMs/containers) and workflows are scheduled to satisfy the QoS parameters
specified in the SLA. Ultimately, IoT applications receive flawless services from CSPs to make decisions in
real-time.
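The elastic scaling decision in Step 4 can be sketched in a few lines. The thresholds, default values, and function name below are illustrative assumptions, not any CSP's actual autoscaling policy or API:

```python
def scaling_decision(cpu_utilization, n_instances,
                     high=0.80, low=0.30, min_instances=1):
    """Toy threshold-based autoscaler: return the new VM/container count."""
    if cpu_utilization > high:
        return n_instances + 1   # scale out: demand exceeds current capacity
    if cpu_utilization < low and n_instances > min_instances:
        return n_instances - 1   # scale in: release idle (costly) resources
    return n_instances           # load is within bounds: no change

print(scaling_decision(0.95, 4))  # 5 (scale out)
print(scaling_decision(0.10, 4))  # 3 (scale in)
print(scaling_decision(0.50, 4))  # 4 (steady)
```

Production autoscalers layer cooldown periods and SLA-aware policies on top of such threshold rules, but the core monitor-and-adjust loop is the same.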
To obtain a glimpse of these resource management-related tasks, especially, in the cloud, Jennings et al. [21]
presented a detailed discussion of the concepts and illustrated a conceptual framework that provides a high-
level view of the functional components of a cloud resource management system and their interactions. In the
subsequent sections, we present a simple and detailed description of each topic with some recent literature,
followed by possible research questions (RQs) for further research, at the end of each section.

3 WORKLOAD
A workload refers to a job/task from an application with all possible inputs from its environment over time. Modern
workloads from IoT applications are categorized as compute, memory, and input/output (I/O) (network/disk)
intensive workloads based on the nature of the algorithms such as machine/deep learning, optimization, and
query processing. Some of these workloads are executed at the end devices (dew computing), which are power,
and resource-constrained, whereas others are executed at the devices in the mist/fog/CDC environments, based
on QoS parameters, such as response time, throughput, privacy, and security. In general, these workloads are
submitted to the CSP with the desired resource speciications; sometimes, the required resources are predicted
based on the nature of the workload. In this section, we deine diferent types of workloads from IoT applications
and describe how workload characterization and modeling can help in determining the required resources for
workload execution, which will beneit from a pay-per-use basis.
To understand the different types of workloads, consider a computer program, also called a job (J), submitted for execution.
If a set of jobs is submitted together for execution, it is called batch processing, which does not interact with the
user. J may be divided into sub-activities called sub-tasks {t_1, t_2, ..., t_n}. For example, J_1 may be divided into
two tasks, t_1 and t_2, and J_2 may be divided into four tasks, t_1, t_2, t_3, and t_4. Let us consider matrices A and B for
matrix multiplication (AB). Multiplying one row of A by one column of B may be considered a task. Specifically,
a task represents the part of a job executed serially (single thread) by a processor/core. The tasks may be further
divided into sub-tasks, and the depth of division is not limited, in order to exploit parallelism.
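The row-by-row decomposition described above can be made concrete with a small sketch (the helper names are ours, for illustration): each call to `row_task` is one serially executed task, and the tasks are mutually independent, so they could be assigned to different cores.

```python
def row_task(row, B):
    """One task: multiply a single row of A by every column of B."""
    n_cols = len(B[0])
    return [sum(row[k] * B[k][j] for k in range(len(B))) for j in range(n_cols)]

def matmul_as_tasks(A, B):
    """The job J = AB expressed as one independent task per row of A."""
    return [row_task(row, B) for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul_as_tasks(A, B))  # [[19, 22], [43, 50]]
```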
Workloads can be loosely classified into four categories based on the input received, job nature, job divisibility,
and varying characteristics of the workload. These are briefly discussed below.
Based on the input received by a job, the workload is classified into two types.
• Static workload - A job that is executed after all necessary inputs have already been accumulated is called
static workload; this is also referred to as batch processing. Big data IoT applications [22] are predominantly
classified under this category. For example, to train computer vision-based applications like self-driving
cars, the data are initially accumulated in the cloud, and then deep learning algorithms are trained. Batch
processing applications are typically hosted in the cloud because storage and computational resources are
scarce in the dew/mist/fog environment.
• Dynamic workload - A job that continuously processes the data, as it arrives from its environment, is called
dynamic workload; this is also referred to as stream processing. For example, anomaly detection in the
automated manufacturing industry in real-time can save billions of dollars. Jobs of this type are scheduled in
a fog/mist environment, where the incoming data can be processed quickly to provide a real-time response.
However, training machine learning algorithms on streaming data in mist/fog environments is crucial. A
systematic survey on machine learning for stream analytics is presented in [23].
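The static/dynamic distinction can be illustrated with a toy sketch (function names are ours): the batch job runs only after its entire input has been accumulated, whereas the stream job emits a result for every record as it arrives, here simulated with a generator.

```python
def batch_job(records):
    """Static workload: all inputs are available before execution starts."""
    return sum(records)

def stream_job(source):
    """Dynamic workload: process each record on arrival, emit running totals."""
    total = 0
    for record in source:
        total += record
        yield total  # a continuously updated aggregate, e.g. for dashboards

print(batch_job([1, 2, 3]))               # 6
print(list(stream_job(iter([1, 2, 3]))))  # [1, 3, 6]
```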
Some workloads are executed at the end devices (dew computing) to preserve data conidentiality, privacy,
and security for healthcare and military applications. In such cases, federated learning [24] is a promising


framework that executes machine learning algorithms on the end/edge devices themselves [25]. In the case of mist and
fog computing, distributed machine learning is applied.
Based on the job nature, the workload is divided into three types.

• CPU (compute)-intensive workload - A job that spends most of its time with the processor is called a CPU or
compute-intensive workload. Machine and deep learning algorithms for IoT applications [26] that perform
artificial intelligence tasks, such as computer vision and natural language processing, are the best examples
of this type of workload. Applications such as weather prediction and protein structure evaluation are
called high-performance workloads because they require thousands of cores or a cluster of nodes to perform
tera/peta floating-point operations per second.
• Memory-intensive workload - A job that requires more memory capacity to support computation is called
a memory-intensive workload. IoT applications, such as anomaly detection in the smart industry, oil and
gas prediction, and smart city management, demand real-time responses. To achieve this, we need to
process streaming data in memory before it is stored on disks or moved onto the cloud. Moreover, machine
and deep learning algorithms [23] need more memory to retain their intermediate results and run a
large number of iterations. Spark [27], a big data processing tool that supports distributed machine and
deep learning models, requires more memory because it retains intermediate results in memory instead of
writing them to and reading them from the disks.
• Network/disk (I/O)-intensive workload - A job that predominantly deals with I/O resources during execution
is called an I/O-intensive workload. Traffic management, supply chain management, self-driving cars, etc.
generate large amounts of data, which require significant bandwidth and storage. Most of these applications
involve either batch or stream processing. A workload that requires large amounts of data to be processed
per unit of time is called a high-throughput workload. In general, Hadoop [28] and Spark [29] are used for
big data processing; however, leveraging the power of these tools is significantly challenging in fog and
mist environments, where the storage resource is constrained and highly heterogeneous. For streaming
applications, tools such as Storm [30], Kafka [31], and Spark [29] are used because they involve more
network activity than disk access activity.

Based on job divisibility, the workload is divided into two types.

• Indivisible job - A job that cannot be divided into sub-tasks is called an indivisible job; it is also called a
sequential job.
• Divisible job - A job that is divisible into sub-tasks is called a divisible job, which is further classiied into
two types.
– If the tasks are independent of each other and executed in parallel, they are called loosely coupled
tasks (also referred to as a bag of tasks), which can be executed in any order. A job with a bag of tasks
is sometimes denoted as embarrassingly parallel tasks, as parallelization can be performed with no
additional efort. Hadoop and Spark typically execute a bag of tasks [32] for various applications. IoT
applications (e.g., transaction analysis in retail management, mobility analysis in transportation, and big
data visualization) that do not require real-time responses belong to this category.
– If the tasks are dependent on each other, they are called tightly coupled tasks and cannot be executed
arbitrarily. Therefore, these tasks are executed in a certain order. Specifying the order of execution among
tasks is called a workflow [33]. Most IoT workloads that employ machine and deep learning algorithms
are designed with a workflow for execution, as data dependency exists between tasks. In-depth knowledge
of the various parts of deep learning models that can be distributed to support concurrency, which can then
be executed on the dew/mist/fog devices, is briefly provided in [34].
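A bag of loosely coupled tasks can be run in parallel with no coordination beyond collecting the results; the sketch below uses Python's standard thread pool, and the task body is merely a stand-in for real work.

```python
from concurrent.futures import ThreadPoolExecutor

def task(x):
    # An embarrassingly parallel unit of work: independent of all other tasks,
    # so it may start and finish in any order on any worker.
    return x * x

inputs = [1, 2, 3, 4, 5]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() collects results back in submission order, even though the
    # individual tasks complete in arbitrary order.
    results = list(pool.map(task, inputs))
print(results)  # [1, 4, 9, 16, 25]
```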


A workflow is typically represented by a directed acyclic graph (DAG) to exhibit dependency among tasks and
emphasize the order of execution. A typical DAG contains a set of nodes and edges to denote tasks and their
dependencies. The required resources for each task in the graph can also be represented as a matrix and input
along with the DAG. The problem with the matrix representation is that, if there are n tasks in a workflow, there
has to be an n × n matrix, which may exceed the size of the available memory. However, the adjacency list
representation of the graph uses a linked list, which does not have to load all pointers (edges) into the memory
simultaneously. Graph-based NoSQL databases, such as Neo4j, Titan, OrientDB, and RedisGraph, can be used to
store DAGs in a distributed manner across multiple nodes. Graph-based data processing tools, such as Pregel,
Giraph, GraphX, GraphLab, PowerGraph, and GraphChi, can be used to store and extract meaningful information
from the graph data.
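The adjacency-list representation and the derivation of a valid execution order can be sketched as follows (the task names and edges are illustrative). Kahn's algorithm repeatedly schedules tasks whose dependencies have all completed, which is exactly the behavior a workflow engine needs.

```python
from collections import deque

# Adjacency list: each task maps to the tasks that depend on its output,
# so only the out-edges of a node need to be in memory at once.
dag = {
    "t1": ["t3"],
    "t2": ["t3"],
    "t3": ["t4"],
    "t4": [],
}

def execution_order(dag):
    """Kahn's algorithm: topologically order the tasks of a workflow DAG."""
    indegree = {t: 0 for t in dag}
    for successors in dag.values():
        for s in successors:
            indegree[s] += 1
    ready = deque(t for t, d in indegree.items() if d == 0)  # no dependencies
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)           # t's inputs are available: schedule it
        for s in dag[t]:
            indegree[s] -= 1
            if indegree[s] == 0:  # all of s's dependencies have finished
                ready.append(s)
    return order

print(execution_order(dag))  # ['t1', 't2', 't3', 't4']
```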
Based on the varying characteristics of the workload, it is divided into two types.
• Homogeneous workload - Tasks in this type of workload receive a fixed amount of data and execute the
same program with pre-defined resources. Querying event detection patterns from a stream of data is an
example of this type of workload.
• Heterogeneous workload - If the tasks in a workload receive inputs of varying sizes and execute different
programs that demand varying resources (apart from the initial assignment), they are called heterogeneous
workloads. For example, the amount of data processed for traffic pattern analysis varies over time throughout
the day. Another example is a typical prediction model for an IoT environment that involves a sequence
of activities framed as workloads. Each task in this workload performs a different activity. For a multi-
modal objective [35], knowledge is extracted from different types of input data, which require independent
pipelines for interaction based on the workflow.
IoT jobs may fall under more than one category, which makes them difficult to manage. Significant research
has been published on workflow management for cloud-powered resource offering models. Different aspects of
application management, such as application architecture, placement, and maintenance in a fog environment, were
systematically reviewed and explained in [36], with the outcomes of various research directions. In [37], the authors
extensively discussed the workflow management system, mathematical formulation of workflow objective
functions, analysis of different resource-offering models, and the strengths and weaknesses of existing workflow
scheduling strategies on cloud-influenced platforms. Similarly, the authors in [33] provided a comprehensive
literature review on workflow management system architecture and tasks, such as workload submission, resource
provisioning, and scheduling, without any manual intervention. To hide various workflow management-related
complexities from IoT users, CSPs offer workflow management systems as a service [38], which extracts most of
the inputs for analysis from the submitted workflow itself. Using this service, jobs/tasks are submitted along
with job specifications, such as the number of tasks, their nature, dependency, and QoS. If there exists global
synchronization among the tasks, as in scientific workflows [39], it is called bulk synchronous processing. Such
workflows are highly compute-intensive; therefore, fog and mist devices must be federated [40] with a centralized
cloud. This requires a workflow management system to automatically distribute data blocks and tasks, as detailed
in [38]. With this brief understanding, the subsequent sections provide a short note on the pressing needs of
workload characterization and modeling for IoT applications.

3.1 Workload characterization


In general, IoT users register with the CSP to submit the workload, as shown in Figure 3; the required amount of
resources is specified while submitting the workflow of the IoT application. The CSP provides a set of fields to
specify the desired QoS (with a range of values) apart from the SLA. Upon successful registration, a cloud service
application is connected to the IoT application environment to collect data from the end devices. Generally,
the precise resource requirements (in terms of virtual CPU (vCPU), memory, storage, and network bandwidth)

ACM Comput. Surv.


10 • Rathinaraja et al.

for IoT applications cannot be determined at the time of job submission owing to the dynamic nature of the
data stream from the IoT environment and the nature of jobs. Therefore, it is essential to build a system that
automatically identifies the nature of the workload [19] based on its characteristics (features). This is called
workload characterization and is used to dynamically predict the type (VM/container) and amount of resources
required for IoT applications. Occasionally, resources are dynamically scaled in/out based on the demand for
applications. This plays a vital role in minimizing resource costs and offering flexible and suitable resources for
IoT applications, such that resource over- and under-provisioning are avoided. To characterize the nature of the
workload, data on a set of features related to the workloads are collected from the previous execution history and
user inputs to classify them into one of the several categories specified in the previous section. Typical features of
the workloads are job size, latency, amount of CPU, memory, or I/O resources used, and completion time. Users
are prompted to provide inputs for some of these features while submitting the workflow to the web portal of the
CSP. However, these user inputs are optional.
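As a minimal illustration of this classification step, the rule-based sketch below assigns a coarse category from execution-history features; the feature names, thresholds, and labels are hypothetical and not drawn from any surveyed system:

```python
def characterize(features):
    """Assign a coarse workload class from execution-history features.

    features: dict with 'cpu_util', 'mem_util', 'io_rate' as fractions of
    capacity averaged over previous runs, and an optional 'deadline_ms'.
    """
    scores = {
        "compute-intensive": features["cpu_util"],
        "memory-intensive": features["mem_util"],
        "io-intensive": features["io_rate"],
    }
    label = max(scores, key=scores.get)  # dominant resource wins
    # Latency-sensitive jobs are flagged separately: a tight deadline
    # constrains placement (mist/fog vs. central cloud), not just sizing.
    if features.get("deadline_ms", float("inf")) < 100:
        label += ", latency-sensitive"
    return label

print(characterize({"cpu_util": 0.9, "mem_util": 0.3, "io_rate": 0.2}))
# compute-intensive
```

In practice, the surveyed works replace such hand-written rules with learned classifiers (e.g., the ensemble methods of [46]) trained on execution traces.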
Workload characterization has been successfully applied to cloud resource provisioning in various applications.
For example, to characterize the workloads of web applications (social networks, video services, etc.), a previous
study [41] presented a survey of various approaches and modeling techniques used for workload characterization.
To analyze the performance of storage drives and characterize the disk resource requirement (read or write)
of applications, researchers [42] used features such as transfer length, access pattern, resource utilization, and
throughput for modeling the workloads. In contrast, another study [43] performed VM workload characterization
using the trace collected from the hypervisor for efficient resource allocation to the workflow. A similar approach
was followed in [44] to characterize the workload of container-based services. Similarly, Hadoop configuration
parameters were characterized in [45] using previous job execution history to improve resource-allocation
decisions. Moreover, to characterize Hadoop workloads, another study [46] used ensemble learning and metric
importance analysis to quantify the importance of workloads. Occasionally, I/O workloads are characterized to
determine the appropriate data mining algorithm for I/O traffic analysis [47].
RQ 1: Can we characterize the IoT workloads to determine the type of virtual resource (VM/container), including
serverless computing, to be hosted on a cloud-influenced platform?
Characterizing IoT workloads helps us determine the type and quantity of resources required to accomplish
tasks. Several studies have been conducted to determine the type and size of VMs [43] and containers [44] that can
deploy the applications. The authors used hypervisor/container traces to extract features related to different types
of workloads. Similarly, another study [48] discussed how workloads were characterized to achieve serverless
computing on top of VMs in the cloud. However, characterizing IoT workloads to determine the type and size of
virtual resources, including the incorporation of serverless computing, could minimize the cost and latency for
IoT applications, which could be a potential scope in this direction.

3.2 Workload modeling


Inferring the characteristics and patterns of IoT workloads while exploiting the power of resources in a mist-
fog-cloud resource environment to guarantee a certain QoS is challenging. Experimenting in real time with
every new idea is also time- and effort-intensive. Therefore, mist-fog-cloud environments must be simulated to
understand the resource utilization for various workloads. Workload modeling [49] is useful for analyzing the
performance of systems and applications by simulating different ideas in a short time before deploying them
in real time. Essentially, it is the process of representing a real environment as a software model on which
simulations can be run. In general, benchmark programs [50] are typically used to measure the performance of
cloud service systems in terms of parameter values in real time. However, performing the same work in a
simulated environment to identify the performance and resource usage behavior is significantly easier. In this
aspect, workload characterization is the first step [51] in workload modeling. After choosing the correct features



Resource Management in Cloud and Cloud-Influenced Technologies for Internet of Things Applications • 11

of workloads, a mathematical model is constructed to reflect real workload behavior, using which it is possible
to infer the amount and type of resources required.
Moreover, workload modeling helps to generate synthetic workloads for problems in which a real workload
is not publicly available to conduct performance evaluation with new algorithms/ideas. For instance, to evaluate
the performance of infrastructure as a service (IaaS), which is a cluster of virtual/physical resources, on scalable
workloads, benchmark programs were modeled [52] with different performance metrics. Such synthetic workloads
must represent the workload of a real IoT system, and the result must align with the outcome of the real application.
In practice, setting up a real cloud-fog-mist execution environment to perform empirical experiments on IoT
workloads is not feasible. Because CSPs usually do not reveal their workload execution history, researchers
do not have the opportunity to work with real workloads in a distributed cloud-fog environment. However,
they can build a model to mimic the workloads for simulation based on traces/logs of workloads executed in a
cloud-fog-mist environment. For example, when the arrival times of a set of n jobs are available,
• the arrival time of the (n + 1)-th job can be predicted using a Poisson distribution,
• the inter-arrival time of subsequent jobs can be determined using an exponential distribution.
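These two views are equivalent: counts of arrivals per interval follow a Poisson distribution exactly when the gaps between arrivals are exponentially distributed. A short sketch, assuming a hypothetical arrival rate of 2 jobs per time unit:

```python
import random

random.seed(7)
RATE = 2.0  # hypothetical mean arrival rate (jobs per time unit)

def next_arrival(last_arrival, rate):
    # Inter-arrival times of a Poisson process are Exponential(rate),
    # so the (n+1)-th arrival is the n-th plus one exponential draw.
    return last_arrival + random.expovariate(rate)

arrivals = [0.0]
for _ in range(1000):
    arrivals.append(next_arrival(arrivals[-1], RATE))

mean_gap = arrivals[-1] / (len(arrivals) - 1)
print(f"mean inter-arrival time ~ {mean_gap:.3f} (expected 1/RATE = {1 / RATE})")
```

Synthetic traces generated this way can stand in for unavailable CSP logs when evaluating scheduling or provisioning ideas in simulation.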
Here, an arrival time of zero in the set of tasks indicates a batch processing job. Sometimes, a set of tasks arriving
at different times are collected and processed as a batch. The downside of batch processing is that a small task that
arrives first may have to wait for a long period until a batch is formed and sufficient resources are available for the
entire batch. However, the advantage of this type of processing is that workload characterization can be performed
before dispatching the batch, and resources can be precisely predicted and allocated. In [51], the authors built a
statistical model based on a mixed reality framework called MR-Leo to understand resource utilization and QoS
for real-time applications in an edge computing environment. The behavioral patterns of tasks in the CDC were
modeled using the CloudSim framework in [49]. The authors extensively analyzed various features over millions
of tasks submitted by hundreds of users. Recently, owing to the heterogeneity in cloud-powered resource-offering
environments, workload modeling has become more challenging. For example, researchers in [53] proposed a
hierarchical stochastic modeling approach to build a workload model in a heterogeneous IaaS environment for
workloads that demand more vCPUs. To identify the performance of the system for data-intensive workloads,
another study [54] proposed a proactive machine-learning-based methodology for resource provisioning called
ProMLB, which builds performance models for cross-platform applications in a virtualized environment. The
topological structure of a graph model was used in [55] to explain the interconnection among micro-services
in terms of the computing resource demand and latency of IoT applications.
RQ 2: Can we build a model to estimate the task latency, completion time, and cost of resources for heterogeneous
IoT workloads?
Estimating these QoS parameters can help the CSP to determine the adequate amount of resources that can be
allocated for IoT workloads at a reasonable cost. It will also help in choosing the appropriate target physical devices
in the heterogeneous mist/fog environment to launch the virtual resources and run the tasks. Ultimately, resource
over- and under-provisioning can be controlled across cloud-powered platforms. To achieve such elastic resource
management at the network edge for IoT applications, the authors in [56] forecasted the cost and deadline for
homogeneous workloads to adjust resource allocation. However, estimating the QoS of heterogeneous workloads
is highly complex and challenging because the target environment is also heterogeneous. This could be a potential
research direction for further exploration in the field of cloud-IoT integration.

4 RESOURCE MANAGEMENT
Based on the elaborate discussion in the previous sections, a typical IoT workload may overlap in more than
one category. To improve the performance of these workloads in terms of QoS parameters such as hired virtual
resource utilization, latency, and cost, CSPs must efficiently handle resource management tasks (resource discovery,


provisioning, scheduling, allocation, and load balancing) across cloud-powered platforms (mist, fog, and cloud)
for IoT applications. To provide a glimpse of all these tasks, the authors in [57] summarized some of the CSP
responsibilities in managing resources in the cloud, fog, cloudlet, mist, and edge environments for various
IoT applications. Similarly, container-based resource management in the cloud and fog environments for IoT
applications was discussed in [18]. The authors proposed a container fog-node-based cloud orchestration for IoT
networks called CF-CloudOrch that uses software-defined networks (SDNs), which centrally manage networking
functions, for orchestrating containers on fog nodes, in addition to VMs in the CDC, to improve performance and
security. To envisage resource formation in the IoT environment and computation platform, the authors in [58]
elaborately demonstrated the different fog computing resource management architectures, infrastructures, and
types of algorithms for improving various QoS parameters that were published between 2013 and 2018. Reference [59]
summarized various fog resource management approaches considering lightweight resources, and Reference [60]
comprehensively presented a critical review of SDN integration into the fog environment for handling networking-
related challenges in the IoT. An ElasticFog framework was proposed in [61] to manage a set of containers on
fog nodes based on network traffic. It proportionally allocated resources to containers considering the network
traffic and orchestrated them in real time. The service collision problem in fog resource management between
cyber-physical systems and the cloud was addressed in [62].
A comprehensive survey on data center resource management to optimize resource utilization was presented
in [63]. The authors categorized the topic into three sections: VM-, server-, and application-based resource
management. When services for IoT applications are hosted in multiple CSPs owing to the diverse locations
of IoT applications, resource management in the cloud federation [64] must abide by the SLA. To obtain optimal
solutions in such a complex environment, various evolutionary algorithms from the user and CSP perspectives
were discussed in [65]. As IoT applications are always resource-hungry and have real-time QoS constraints,
[66] proposed a context-aware unified resource management scheme to oversee and manage loads across fog
nodes based on the contexts of the user requests, resource availability, and corresponding deadlines. Although
the aforementioned frameworks aided in resource management, the expansion of the IoT environment and
heterogeneous cloud-powered resource-offering models increase resource management complexity. Therefore,
in this section, we provide a detailed description of different resource management tasks, as shown in Figure 3,
along with suitable literature, followed by possible RQ(s) from a cloud perspective.

4.1 Resource discovery


After the workload is submitted to the CSP, the resource discovery task identifies suitable virtual resources for IoT
applications based on the workload type, resource demand, and QoS parameters mentioned in the SLA. Initially,
the workloads are characterized based on distinct features. Then, the workloads are modeled to simulate
the workflow by considering different combinations of resource types, targeting the QoS parameters mentioned
in the SLA. Finally, suitable resource types (VMs/containers) [19] and other I/O resources are identified for the
intended workloads. Generally, it is difficult to perform a detailed study on individual CSPs about the service types
and other applications offered for end-to-end IoT implementation. To bypass this time-consuming exploration,
service brokers (SBs) interact with various CSPs and update users on their service delivery options and features.
When resource requests are submitted to the SB, a suitable CSP is suggested with a detailed service plan.
In general, CSPs and SBs perform workload modeling to predict the type of resource plan that is suitable for
applications. Users can compare CSPs based on various criteria and select suitable CSPs for short/long-term
contracts. In addition to the cloud service fee, users are expected to pay broker commissions to consume this
intermediate service.
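The broker's matching step can be sketched as filtering each provider's catalog against the request and picking the cheapest feasible plan; the CSP names, plans, and prices below are invented for illustration, and a real SB would query each CSP's pricing interface instead:

```python
# Hypothetical catalogs of two providers: each offer lists a plan name,
# its capacity, and an hourly price.
catalogs = {
    "csp_a": [{"plan": "small", "vcpu": 2, "mem_gb": 4, "price": 0.05},
              {"plan": "large", "vcpu": 8, "mem_gb": 32, "price": 0.40}],
    "csp_b": [{"plan": "medium", "vcpu": 4, "mem_gb": 16, "price": 0.17}],
}

def recommend(request):
    """Return (csp, plan) of the cheapest offering meeting the request."""
    feasible = [(o["price"], csp, o["plan"])
                for csp, offers in catalogs.items()
                for o in offers
                if o["vcpu"] >= request["vcpu"] and o["mem_gb"] >= request["mem_gb"]]
    if not feasible:
        return None  # no provider can satisfy the request
    _, csp, plan = min(feasible)
    return csp, plan

print(recommend({"vcpu": 4, "mem_gb": 8}))  # ('csp_b', 'medium')
```

Real brokers would weigh further criteria from the SLA (latency, region, availability) rather than price alone, but the filter-then-rank structure stays the same.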
An adaptive recommendation system was proposed in [19] to determine the type of VM and its configuration for
managing IoT workloads on edge devices in addition to cloud resources. First, the IoT workloads are characterized


using the information obtained from the virtualization layer. Subsequently, the extreme gradient boosting
algorithm was used to predict the virtual resources based on workload characterization. This study achieved
a 15% improvement in prediction accuracy when compared to other state-of-the-art methods. The authors
of [50] proposed a benchmarking methodology to identify VMs that can improve the overall performance of
applications in a cloud environment. Users were instructed to provide a range of values for vCores, memory,
and storage for VMs. These attributes were then mapped to a set of VM types to identify one that improves
resource utilization. Resource prediction in the cloud, fog, and edge nodes was performed in [67]. The authors
proposed an architecture that includes cloud, fog, and edge resource managers to collaborate and orchestrate
containers/VMs across distributed environments deployed on top of the open-source cloud deployment software
called OpenStack. CloudLaunch [68], a resource provisioner for applications submitted to the cloud, identifies
suitable resources in multiple cloud environments to preserve all QoS parameters mentioned in the SLA. It
logically combines resources from both hybrid and federated clouds to facilitate region-based sub-services to
improve the QoS parameters.
RQ 3: Is it possible to build a recommender system to suggest serverless execution within containers and/or VMs
for heterogeneous workloads on cloud-influenced resource offering models?
Based on the literature discussed above, virtual resources, such as containers and VMs, are commonly
recommended for IoT workloads in fog and cloud environments. For certain IoT workloads that do not require
dedicated virtual resources, serverless computing, which shares the underlying virtual resources and the program,
would be more suitable. A systematic and comprehensive literature review was provided in [20] to understand
the advantages of serverless computing for IoT workloads. To launch a workload on a serverless platform, the
available resources in the containers and VMs (running in a mist/fog/cloud environment) must be predicted.
However, this involves the challenge of determining the completion time of workloads that are already running
in these virtual resources.

4.2 Resource provisioning


Offering the discovered virtual resources to the on-demand IoT applications in mist, fog, and cloud environments
is called resource provisioning. CSPs offer hardware, software, applications, and almost anything else associated
with computing as a service. Once IoT workloads are characterized and analyzed to predict
their resource requirements, virtual resources are provided by the CSP based on different schemes. In particular,
VMs are offered on different contracts, namely spot (opportunistic) instances, on-demand instances, reserved
instances, and advanced reservations. These instances are discussed subsequently, and a brief comparison is
provided in Table 1.
In general, a spawned copy of a VM or container is called an instance. Spawned instances are then launched in
devices located in the cloud, fog, and mist environments. CSPs offer these virtual instances under the following
categories based on the outcome of workload characterization:
• Resource types (in Azure [69]): general-purpose, compute-optimized, memory-optimized, and storage-
optimized resources, and GPUs for high-performance computing.
• Different service plans (VM flavors in Azure [70]): extra small, small, medium, large, and extra-large.
Users are charged based on the VM type, service plan, and usage period. Currently, GPUs are also offered as a
service in addition to the service plan for big data processing and deep learning applications of IoT workloads.
There are three ways in which a typical big data processing tool, such as Hadoop or Spark, can be offered as a
service.
• Hire multiple VMs and manually set up Hadoop/Spark
• Hadoop/Spark is offered as a service

ACM Comput. Surv.


14 • Rathinaraja et al.

Table 1. Characteristics of different instance types

Characteristics            Spot instance   On-demand instance   Reserved instance   Advanced reservation
Price                      low             medium               high                high
QoS guarantee              no              yes                  yes                 yes
Under/over provisioning    no              no                   yes                 yes
Resource-sharing type      time            time/space           space               space
Usage plan                 short-term      medium-term          long-term           long-term

• Pay on a per-job basis (serverless computing)


In the first approach, VMs of specific types and plans are initially hired. Then, distributed data processing tools,
such as Hadoop, Spark, and Storm, are manually installed based on the nature of the IoT workload. Once the
platform is ready, the service endpoints are linked to the IoT application environment to collect and process data.
In the second approach, upon submitting a batch of jobs, the CSP or SB can predict a suitable service plan and
the number of virtual resources to deploy in the virtual cluster. Then, the required big data tools are orchestrated
to install on the virtual cluster. These two approaches deploy dedicated virtual clusters for each user. Virtual
resources are hosted in the edge/mist/fog devices to provide a quick response. For instance, a hut architecture
[71] was proposed to perform complex event processing on stream data for smart city applications. However,
these two approaches are preferably hosted by the CDC to crunch a large amount of data offline.
The third approach is based on a pay-per-job basis [72], in which the Spark/Hadoop cluster is already up and
running. The data from the IoT environment are streamed into virtual resources and then processed with the
user-defined job. The service price is determined based on the amount of storage consumed and the number of jobs
launched (or vCPU time consumed), and workloads are prioritized for execution based on the QoS parameters.
For example, Google Colab charges users based on the amount of resources (memory and virtual GPU cores)
consumed by each job. However, certain QoS parameters are not guaranteed, as the cluster is shared by multiple
users/workloads running different types of jobs. This type of service significantly abstracts the complexities and
offers more flexibility to the users.
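A minimal sketch of such pay-per-job billing; the rate constants are made up for illustration and are not any CSP's actual prices:

```python
# Hypothetical rates: storage is billed per GB-month, compute per vCPU-second.
STORAGE_RATE = 0.02    # $ per GB-month
VCPU_RATE = 0.000012   # $ per vCPU-second

def job_cost(storage_gb, vcpu_seconds):
    """Bill one job for the storage it holds and the vCPU time it consumed."""
    return storage_gb * STORAGE_RATE + vcpu_seconds * VCPU_RATE

# Two jobs in a batch: (GB stored, vCPU-seconds consumed).
batch = [(5, 120_000), (5, 40_000)]
total = sum(job_cost(s, v) for s, v in batch)
print(f"monthly bill: ${total:.2f}")  # monthly bill: $2.12
```

The user never sees instance sizes or counts in this scheme; only storage and compute consumption appear on the bill, which is what allows the CSP to multiplex many users' jobs onto one shared cluster.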
To provide virtual resources at flexible prices and improve resource utilization, different types of virtual
instances are provided, as described below.
Spot (opportunistic) instance - Creating and launching virtual resources consumes time and energy. Deleting
such resources after a short period wastes the time and energy invested in creating them. Instead, these resources
can be swapped to other applications on demand. This is called a spot instance [73], which
is a short-term service plan that allows users to bid on unused resources at a lower price, typically at discounted
rates of up to 90% [74], as compared to regular virtual instances. However, spot instances do not comply with the
SLA to guarantee QoS. As the price is cheaper, these types of instances may be hosted on random devices and
suspended at any time to reclaim physical resources for regular on-demand/reserved instances. Therefore, spot
instances may become unavailable at any time without prior notice to the user. Hence, this type of instance is only
suitable for time-insensitive applications such as training machine learning algorithms, scientific computation,
and batch processing. Another disadvantage of spot instances is that they cannot be scaled out on demand and
cannot use the underlying physical resources on a space-sharing basis. Moreover, the price of spot instances
periodically changes based on demand. Owing to these disadvantages, they are less suitable for IoT applications,
which are mostly latency-sensitive.
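The cost trade-off for a re-startable batch job can be estimated with back-of-the-envelope arithmetic; all numbers below (prices, interruption count, checkpoint interval) are hypothetical, and the model assumes the job checkpoints periodically and re-runs only the work lost since the last checkpoint:

```python
on_demand_price = 1.00  # $ per hour (hypothetical)
spot_discount = 0.90    # up to 90% cheaper, per the discussion above
spot_price = on_demand_price * (1 - spot_discount)

job_hours = 10
checkpoint_interval = 1.0   # hours between checkpoints
expected_reclaims = 3       # expected interruptions during the job

# Each reclaim wastes, on average, half a checkpoint interval of redone work.
spot_hours = job_hours + expected_reclaims * checkpoint_interval / 2
spot_cost = spot_hours * spot_price
on_demand_cost = job_hours * on_demand_price
print(f"spot ~ ${spot_cost:.2f} vs. on-demand ${on_demand_cost:.2f}")
```

Even with the re-run overhead, the spot plan is far cheaper under these assumptions, which is why it suits time-insensitive batch workloads; the same arithmetic turns unfavorable when reclaims are frequent or checkpointing is expensive, as for latency-sensitive IoT jobs.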
On-demand instance - This involves an intermediate-term service plan that allows users to pay on a usage
basis. When an application demands additional resources to handle an increasing workload, new instances are
spawned and attached to existing services. Unlike spot instances, the price of on-demand instances does not

instantly vary, and the QoS promised for on-demand instances is guaranteed, as mentioned in the SLA. These types
of instances are suitable for all types of IoT workloads because they are scalable.
Reserved instance - In this service plan, users must pay for certain instances before acquiring control. It is
significantly cheaper than the on-demand instance, as payments are made in advance, irrespective of whether the
resources are used. These types of instances are highly suitable for big-data-related IoT applications in the cloud.
Advanced reservation - This is a long-term plan, in which a part of the CDC is reserved/contracted for the long
term, where virtual instances of other users are not hosted. This plan offers a superior option for time-sensitive
applications. For example, multimedia-streaming-based IoT applications require seamless end-to-end services for
flawless video data delivery. A similar scheme can be used for IoT applications when cloudlets and mini
data centers are a part of the mist/fog computing environment.
Several studies have been conducted to determine the type of virtual resources based on the nature of workloads.
ProMLB was proposed in [54] for resource provisioning. It initially performed workload characterization to
capture the application behavior and its performance. Then, a bounded knapsack algorithm was used for the
resource allocation of different VM types with minimal processing overhead. In certain cases, predicting the
workload type [75] helps in choosing the appropriate type of VM, as discussed in [63]. Similarly, a Bayesian
learning technique [76] was used to predict the type of workload processed in a fog-cloud environment. Based on
this prediction, the resources of containers and VMs were adjusted. However, for a static workload, the correct
VM size and type can be easily predicted. To identify the precise resource type for workloads, researchers [77]
performed pattern mining on workloads to predict the number of resources based on the latest occurrence. Based
on this literature, we set up the following RQs for further research.
RQ 4: Is it possible to predict spot instances and provide them on a cloud-influenced platform to minimize service
costs?
Spot instances are unpredictable. Nevertheless, these discounted and inexpensive resources are useful for IoT
applications as they offer low-cost services. However, ensuring service reliability and availability remains a key
research challenge.
RQ 5: Can we develop an algorithm to predict the spot instance lifetime?
The lifetime of spot instances is unknown. This uncertainty makes the usefulness of spot instances questionable
for IoT applications. Therefore, developing an algorithm that predicts the lifetime of spot instances is important
for improving service reliability on cloud-influenced platforms. As spot instances are generally available in the
cloud, providing them in a cloud-influenced environment that contains servers, cloudlets, mini-servers, etc.,
would benefit IoT applications.

4.3 Resource and workload scheduling


Scheduling plays a vital role in improving several QoS parameters for workload and resource management in
cloud-influenced platforms. As scheduling decisions affect multiple QoS parameters, such as latency, makespan,
cost, and resource utilization, the scheduling-related research problems must be understood and addressed to
improve service and application performance. In general, there are two scheduling levels: platform- and
application-level.
• Platform-level scheduling - It is the task of mapping [78] virtual resources (VMs/containers) to the physical
resources in a cloud-influenced platform. This is performed in two phases in a highly automated workload
management system. First, based on workload characterization, the appropriate VM/container type and its
service plan are identified. Then, they are mapped to the devices in the physical layers (cloud, fog, and mist
environments).


• Application-level scheduling - It is the task of mapping [33], [38] workloads (jobs/tasks) to virtual/physical
resources. It is also termed workload placement.
The growing scale of heterogeneous resources in cloud-influenced environments and the increasing complexity
of workload specifications aggravate scheduling challenges and the trade-off between application-related QoS
and the objectives of the CSP. Moreover, the scheduling decision time also affects the performance of applications.
Generally, a scheduling decision is made at different times.
• Static scheduling - The scheduling decision is made at the time of resource launch and workload submission.
This decision does not change until the completion of the workload and service period. This type of
scheduling is highly important for platform-level scheduling, in which virtual resources are held for a
long time once they are mapped to the physical devices. However, it is also applicable to application-level
scheduling when big-data-related jobs are submitted.
• Dynamic scheduling - The initial scheduling decision is changed during execution to maintain certain QoS
parameters, as data and tasks from IoT applications are periodic and dynamic. This is highly suitable for
application-level scheduling.
To illustrate the scheduling types more concretely, an example scenario for static, application-level
scheduling is discussed in detail below. This job scheduling problem is based on the discussion in [17], in which a
set of n jobs/tasks is mapped to a set of m VMs. The objective is to minimize the makespan subject to a set of
constraints.
The problem description is given below.
• There are four jobs in a batch.
• There are seven processors available for scheduling.
• Each job requests three processors.
• The latencies of the jobs are 20, 15, 30, and 10 time units, respectively.
• Job execution begins only after all the requested processors are obtained.
• The makespan should be minimized.
The parameters used are
• Processors: {P1, P2, ..., P7}
• Jobs: {J1, J2, J3, J4}
• r_j: number of processors requested by job J_j
• latency(J_j(r_j)): latency of job J_j with the required processors
The problem is formulated as given below.
• Problem definition: mapping J_j to P_i
• Objective function: min(makespan) = Σ_{j=1}^{4} latency(J_j(r_j))
• Subject to the constraint: Σ_{j=1}^{4} r_j ≤ P, where P is the number of processors
Table 2 displays the order of execution of jobs in different schedules (S_k), resulting in different makespans. The
Gantt chart is useful for visualizing these schedules, as shown in Figure 4. In the first schedule (S1), J1 and J2 are
initially assigned to three processors each for execution. After 15 time units, J2 releases its resources, which are then
allocated to J3. Once J1 is complete, its processors are assigned to J4. Thus, the makespan for S1 is 45 time units.
After exploring multiple schedules, S2 was determined to be superior to the other schedules as it exhibited the least
makespan.
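The makespans in Table 2 can be reproduced with a small list-scheduling simulator; this is a sketch of this specific example (7 processors, 3 per job), not the general formulation in [17]:

```python
import heapq

def makespan(order, latency, need=3, total=7):
    """Start jobs in the given priority order as soon as `need`
    processors are free; return the finish time of the last job."""
    free, now, running = total, 0, []  # running: (finish_time, procs) min-heap
    for job in order:
        while free < need:             # wait for the earliest finisher
            now, procs = heapq.heappop(running)
            free += procs
        free -= need
        heapq.heappush(running, (now + latency[job], need))
    return max(f for f, _ in running)

latency = {"J1": 20, "J2": 15, "J3": 30, "J4": 10}
print(makespan(["J1", "J2", "J3", "J4"], latency))  # S1 -> 45
print(makespan(["J2", "J3", "J1", "J4"], latency))  # S2 -> 40
print(makespan(["J2", "J4", "J1", "J3"], latency))  # S3 -> 45
```

Because each job needs 3 of the 7 processors, at most two jobs run concurrently; enumerating start orders under that constraint confirms S2's makespan of 40 as the best of the three schedules.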
The application- and platform-level scheduling performances in cloud-IoT environments are evaluated based
on different QoS parameters, such as response time, latency, throughput, resource utilization, cost, deadline,
energy efficiency, elasticity, scalability, availability, security, and SLA violation. Achieving one or more QoS


parameters is always a nondeterministic polynomial time (NP)-hard problem; however, the problem can be
mapped to an NP-complete problem with user-specified constraint(s) to obtain a currently favorable near-optimal
solution using meta-heuristic optimization algorithms. In this section, we review the literature
based on two important research aspects that are currently being studied:
(1) Handling and exploiting the heterogeneity that exists at different levels (hardware, virtual resources, perfor-
mance, workload) requires constant research, as it is growing due to technological advancements.
(2) Minimizing energy consumption during execution is another concern, as it is a global issue for tackling
environmental challenges.
Table 3 summarizes some of the recent works proposed to handle heterogeneity and improve energy efficiency,
with a major focus on application-level scheduling on a cloud-influenced platform based on different QoS
parameters.
4.3.1 Platform-level scheduling. A set of VMs/containers in the virtual cluster for an application is mapped to a
set of physical resources in the cloud and fog/mist environments. This scheduling problem was modeled as a
mixed-integer linear programming problem in [79] to reduce power and network bandwidth consumption
and improve overall resource utilization in the virtual cluster. Priority and multi-dimensional heterogeneous resources
are included in the problem formulation. The authors reduced network consumption, power consumption,
and resource wastage by up to 29%, 18%, and 68%, respectively. The application of deep learning algorithms to
model complex non-linear patterns in the scheduling problem is also increasing. Deep reinforcement-learning-
based resource scheduling was elaborately surveyed in [80] to improve various QoS parameters while using
meta-heuristic optimization algorithms.
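As a toy illustration of the platform-level mapping problem (a greedy heuristic, not the MILP of [79]), placing VMs onto physical hosts can be treated as bin packing on a single dimension; the VM sizes and host capacity below are hypothetical:

```python
def first_fit_decreasing(vm_vcpus, host_capacity):
    """Place VMs (by vCPU demand) onto as few equal-capacity hosts as the
    heuristic finds: largest VMs first, each on the first host that fits."""
    hosts = []       # remaining capacity per powered-on host
    placement = {}   # vm name -> host index
    for vm, demand in sorted(vm_vcpus.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if free >= demand:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:        # no open host fits: power on a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

vms = {"vm1": 8, "vm2": 4, "vm3": 4, "vm4": 2, "vm5": 6}
placement, n_hosts = first_fit_decreasing(vms, host_capacity=12)
print(n_hosts)  # 2
```

Consolidating onto fewer hosts is what drives the power and resource-wastage reductions reported above; real formulations add further dimensions (memory, bandwidth) and priorities, which is why exact solvers or meta-heuristics are used instead of this one-dimensional greedy pass.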
4.3.2 Application-level scheduling. Application-level scheduling assigns jobs/tasks to the right virtual resources
(VMs/containers) and is generally referred to as job or task scheduling. It is also called workflow scheduling if

Table 2. Different schedules (combinations of jobs)

                    Jobs             J1    J2    J3    J4    Makespan
                    Service demand   20    15    30    10
    Schedules (S)   S1                1     1     2     3      45
                    S2                2     1     1     3      40
                    S3                2     1     3     1      45

Fig. 4. Gantt chart for job scheduling
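The makespan column in Table 2 reflects the finishing time of the busiest VM under each job-to-VM assignment. A minimal sketch of this computation, assuming identical VM speeds (the table's exact figures may additionally reflect heterogeneous VM performance), is:

```python
def makespan(service_demand, schedule):
    """Finishing time of the busiest VM, assuming identical VM
    speeds: sum the service demands assigned to each VM and take
    the maximum over all VMs."""
    load = {}
    for job, vm in schedule.items():
        load[vm] = load.get(vm, 0) + service_demand[job]
    return max(load.values())
```

For example, with the service demands of Table 2 and the assignment J1→VM1, J2→VM1, J3→VM2, J4→VM3, VM1 carries 35 units of work and therefore determines the makespan under this equal-speed assumption.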

ACM Comput. Surv.


18 • Rathinaraja et al.

Table 3. Literature review on scheduling strategies

Article   Platform   Application   Energy   Heterogeneity   Environment   QoS
(scheduling type: Platform / Application; scheduling criteria: Energy / Heterogeneity)
[66] ✓ Fog Deadline
[79] ✓ ✓ ✓ Cloud Power, network traffic, and resource utilization
[99] ✓ ✓ Fog Latency and network usage
[100] ✓ ✓ Cloud, fog Resource utilization and service delay
[90] ✓ ✓ Cloud Resource utilization, power, and makespan
[95] ✓ Cloud Makespan
[80] ✓ Cloud Makespan
[98] ✓ Cloud Response time
[103] ✓ ✓ Cloud, fog Makespan, monetary, and energy costs
[87] ✓ Cloud, fog Deadline and energy
[88] ✓ Fog Resource utilization and average response time
[89] ✓ Cloud, fog Makespan and throughput
[84] ✓ Fog Makespan and scheduling length ratio
[85] ✓ ✓ Cloud, fog Latency and energy consumption
[91] ✓ ✓ Cloud, fog Service time and resource utilization
[92] ✓ ✓ Cloud, fog Energy
[93] ✓ ✓ Cloud, fog Makespan and resource utilization
[101] ✓ ✓ Fog Energy and makespan
[102] ✓ ✓ Fog Deadline and response time
[94] ✓ Cloud Makespan and resource utilization
[86] ✓ ✓ Cloud, fog Makespan and energy
[96] ✓ ✓ Cloud, fog Response time, reliability, and financial cost
[97] ✓ ✓ Cloud, fog Latency and makespan

multiple tasks exist in the job, as described by a DAG. Typical IoT workloads query stream data, incorporate
machine/deep learning tasks to build a model, or perform data mining tasks to identify useful patterns. Reference
[37] provides a systematic review and summary of various scheduling strategies and mathematical formulations
of scheduling problems based on different QoS parameters. In contrast, another study [81] surveyed various
scheduling strategies based on different QoS parameters and performed a comparative analysis under different
categories. Some rule-based heuristic algorithms [82], such as minimum completion time, minimum execution
time, max-min, min-min, and expectation-maximization [83], are preferred for optimizing various QoS parameters.
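Among these rule-based heuristics, min-min repeatedly assigns the task with the globally smallest earliest completion time to the VM that achieves it. The sketch below illustrates the general heuristic (not the exact formulations in [82] or [83]); `exec_time[task][vm]` is an assumed estimated-execution-time table.

```python
def min_min(exec_time):
    """Min-min heuristic. exec_time[task][vm] estimates the execution
    time of each task on each VM. Repeatedly pick the (task, vm) pair
    with the smallest completion time given current VM ready times,
    assign it, and update that VM's ready time."""
    ready = {vm: 0.0 for row in exec_time.values() for vm in row}
    assignment = {}
    unscheduled = set(exec_time)
    while unscheduled:
        task, vm, finish = min(
            ((t, v, ready[v] + exec_time[t][v])
             for t in unscheduled for v in exec_time[t]),
            key=lambda x: x[2],
        )
        assignment[task] = vm
        ready[vm] = finish
        unscheduled.remove(task)
    return assignment, max(ready.values())
```

Min-min tends to favor short tasks first, which keeps VMs free for longer tasks later; max-min inverts this preference and can balance load better when a few very long tasks dominate.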
The workflow represented in the DAG structure (unlike a bag of tasks) was scheduled on fog nodes using a
list-based scheduling algorithm in [84] by considering the computation cost and deadline. The system operates in
three steps. First, it identifies independent tasks and sorts them sequentially. Then, a priority is assigned to each
task based on its number of successors. Finally, based on the priorities and computation weights, nodes are
selected for the assignment. A weighted-cost model for task scheduling was proposed in [85] to minimize the
latency and energy consumption while scheduling concurrent applications in an IoT environment. To improve
the performance of healthcare applications, a mobility-aware heuristic-based task scheduling and allocation
strategy was proposed in [86] to dynamically balance tasks based on the temporal/spatial movement of patients.
These heuristic-based algorithms are sufficient for a temporary solution; however, they do not guarantee an
optimal one. Therefore, meta-heuristic algorithms are preferred for approaching a globally optimal solution in a
short time. In [82], various heuristic and meta-heuristic algorithms used for platform- and application-level
scheduling problems were systematically surveyed. The authors provide useful suggestions and recommendations
regarding the simulation of scheduling strategies using simulators such as CloudSim. To meet deadlines,
preserve priority constraints in IoT applications, and minimize energy consumption, a laxity and ant colony system
scheduling algorithm was employed in [87]. To optimize resource utilization and average response time, a novel
bio-inspired hybrid algorithm that combines modified particle swarm optimization and cat swarm optimization
was proposed in [88] and implemented using the iFogSim simulator. An artificial ecosystem-based optimization
[89] was used for task scheduling in a cloud-fog environment to improve the makespan and throughput. The
proposed algorithm was then compared with standard optimization techniques. Reference [90] introduced a
power-aware scheduling algorithm to assign tasks to suitable VMs: by calculating a weight for each VM
using an optimization algorithm, the VMs were grouped into several categories that attract matching tasks. In [91], to
identify the appropriate fog nodes for mapping mobile applications in an IoT environment, nodes in a cloud-fog
environment were determined using spectral clustering based on computational performance and communication
delay.
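The priority step of the list-based approach in [84], ranking tasks by their number of successors, can be sketched as follows. Here we count transitive successors, a simplifying assumption; [84] additionally weighs computation costs when selecting nodes.

```python
def priorities_by_successors(dag):
    """dag maps each task to the list of its immediate successors.
    Each task's priority is its number of (transitive) successors:
    tasks with more downstream work are scheduled first."""
    def descendants(task, seen):
        for nxt in dag.get(task, []):
            if nxt not in seen:
                seen.add(nxt)
                descendants(nxt, seen)
        return seen
    prio = {t: len(descendants(t, set())) for t in dag}
    # Schedule order: tasks with more successors come first
    return sorted(dag, key=lambda t: -prio[t]), prio
```

A list scheduler would then walk this order and, for each task, pick the node with the best weighted cost among those meeting the deadline.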
To improve energy efficiency during task scheduling on cloud-fog nodes while maintaining deadlines, a multi-
objective optimization algorithm was proposed in [92] to optimize the trade-off between these two conflicting
objectives. The authors achieved up to 50% improvement in energy consumption when compared to state-of-the-
art methods. In a similar approach, multi-criteria intelligent scheduling was proposed in [93] using game theory.
Initially, a preference function was defined to calculate the rank of each node in a fog environment based on
latency and resource utilization. Subsequently, a node was chosen for the given task based on matching theory
and the preference function. To effectively handle the task scheduling of big data applications in a cloud environment,
a multi-objective optimization dragonfly algorithm was employed in [94] to improve makespan and resource
utilization. To exploit the local optimum, a hill-climbing algorithm was used to support the dragonfly algorithm.
Reference [95] introduced a method to minimize makespan by mapping the tasks in a workflow to VMs in the
CDC using cloud-aware provenance information to reprovision VMs of the same configuration at a later time for
similar tasks. For geographically distributed CDCs, achieving a trade-off between bandwidth consumption and
makespan is significantly challenging when scheduling workflows. To simultaneously improve both parameters,
a Pareto-based multi-objective optimization was used in [96]. For a similar case, a genetic algorithm was used
in [97]. Machine learning algorithms are extensively used for scheduling in cloud-based models. An intelligent
deep learning-based job scheduler was proposed in [98] to map jobs to VMs in a cloud environment. The authors
minimized the average job response time by up to 40.4% while guaranteeing 93% QoS.
Micro-service-based IoT application scheduling in a fog environment was demonstrated in [99] by exploiting
resource scalability during execution to improve latency and network bandwidth consumption. The authors used
iFogSim to simulate and observe improvements in latency and network bandwidth consumption of up to 85%.
In [66], a context-aware task scheduling strategy was proposed to map tasks to VMs in a fog environment. It considers
the user application request context, resource availability, and deadlines to schedule tasks. In heterogeneous
environments, multi-cloud multi-fog scheduling algorithms struggle to improve energy efficiency, service
delay, and resource utilization. In [100], the authors considered such a complex scenario for scheduling tasks from
various IoT applications to containers and VMs. They proposed two service models: long-term and temporary.
The temporary service model serves requests/tasks in real time, whereas the long-term service
model is based on a subscription-publishing strategy. A multi-objective estimation algorithm [101] was designed
to improve energy efficiency and makespan in a fog environment for IoT task-assignment problems. The authors
divided the workflow graph and enumerated task permutations to select the target processing element. A fog
server hierarchy was considered in [102] to distribute the data collected from an industrial IoT environment,
based on a queueing model that distinguishes high and low priority. Heterogeneous processing power was also
considered when subsequently offloading tasks. To schedule tasks on fog nodes that exist in a highly decentralized
environment and on nodes in the CDC, a novel two-tier bipartite graph with fuzzy clustering was proposed in [103].
The algorithm attempts to improve the makespan as well as monetary and energy costs.
Based on the motivation (to improve parameters such as resource utilization and makespan) obtained from an
extensive literature survey on application-level scheduling, we propose an interesting RQ that will benefit from
further research.
RQ 6: Can we dynamically determine the number of virtual resources (VM/container) required at each level for a
workflow represented in a DAG?
Generally, IoT workloads are submitted in the form of a workflow represented in a DAG, which comprises
multiple levels with a different number of tasks at each level. Once the virtual resources are allocated for workflow
execution, they remain operational regardless of whether they receive tasks. Therefore, we pay for resources that
are idle, especially in mist/fog environments where resources are scarce. To avoid this situation, before packing
a set of tasks into VMs/containers, we must calculate the amount of data, the resources, and the communication
cost for the tasks at each level. Based on this information and the task dependencies, we can predict the required
number and size of VMs/containers for each level in the workflow. This will
certainly minimize resource costs and improve virtual resource utilization for IoT workloads.
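A first step toward RQ 6 is to layer the DAG topologically: the number of tasks at each level bounds how many VMs/containers can do useful work in parallel at that stage. The sketch below computes only task counts per level; a fuller answer would also weigh data volume and communication cost, as noted above.

```python
def tasks_per_level(dag):
    """Layer a DAG topologically. dag maps each task to its list of
    immediate successors; each returned level contains tasks whose
    predecessors all lie in earlier levels. The width of a level
    bounds the number of usefully parallel virtual resources there."""
    indeg = {t: 0 for t in dag}
    for succs in dag.values():
        for s in succs:
            indeg[s] = indeg.get(s, 0) + 1
    levels = []
    frontier = sorted(t for t, d in indeg.items() if d == 0)
    while frontier:
        levels.append(frontier)
        nxt = []
        for t in frontier:
            for s in dag.get(t, []):
                indeg[s] -= 1
                if indeg[s] == 0:
                    nxt.append(s)
        frontier = sorted(nxt)
    return levels
```

A provisioner could then request, per level, at most as many containers as the level is wide, releasing the surplus between levels instead of keeping a fixed-size cluster alive for the whole workflow.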

4.4 Resource allocation


Once an optimal schedule is obtained, resources such as CPU, memory, disk capacity, and network bandwidth in
the physical resources available in the cloud-influenced platform are reserved for virtual resources based
on platform-level scheduling. Similarly, the resources for IoT workloads are allocated to the respective virtual
resources for application-level scheduling. This is generally called resource allocation, which is a potential subtask
in workload scheduling. Once resources are allocated, the service endpoint is connected to the users/applications.
Resource allocation also plays a vital role in improving application performance in cloud, fog, and mist environments. In
[104], a systematic survey of various resource-allocation strategies based on different QoS parameters in an
IoT-cloud environment is provided. Typically, there are two resource-allocation strategies (Figure 5): time- and
space-sharing.
Space-sharing strategy (exclusive mode) [39], [63] - Once a job/task/VM/container is assigned with the
underlying resources, for instance, 2 GB memory, dual-core (core1, core2), storage (hard disk drive - HDD1),
and network interface card (NIC1), these are not shared with any other job/task/VM/container until completion
or termination. This is called space-sharing mode or exclusive resource allocation and is primarily used for
time-sensitive IoT applications.
Time-sharing strategy [67], [105] - Allocated resources for a job/task/VM/container are shared with another
job/task/VM/container in a time-interleaved manner. This is called time-sharing resource allocation. In this mode,
the application performance is not guaranteed owing to frequent context-switching. However, the service fee in
the time-sharing model is relatively lower than that in the space-sharing model.
Regardless of the aforementioned sharing strategies, the inherent sharing capabilities of the underlying
hardware resources vary. For instance, memory and CPU are time- and space-shared. However, I/O resources
are always time-shared, as shown in Figure 5. The sharing capabilities of resources also vary according to the
service plan. For example, the VMs of different users can operate using time-sharing (for on-demand and spot
instances) or space-sharing (for reserved instances) strategies. For data-intensive tasks, space-sharing is preferred
to minimize the number of context switches, while compute-intensive applications prefer time-sharing.
Similar to scheduling, the resource-allocation decision is also either static or dynamic.
Static resource allocation [63] - does not change the resources allocated to a job until its completion.
Dynamic resource allocation [105] - changes the allocated resources during execution. Such jobs are called
malleable jobs.
Table 4 presents a quick summary of different resource-allocation strategies in a cloud-influenced environment
based on different QoS values. An energy-aware resource-allocation method was proposed in [39] to support the
workflows submitted across the CDC. It dynamically deployed VMs based on workflow executions. First, the
authors proposed a model to minimize the energy consumption of applications launched across cloud platforms.
Subsequently, an energy-aware resource-allocation method was used to reserve resources for workflows in a fog
environment. This is critical for supporting IoT applications with resource allocation in a large distributed environment
(mist-fog-cloud). In [63], the authors discussed resource prediction at the VM, application, and PM levels based on
workload demands. In the same environment, resources are managed based on horizontal/vertical task offloading
[67]. In vertical task offloading, if the resource demand increases, the task is dynamically allowed to
consume resources from the hosting virtual node. In a cloud environment, [105] proposed a workload placement algorithm
that allows tasks to obtain their required resources from each VM/container, including a private network. In a fog
environment, device-to-device communication for resource sharing to execute offloaded tasks using the ant
colony optimization algorithm and the earliest-deadline-first algorithm was proposed in [106]. Only nodes with adequate
energy and resources are used in this model.
A novel market-based resource-allocation method was proposed in [107], in which services and fog resources
act as buyers and divisible goods, respectively. The authors attempted to determine the market equilibrium attained when
every service receives the required resources. In [108], the authors proposed a ranking-based resource-allocation
method that provisions resources depending on the dynamic resource demand of the application.
Sometimes, predicting the workload demand can help estimate the required resources instead of assigning them
at random while offering IaaS in a cloud environment. Reference [109] applied this idea by considering the
task execution history. A Petri net-based resource-allocation strategy was proposed in [110] for precisely and

Fig. 5. Resource allocation strategies

Table 4. Literature review on resource allocation strategies

Article   Time   Space   Static   Dynamic   Environment   QoS
(resource-sharing type: Time / Space; allocation decision: Static / Dynamic)
[39] ✓ ✓ Cloud Cost, bandwidth, and latency
[63] ✓ ✓ Cloud, fog, and mist Resource utilization
[67] ✓ ✓ Cloud, fog, and mist Resource utilization
[105] ✓ ✓ Cloud Resource utilization
[106] ✓ ✓ Fog Cost and energy
[107] ✓ ✓ Fog Resource utilization and response time
[108] ✓ ✓ Fog Deadline and cost
[109] ✓ ✓ Cloud Latency
[110] ✓ Fog Price and deadline
[111] ✓ ✓ Cloud Resource utilization
[56] ✓ ✓ Edge and cloud Resource utilization


dynamically allocating resources for tasks in a fog environment. It considered the features of fog resources to
build a Petri model that improves price and task-finishing time. In [111], the heterogeneous resource demand in a
cloud environment was addressed. The authors proposed a method called skewness-avoidance multi-resource
allocation that considers the heterogeneous demand of tasks and allocates resources accordingly. This avoids over-
and under-provisioning for fluctuating workloads. An elastic resource management system was proposed in [56]
to dynamically allocate the resources requested on demand in an edge-cloud environment. The authors employed
a machine learning algorithm to predict task demand and thereby avoid over-/under-provisioning. Similarly, [112]
employed deep learning-based resource allocation by predicting the resource demand of tasks. In contrast, [113]
employed ensemble classification to predict the workload type for resource allocation, along with auto-scaling to
satisfy workload demands. Beyond a point, scale-up is not possible; consequently, the authors applied a load-balancing
technique to move tasks around the cluster. However, in certain cases, the service plan and VM
type are fixed beforehand.
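The history-based idea of [109], estimating a task's next resource demand from its execution history rather than assigning a default, can be sketched as a moving average with a safety headroom. The window and headroom values below are illustrative assumptions, not parameters taken from [109].

```python
def predict_allocation(usage_history, window=5, headroom=0.2):
    """Predict the resources (e.g., CPU cores or GB of memory) to
    allocate for a task's next run: the mean of its most recent
    usage samples, padded with a headroom factor to absorb spikes
    and avoid under-provisioning."""
    recent = usage_history[-window:]
    mean = sum(recent) / len(recent)
    return mean * (1 + headroom)
```

More elaborate predictors (the machine/deep learning models of [56] and [112]) replace the moving average, but the allocation logic around them is essentially the same: predict, pad, then provision.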
RQ 7: Is it possible to extend serverless computing to dew/mist/fog computing to minimize unnecessary virtual
resource creation for simple IoT workloads?
Based on the literature discussed above, improving resource utilization is a topic of interest because resources
are scarce in fog and mist environments. In general, even for simple workloads from an IoT environment, the
CSP creates either a VM or container where the tasks are placed for execution. Because serverless computing
can share resources at the job/function-level, the application of serverless computing in addition to resource
allocation for containers/VMs is an interesting area for further research, as there is no significant research on
this topic.

4.5 Workload and resource migration


When the allocated resource is insufficient for a task/virtual node and scaling up resources is impossible on
the current node, the task is moved to another device (node). This is known as workload migration. In a highly
distributed environment (mist, fog, and CDC), migrating workloads (jobs/tasks) and virtual resources (VMs and
containers) involves complex decisions to improve makespan, resource utilization, energy consumption, etc. Therefore,
workload and resource migration require better solutions in such a distributed environment.
Workload migration is performed for two reasons: load balancing and consolidation.
• Workload consolidation [114] - Jobs/tasks from the same or different workloads running in multiple virtual
nodes are consolidated into a smaller number of virtual nodes to minimize service costs and communication
delays.
• Workload balancing [115] - Jobs/tasks from heavily loaded virtual nodes are moved to lightly loaded
virtual nodes in the vicinity to improve the makespan and resource utilization. Various workload balancing
techniques for virtual nodes in a cloud environment are discussed in detail in [116] based on different
QoS requirements. Similarly, various workload balancing techniques for a cluster of VMs deployed in a
fog environment were systematically reviewed in [117]. Workload balancing is sometimes referred to as
computation offloading, which is presented and surveyed in detail in [118]. Ren et al. presented a detailed
study on computation offloading methods for emerging computing paradigms in [16] based on different
QoS parameters, such as minimizing energy consumption and delay.
Similar to workload migration, virtual resource migration is also performed for two reasons.
• Virtual resource consolidation - If only a few virtual nodes are operated in multiple PMs, they are consolidated
[119] to run in fewer PMs to minimize energy consumption.
• Virtual resource load balancing - Many virtual nodes from a heavily loaded PM are moved to lightly loaded
PMs to improve makespan and resource utilization.


Load balancing and consolidation for workloads and virtual resources are significantly important for improving
the QoS in the later stages of execution. After a scheduling decision is made, these are the only tasks that can
help improve application performance. They are especially crucial in mist/fog environments coupled
with a cloud environment. Extensive research has been conducted on the aforementioned four cases. Here, we
briefly discuss some of the load-balancing techniques at the platform (cloud, fog, mist) and application levels,
based on different QoS requirements. A Lyapunov-based optimization technique was used in [12] to determine the
number of VMs that must be migrated to the mobile edge (mist) nodes to ensure service continuity and save
energy. A fuzzy logic-based task offloading strategy was proposed in [120] using a multi-objective optimization
algorithm and an estimation of distribution algorithm to optimize the offloading strategy in a fog environment for
various IoT applications. Similarly, ant colony and particle swarm optimization algorithms were used in [115] to
balance the load during task execution in a fog environment for delay-sensitive IoT applications to minimize
communication cost and response time. Energy-aware load balancing of tasks in a fog environment was proposed
in [121] using the particle swarm optimization technique for smart factory applications. An SDN-based load
balancing technique was proposed in [122] to improve the response time by utilizing the nodes in fog and cloud
environments. Reference [123] proposed a queueing model that considers cloud and fog nodes for executing
offloaded compute-intensive tasks to improve energy efficiency. A summary of these studies is listed in Table 5.
We highlight some potential RQs that require further research in resource consolidation and load balancing when
the cloud service is extended to fog and mist resources.
RQ 8: How can we determine if a node is overloaded?
A node is considered overloaded in different cases. Consider PM1, which operates a set of virtual
nodes. When PM1 is unable to support its virtual nodes in maintaining the QoS, PM1 is denoted as overloaded. For
example, as given in [124], if the interference from a co-located VM is high for VM1 in PM1, then the latency of
the workloads running in VM1 increases. The interference is measured based on the I/O and CPU demands of the
virtual nodes running in PM1. This scenario also indicates a type of overloading. Hence, overloading generally
affects the QoS parameters mentioned in the SLA. A detailed systematic literature review on VM consolidation is
presented in [125] based on different QoS requirements. However, there is no universal threshold for overloading,
and the question becomes more complex owing to the heterogeneity of mist/fog environments and IoT workloads.
Therefore, determining co-located VM interference for load balancing is not an easy task and requires further
research.
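As noted above, no universal threshold exists; still, the simplest operationalization, common in practice and in simulators, flags a PM whose recent mean utilization crosses a static cut-off. The sketch below is illustrative only; interference-aware detection (as in [124]) would additionally need per-VM I/O and CPU demand signals.

```python
def is_overloaded(cpu_samples, threshold=0.8, window=3):
    """Flag a PM as overloaded if its mean CPU utilization over the
    last `window` samples exceeds `threshold`. A static threshold is
    only a starting point: co-located VM interference and hardware
    heterogeneity make any universal cut-off unreliable."""
    recent = cpu_samples[-window:]
    return sum(recent) / len(recent) > threshold
```

Averaging over a window rather than reacting to a single sample is a cheap way to avoid triggering migrations on transient spikes, which would otherwise contribute to the virtual node thrashing discussed in RQ 10.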
RQ 9: What are the criteria to choose the target node for consolidation?
Determining the target node to host migrated virtual resources involves some extra work. This is because
once a migrated virtual node is placed, it should not be frequently migrated. Therefore, the target node selection
[125] is based on various characteristics, such as the interference from the co-located virtual node, physical
node performance, and migration cost (time and bandwidth), as the deciding factors. Moreover, it is decided
based on the completion time of applications running in the virtual nodes that are migrated and their resource

Table 5. Literature review on Load balancing

Article   Virtual node   Workload   Environment   QoS
(load-balancing target: Virtual node / Workload)
[12] ✓ Mist Energy
[120] ✓ Fog Resource utilization
[119] ✓ Cloud Energy
[115] ✓ Fog Communication and response time
[121] ✓ Fog Energy
[122] ✓ Cloud and fog Response time
[123] ✓ Cloud and fog Energy


requirements. For example, if the virtual node's lifetime is short, there is no benefit from migration, as the node
will terminate soon anyway. Therefore, estimating the completion time of tasks running in virtual nodes before migration is
an interesting research topic.
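Target-node selection can be sketched as a weighted score over the criteria just listed: co-located interference, node performance, and migration cost, each normalized to [0, 1]. The weights and field names below are illustrative assumptions, not taken from [125].

```python
def choose_target(candidates, w_interf=0.4, w_perf=0.4, w_cost=0.2):
    """Rank candidate target PMs for a migrating virtual node.
    candidates maps a PM name to normalized [0, 1] metrics, where
    lower interference, higher performance, and lower migration cost
    are better. Returns the highest-scoring PM name."""
    def score(c):
        return (w_interf * (1 - c["interference"])
                + w_perf * c["performance"]
                + w_cost * (1 - c["migration_cost"]))
    return max(candidates, key=lambda name: score(candidates[name]))
```

A long-term scheduler would refine such a score with the expected remaining lifetime of the virtual node, per the observation above that short-lived nodes do not repay their migration cost.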
RQ 10: How many times can a virtual node be migrated?
Frequent migration of virtual nodes is called virtual node thrashing. Despite adding overhead to the application
performance, especially in a mist/fog/cloud environment, there is no limitation on the number of times a virtual
node can be migrated. However, it should not violate the QoS requirements of applications. For example, in
[126], virtual node migration was performed based on reusability. Therefore, scheduling decisions on resource
migration must be long-term and must avoid virtual node thrashing.
RQ 11: Is it possible to obtain a favorable solution in the trade-off between load balancing and consolidation?
Achieving load balancing and node consolidation simultaneously is a tricky task, as they are conflicting
objectives. However, they are sometimes tied together to target a certain QoS. For example, workload-aware
resource consolidation was proposed in [114] to minimize energy and service costs in cloud and edge computing
for IoT applications. Nevertheless, this remains a perennial problem that demands favorable solutions in distributed
environments such as mist, fog, and cloud.

4.6 Energy management


Energy management in mist-fog-cloud environments has attracted considerable research interest, as energy
is an expensive and crucial commodity today. Energy consumption affects the global environment owing to
carbon emissions and non-recyclable computer components. According to [127], the electricity consumption of
data centers is nearly 1% of the global electricity demand, and it contributes up to 0.3% of global CO2 emissions.
Therefore, energy-efficient servers and networking devices have been designed and manufactured.
Green computing - Green implies "environment-friendly." Green computing is "the study and practice of
designing, manufacturing, using, and disposing of computers and their associated subsystems (monitors, printers,
storage, networking devices) efficiently with minimal or no impact on the environment."
Green cloud computing [63] - It refers to the environmental benefits of efficiently managing resources in
the cloud and cloud-influenced platforms [128], such as minimizing the energy consumption of servers by
consolidating virtual servers onto a smaller number of physical devices, reducing carbon and gas emissions, and
recycling components.
Resource management algorithms must be sufficiently smart [119] to offer resources as a service without
spending more energy than necessary. For instance, scheduling, allocation, load balancing, and server consolidation play major
roles in offering energy-efficient resource services. In server consolidation, virtualization helps run all
virtual nodes on fewer PMs, leaving unused PMs in sleep mode. When turned on again, these previously unused PMs
do not consume much power to load the host OS and initialize other processes in the system. To analyze the
performance of IoT applications based on power consumption in the cloud, in [129], the IaaS (cluster of VMs)
hosted on PMs was modeled and its performance was measured against different QoS requirements. Another
possible way to save power is to run tasks at a lower speed; however, this increases job latency.
This applies to all devices in mist and fog environments, as discussed in the previous sections. Hence, energy
management in the cloud and cloud-related service platforms has become one of the primary research objectives
of the last decade.
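Much of this literature rests on a simple linear server power model: an idle floor plus a utilization-proportional dynamic part. The sketch below (with illustrative wattages) shows why consolidation saves energy: several lightly loaded PMs draw more power than one fully loaded PM carrying the same total work, provided idle PMs can be put to sleep.

```python
def power(util, p_idle=100.0, p_max=200.0):
    """Common linear server power model (watts): idle power plus a
    part proportional to CPU utilization in [0, 1]. The wattages here
    are illustrative, not measured values."""
    return p_idle + (p_max - p_idle) * util

def cluster_power(utils):
    """Total power of a cluster, assuming PMs at zero utilization are
    switched to sleep (~0 W). Consolidating the same total load onto
    fewer PMs therefore reduces the idle-power overhead."""
    return sum(power(u) for u in utils if u > 0)
```

Under this model, four PMs each at 25% utilization draw 500 W, while one PM at 100% carrying the same work draws 200 W; the 300 W gap is entirely idle-power overhead, which is what consolidation eliminates.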
Some significant research works on energy management in a cloud-IoT environment are discussed here. A
novel task-scheduling algorithm that relies on energy balancing to extend the life of wireless sensor networks is
proposed in [100]. The algorithm controls the transmission power of terminal devices and uses threshold values
to serve requests in real time. A power-aware scheduling algorithm for a cloud environment was proposed in
[90], which packs user requests into schedules of small length to minimize the energy consumption of the
underlying resources. Reference [119] presented a comprehensive survey of energy-efficient algorithms in a cloud
environment, different categories of virtual server consolidation techniques for minimizing energy consumption,
the challenges and limitations of each algorithm, and simulation tools for implementing energy-based concepts. In
general, scheduling and resource allocation algorithms play an important role in determining the performance of
applications based on their energy consumption. For instance, in [39], virtual nodes were dynamically deployed
for workflow executions on a cloud platform. A multi-objective optimization algorithm was used in [92] and [130]
for workflow scheduling to minimize completion time and energy. The authors showed that the proposed
algorithm reduced energy consumption by up to 50% when compared to existing approaches. Sometimes, tasks
are offloaded from fog nodes [123], [128] to minimize energy consumption and are hosted on other suitable fog or
cloud nodes. Energy-efficient workload allocation was proposed in [131] by modifying the computation frequency
of cloud and edge devices. This helps adjust energy consumption based on workload demand.
Reference [46] performed workload characterization to identify the appropriate combination of configuration
parameters such that energy consumption is optimized.
RQ 12: Can we schedule tasks layer-by-layer from the dew to the cloud to improve energy efficiency?
From the literature survey, we identified that limited studies have been conducted on energy efficiency in terms
of dew and mist devices, in addition to fog and cloud nodes. So, performing workload characterization and
scheduling IoT workloads layer-by-layer, from the nodes in a dew computing environment to the nodes in the
mist, fog, and cloud, could be a potential research direction in this topic of study.

4.7 Heterogeneity
Heterogeneity is the characteristic of containing elements in different configurations. In a cloud, this is inevitable
[65], as devices are upgraded with the latest technology, which causes massive hardware heterogeneity. Moreover,
software heterogeneity is also a factor that affects application performance. Some of the common hardware and
software heterogeneities that are possible in a distributed environment are listed below.
• Processors - CPUs, GPUs, field-programmable gate arrays
• Storage devices - HDD, solid-state drive, non-volatile memory express
• Storage architectures - storage area networks, network-attached storage, direct-attached storage
• Network - regular or high-speed network, InfiniBand
• Virtualization software - kernel-based VM, Xen, VMware, Docker
In general, a cloud is composed of different types of computing, storage, and network components to support a
wide range of workloads and user needs by offering different instance types with suitable pricing. A mix of large-
and small-scale servers supports different sizes and types of workloads. This is because most general-purpose
computing is performed using low-configuration servers. Such heterogeneity in the cloud is inevitable for the
following reasons:
• Node failure in the cloud and cloud-influenced platforms is relatively frequent and requires immediate
replacement.
• The CSP expands the cloud by adding or upgrading servers to support an increasing number of users and
more complex workloads.
In these cases, the CSP prefers the most advanced hardware (CPU, storage, and networking) currently
available in the market. Over time, this gradually leads to substantial heterogeneity that results in uneven
service performance, as older hardware remains in use until the end of its life alongside newly added advanced
hardware. Moreover, nodes in mist and fog environments are heterogeneous by nature [12] because they reside on

the Internet, where devices cannot be expected to be homogeneous. This leads to various levels of heterogeneity
[124] in cloud-IoT environments, as shown in Figure 6.

Fig. 6. Heterogeneity in cloud-IoT environments
• Hardware heterogeneity - PMs in clouds and nodes in a dew/mist/fog environment are configured in
different sizes.
• Virtual node (resource) heterogeneity [78] - The sizes and configurations of the VMs/containers allocated
for IoT applications differ.
• Performance heterogeneity - Interference from co-located virtual nodes and underlying heterogeneous
hardware causes varying performance [19] for the same task. This is also called dynamic performance.
• Workload heterogeneity - Owing to the flexibility of the programming model, IoT workloads vary and can
be configured to consume different resource capacities [53].
• Data heterogeneity - End devices in IoT applications generate different types of data (image, sound, video,
and text), especially in healthcare applications [12].
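To make the taxonomy concrete, the levels above can be sketched as a small data model. This is purely illustrative: the class and field names are ours, not from the surveyed works, and a real scheduler would track far richer descriptors than this capacity check.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PhysicalNode:
    """Hardware heterogeneity: nodes differ in size and components."""
    cpu_cores: int
    ram_gb: int
    storage: str        # e.g. "HDD", "SSD", "NVMe"

@dataclass(frozen=True)
class VirtualNode:
    """Resource heterogeneity: VMs/containers come in different flavors."""
    vcpus: int
    ram_gb: int
    host: PhysicalNode

@dataclass(frozen=True)
class Task:
    """Workload heterogeneity: tasks demand different resource capacities."""
    vcpus_needed: int
    ram_gb_needed: int
    data_type: str      # data heterogeneity: "image", "sound", "video", "text"

def fits(task: Task, vm: VirtualNode) -> bool:
    """A minimal placement check. Real schedulers also model performance
    heterogeneity (interference, hardware speed), not just capacity."""
    return task.vcpus_needed <= vm.vcpus and task.ram_gb_needed <= vm.ram_gb

host = PhysicalNode(cpu_cores=32, ram_gb=128, storage="NVMe")
small_vm = VirtualNode(vcpus=2, ram_gb=4, host=host)
big_task = Task(vcpus_needed=4, ram_gb_needed=8, data_type="video")
print(fits(big_task, small_vm))  # False: the flavor is too small
```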
Hardware, resource, and performance heterogeneities are inherent in the CDC when offering IaaS. In
contrast, allocating resources to algorithms (data mining, machine learning, and deep learning) is user-defined
owing to the flexibility of the distributed programming model, which causes workload heterogeneity. These
algorithms are also generally offered as Application-as-a-Service (AaaS) or Function-as-a-Service (FaaS). Once
the workloads are ready, the jobs/tasks are scheduled for execution by the scheduler at the CDC.
There is considerable research interest in exploiting these heterogeneities, as they are inevitable and have grown
on a massive scale due to the exponential growth in IoT devices for various applications. These heterogeneities
can be exploited only at the time of scheduling, allocation, and load balancing at the resource- and workload-level
to improve performance based on the desired QoS. Reference [132] discussed the heterogeneity in every aspect
of IoT end-to-end application development. In an Internet-based volunteer computing system, resources are
highly heterogeneous, which affects the performance of IoT applications in terms of computing power, data


transfer latency, and monetary cost. Reference [133] proposed two task scheduling algorithms, called "minimize
computation, communication and violation costs" and "minimize violation costs", to manage large-scale
heterogeneous resources for scheduling tasks and improve the QoS. Based on the performance of the workload on
different flavors of VMs, a suitable VM type was assigned for IoT applications in [19]. This minimizes the service
cost while consuming services across an edge-cloud environment. Based on the number of vCPUs required for
applications, [53] developed a performance analysis model that uses a normal distribution for heterogeneous
workloads. In [134], a genetic algorithm was used to achieve optimal load distribution on networked computing
platforms based on the characterized performance for handling large-scale workloads.
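As a rough illustration of such performance models, the sketch below draws per-flavor task runtimes from normal distributions, in the spirit of [53], and picks the flavor with the lowest expected monetary cost, loosely echoing the VM-type recommendation idea of [19]. All runtime and pricing figures are invented for the example.

```python
import random

# Illustrative per-flavor runtime models (mean, std in seconds) for one task
# type; models in the literature are fitted from real measurements.
FLAVOR_RUNTIME = {
    "small":  (120.0, 15.0),
    "medium": (60.0, 8.0),
    "large":  (35.0, 5.0),
}
# Hypothetical prices per second of compute.
FLAVOR_COST_PER_SEC = {"small": 0.01, "medium": 0.015, "large": 0.045}

def expected_cost(flavor: str, trials: int = 10_000, seed: int = 42) -> float:
    """Monte-Carlo estimate of the monetary cost of running one task."""
    rng = random.Random(seed)
    mu, sigma = FLAVOR_RUNTIME[flavor]
    total = 0.0
    for _ in range(trials):
        runtime = max(0.0, rng.gauss(mu, sigma))  # truncate negative samples
        total += runtime * FLAVOR_COST_PER_SEC[flavor]
    return total / trials

# Recommend the flavor with the lowest expected cost for this workload.
best = min(FLAVOR_RUNTIME, key=expected_cost)
print(best)
```

With these made-up numbers the mid-size flavor wins: it is faster than "small" by more than its price premium, while "large" buys speed at a disproportionate cost.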
To optimize the latency and energy in heterogeneous IoT devices in cloud and fog servers, an application
placement technique was proposed in [85]. The authors used a memetic algorithm to improve QoS using a
weighted-cost model. In contrast, a novel resource-allocation approach was presented in [111] to offer suitable
resources for the heterogeneous resource requirements of various applications in the IaaS cloud environment.
The authors devised a flexible virtual node offering method for heterogeneous workloads to exploit the maximum
performance of physical servers. When applications of multiple users are hosted together on a physical server,
they might interfere with each other in terms of resource consumption, resulting in performance degradation.
Therefore, a prediction-based interference-aware workload scheduling method was designed in [135] for
latency-sensitive applications to analyze and manage interference in the Xen hypervisor. Co-located VM
interference can degrade the performance of data-intensive applications in virtualized environments, and such
heterogeneous performance leads to varying job latencies and resource under-utilization. Reference [124]
presented a scheduler to deal with heterogeneous performance owing to co-located VM interference and
heterogeneous workloads. In this study, five different VM flavors were considered in an experimental set-up on
heterogeneous physical hardware to deal with heterogeneous workloads. To handle heterogeneous data by
routing it on a separate path based on its type, a novel five-layered architecture was proposed in [12]. This helps
allocate optimal network resources for specific data types to minimize latency and power consumption.
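A minimal sketch of such type-aware routing, in the spirit of the layered architecture of [12], is shown below; the path names, latency budgets, and payload format are hypothetical.

```python
# Hypothetical path table: each IoT data type is mapped to a network path
# tuned for its QoS needs (names and budgets are illustrative only).
PATHS = {
    "video": {"path": "high-bandwidth", "max_latency_ms": 150},
    "sound": {"path": "low-latency", "max_latency_ms": 50},
    "image": {"path": "high-bandwidth", "max_latency_ms": 500},
    "text":  {"path": "best-effort", "max_latency_ms": 2000},
}

def route(payload: dict) -> str:
    """Pick a network path from the payload's declared data type,
    falling back to best-effort for unknown types."""
    rule = PATHS.get(payload.get("type"), {"path": "best-effort"})
    return rule["path"]

print(route({"type": "sound", "bytes": 2048}))  # low-latency
```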
RQ 13: Is it possible to exploit the heterogeneous performance of devices in mist and fog environments?
According to the literature, resource heterogeneity is inevitable, especially among devices on the Internet, which
is the platform for mist and fog environments. Scheduling tasks that demand considerable I/O and/or CPU
performance is crucial, as their resource requirements are also highly heterogeneous, which makes resource
management even more challenging. To handle such scenarios, ant colony optimization was used in [136] to
identify the right combination of heterogeneous map and reduce tasks to be executed in a heterogeneous cloud
environment. The same approach can be applied in mist and fog environments, in addition to workload and
system performance characterization. Hence, such heterogeneities can be exploited with the inherent nature of
distributed processing tools, such as Hadoop and Spark, to develop algorithms using data mining, machine
learning, and deep learning to deal with the volume, velocity, and variety of big data.
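The flavor of ant colony optimization used for such placement problems can be sketched as follows. This is a toy version under strong assumptions: task runtimes on each node are known in advance and the objective is only makespan, whereas the algorithm in [136] additionally models map/reduce task structure and locality.

```python
import random

def aco_schedule(exec_time, ants=20, iters=50, rho=0.1, seed=1):
    """Toy ant colony optimization placing tasks on heterogeneous nodes.
    exec_time[t][n] is the (assumed known) runtime of task t on node n;
    the objective is to minimize the makespan."""
    rng = random.Random(seed)
    T, N = len(exec_time), len(exec_time[0])
    tau = [[1.0] * N for _ in range(T)]          # pheromone trails
    best_assign, best_makespan = None, float("inf")
    for _ in range(iters):
        for _ant in range(ants):
            loads = [0.0] * N
            assign = []
            for t in range(T):
                # Desirability: pheromone times a 1/runtime heuristic.
                w = [tau[t][n] / exec_time[t][n] for n in range(N)]
                n = rng.choices(range(N), weights=w)[0]
                assign.append(n)
                loads[n] += exec_time[t][n]
            makespan = max(loads)
            if makespan < best_makespan:
                best_assign, best_makespan = assign, makespan
        # Evaporate pheromone, then reinforce the best-so-far assignment.
        for t in range(T):
            for n in range(N):
                tau[t][n] *= (1.0 - rho)
        for t, n in enumerate(best_assign):
            tau[t][n] += 1.0 / best_makespan
    return best_assign, best_makespan

# Three tasks, two nodes; node 1 is uniformly faster.
times = [[4.0, 2.0], [4.0, 2.0], [8.0, 6.0]]
assign, makespan = aco_schedule(times)
```

The interesting behavior is that, rather than greedily sending every task to the fastest node (makespan 10 here), the colony learns to spread load across the heterogeneous nodes.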

5 CLOUD-IOT SIMULATION
Most researchers lack the opportunity to implement their ideas in a real IoT environment because of fiscal deficits
and a lack of expertise. Therefore, simulation of IoT, mist, fog, and cloud platforms is key to building a quick
prototype, rather than testing every model in a real environment. This also saves a significant amount of
time and money and helps in understanding the behavior of different approaches. In this section, various cloud
and cloud-IoT simulation tools for building prototypes and analyzing the performance of IoT applications are
briefly discussed.
In general, application performance is determined based on QoS parameters, such as the response time, latency,
makespan, throughput, scalability, cost, energy, elasticity, availability, reliability, and security, which are specified
in the SLA at the time of service registration with a CSP. Data for these QoS parameters are periodically logged


for various purposes, such as modeling benchmark functions [137] and analyzing and validating performance.
It also helps deine mathematical models to simulate real IoT application behavior and infer its performance.
Reference [138] profiled different virtualized applications using micro-benchmarks and modeled their resource
usage behavior. Performance analysis of stochastic models in hybrid cloud and fog environments is presented in
[139]. Well-defined and generalized benchmark programs are available to generate specific application log data
for such analyses. These models are available in various simulator toolkits for observing the values of the desired
QoS parameters.
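As a simple example of deriving QoS parameters from such logs, the snippet below computes makespan, throughput, and mean response time from a hypothetical completed-task log of the kind simulators and monitoring agents emit; the log format is our own invention.

```python
# Hypothetical execution log: (task id, submit time, finish time) in seconds.
LOG = [
    ("t1", 0.0, 4.0),
    ("t2", 0.0, 6.5),
    ("t3", 1.0, 5.0),
    ("t4", 2.0, 9.0),
]

def qos_summary(log):
    """Derive a few SLA-style QoS parameters from a completed-task log."""
    submits = [s for _, s, _ in log]
    finishes = [f for _, _, f in log]
    makespan = max(finishes) - min(submits)
    responses = [f - s for _, s, f in log]
    return {
        "makespan_s": makespan,
        "throughput_tasks_per_s": len(log) / makespan,
        "mean_response_s": sum(responses) / len(responses),
    }

summary = qos_summary(LOG)
print(summary)
```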
There are several tools for simulating ideas in cloud and cloud-fog-IoT environments. For the cloud environment,
[140] comprehensively presented a list (given below) of simulators for cloud and cloud service broker (CSB)
modeling, middleware monitoring, VM provisioning, energy-aware resource provisioning, and application
behavior based on different QoS requirements.
• Cloud modeling - CloudSim, GroudSim, CloudAnalyst, iCanCloud, Cloud2Sim, SimIC, DesktopCloudSim,
iFogSim, CloudSimScale, DFaaSCloud, PerficientCloudSim.
• Middleware supervision - SPECI, Crest, CloudSimSDN.
• VM provisioning - DCSim, CloudSched, SimGrid, VMPlaceS, DynamicCloudSim, ATAC4Cloud, Nutshell.
• Energy-aware provisioning - MDCSim, GreenCloud, DCworms, E-MC2, CloudReports, CloudNetSim++,
GDCSim, CloudSimDisk, DISSECT-CF.
• CSB modeling - SimGrid, Bazaar extension.
• Application modeling - NetworkCloudSim, SmartSim, WorkflowSim, EMUSIM, CloudExp, GloudSim,
CEPSim, BigDataSDNSim, IoTSim-Stream.
CloudSim and its related simulation tools are comprehensively described in [141]. Many simulators are available to
simulate fog and IoT environments in addition to clouds, such as ContainerCloudSim, CloudSimSDN, Cooja,
DockerSim, EdgeNetworkCloudSim, Edge-Fog Cloud, EdgeCloudSim, EmuFog, FogTorch, FogDirSim, FogNetSim++,
Google IoT Sim, Hector, iFogSim, iFogCloud, iCanCloud, iCloudFog, iFogSimWithDataPlacement, IoTNetSim,
IBM BlueMax, OPNET, Mobile fog, MobFogSim, MobIoTSim, MockFog, MyiFogSim, PureEdgeSim, piFogBed,
PFogSim, RECAP, SatEdgeSim, SimpleIoTSimulator, StormOnEdge/SpanEdge, xFogSim, and YAFS. Reference [106]
presented a summary of some of these simulators. Specifically, FogNetSim++ is discussed in detail by comparing it
with other simulators in [142]. Similarly, iFogSim is detailed in [143] for various resource management activities
in IoT applications.

6 CONCLUSION
Cloud and cloud-influenced technologies, such as fog, mist, and dew computing, have become the primary
platforms for various IoT applications. To improve the performance of IoT applications, it is essential to understand
the different types of workloads and resource management tasks on these heterogeneous platforms. In this article,
we presented a brief description, literature survey, and potential RQs on a wide range of topics, such as workload
modeling and various resource management tasks (provisioning, scheduling, allocation, load balancing, energy
management, and resource heterogeneity). The objective of this study was to help early researchers gain adequate
knowledge of cloud and cloud-influenced platforms for IoT applications and envision research problems in IoT
from a cloud perspective.

ACKNOWLEDGMENTS
This research was supported by the National Research Foundation of Korea (Grant No. 2020R1A2C1012196),
and in part by the School of Computer Science and Engineering, Ministry of Education, Kyungpook National
University, South Korea, through the BK21 Four Project, AI-Driven Convergence Software Education Research
Program, under Grant 4199990214394.


REFERENCES
[1] Yongrui Qin, Quan Z. Sheng, Nickolas J.G. Falkner, Schahram Dustdar, Hua Wang, and Athanasios V. Vasilakos. 2016. When things matter:
A survey on data-centric internet of things. J. Netw. Comput. Appl. 64, (2016), 137-153. DOI:https://doi.org/10.1016/j.jnca.2015.12.016
[2] Alessio Botta, Walter De Donato, Valerio Persico, and Antonio Pescapé. 2016. Integration of Cloud computing and Internet of Things: A
survey. Futur. Gener. Comput. Syst. 56, (2016), 684-700. DOI:https://doi.org/10.1016/j.future.2015.09.021
[3] Hanan Elazhary. 2019. Internet of Things (IoT), mobile cloud, cloudlet, mobile IoT, IoT cloud, fog, mobile edge, and
edge emerging computing paradigms: Disambiguation and research directions. J. Netw. Comput. Appl. 128, (2019), 105-140.
DOI:https://doi.org/10.1016/j.jnca.2018.10.021
[4] Anand Paul and Rathinaraja Jeyaraj. 2019. Internet of Things: A primer. Hum. Behav. Emerg. Technol. 1, 1 (2019), 37-47.
DOI:https://doi.org/10.1002/hbe2.133
[5] Hongming Cai, Boyi Xu, Lihong Jiang, and Athanasios V. Vasilakos. 2017. IoT-Based Big Data Storage Systems in Cloud Computing:
Perspectives and Challenges. IEEE Internet Things J. 4, 1 (2017), 75-87. DOI:https://doi.org/10.1109/JIOT.2016.2619369
[6] Azure IoT Hub. October 2022. Retrieved from https://azure.microsoft.com/en-us/services/iot-hub/#overview
[7] Amazon IoT. October 2022. Retrieved from https://aws.amazon.com/iot/
[8] Cisco IoT. October 2022. Retrieved from https://www.cisco.com/c/en/us/solutions/internet-of-things/iot-control-center.html
[9] IBM IoT. October 2022. Retrieved from https://internetofthings.ibmcloud.com/
[10] Google IoT. October 2022. Retrieved from https://cloud.google.com/solutions/iot
[11] Karrar Hameed Abdulkareem, Mazin Abed Mohammed, Saraswathy Shamini Gunasekaran, Mohammed Nasser Al-Mhiqani, Ammar Awad
Mutlag, Salama A. Mostafa, Nabeel Salih Ali, and Dheyaa Ahmed Ibrahim. 2019. A review of fog computing and machine learning: Concepts,
applications, challenges, and open issues. IEEE Access 7, (2019), 153123-153140. DOI:https://doi.org/10.1109/ACCESS.2019.2947542
[12] Md Asif-Ur-Rahman, Fariha Afsana, Mufti Mahmud, M. Shamim Kaiser, Muhammad R. Ahmed, Omprakash Kaiwartya, and Anne
James-Taylor. 2019. Toward a heterogeneous mist, fog, and cloud-based framework for the internet of healthcare things. IEEE Internet
Things J. 6, 3 (2019), 4049-4062. DOI:https://doi.org/10.1109/JIOT.2018.2876088
[13] Yaqiong Liu, Mugen Peng, Guochu Shou, Yudong Chen, and Siyu Chen. 2020. Toward Edge Intelligence: Multiaccess Edge Computing
for 5G and Internet of Things. IEEE Internet Things J. 7, 8 (2020), 6722-6747. DOI:https://doi.org/10.1109/JIOT.2020.3004500
[14] Mohammed Laroui, Boubakr Nour, Hassine Moungla, Moussa A. Cherif, Hossam Afifi, and Mohsen Guizani. 2021. Edge and
fog computing for IoT: A survey on current research activities & future directions. Comput. Commun. 180, (2021), 210-231.
DOI:https://doi.org/10.1016/j.comcom.2021.09.003
[15] Partha Pratim Ray. 2017. An Introduction to Dew Computing: Definition, Concept and Implications. IEEE Access 6, (2017), 723-737.
DOI:https://doi.org/10.1109/ACCESS.2017.2775042
[16] J. Ren, D. Zhang, S. He, Y. Zhang, and T. Li. 2019. A survey on end-edge-cloud orchestrated network computing paradigms: Transparent
computing, mobile edge computing, fog computing, and cloudlet. ACM Comput. Surv., vol. 52, no. 6, (2019). doi: 10.1145/3362031.
[17] D. R. Vasconcelos, R. M.C. Andrade, V. Severino, and J. N. De Souza. 2019. Cloud, Fog, or Mist in IoT? That is the question. ACM Trans.
Internet Technol. 19, 2 (2019). DOI:https://doi.org/10.1145/3309709
[18] Nam Yong Kim, Jung Hyun Ryu, Byoung Wook Kwon, Yi Pan, and Jong Hyuk Park. 2018. CF-CloudOrch: container fog node-based
cloud orchestration for IoT networks. J. Supercomput. 74, 12 (2018), 7024-7045. DOI:https://doi.org/10.1007/s11227-018-2493-4
[19] Yajing Xu, Junnan Li, Zhihui Lu, Jie Wu, Patrick C.K. Hung, and Abdulhameed Alelaiwi. 2020. ARVMEC: Adaptive Rec-
ommendation of Virtual Machines for IoT in Edge-Cloud Environment. J. Parallel Distrib. Comput. 141, (2020), 23-34.
DOI:https://doi.org/10.1016/j.jpdc.2020.03.006
[20] G. A. S. Cassel, V. F. Rodrigues, R. da Rosa Righi, M. R. Bez, A. C. Nepomuceno, and C. André da Costa. 2022. Serverless computing for
Internet of Things: A systematic literature review. Futur. Gener. Comput. Syst., vol. 128, (2022), 299-316. DOI: 10.1016/j.future.2021.10.020.
[21] B. Jennings and R. Stadler. 2015. Resource Management in Clouds: Survey and Research Challenges. J. Netw. Syst. Manag., vol. 23, no. 3,
(2015) 567-619. doi: 10.1007/s10922-014-9307-7.
[22] Maggi Bansal, Inderveer Chana, and Siobhán Clarke. 2021. A Survey on IoT Big Data: Current Status, 13 V’s Challenges, and Future
Directions. ACM Comput. Surv. 53, 6 (2021). DOI:https://doi.org/10.1145/3419634
[23] Xiang Fei, Nazaraf Shah, Nandor Verba, Kuo Ming Chao, Victor Sanchez-Anguix, Jacek Lewandowski, Anne James, and Zahid Usman.
2019. CPS data streams analytics based on machine learning for Cloud and Fog Computing: A survey. Futur. Gener. Comput. Syst. 90,
(2019), 435-450. DOI:https://doi.org/10.1016/j.future.2018.06.042
[24] S. K. Lo, Q. Lu, C. Wang, H. Y. Paik, and L. Zhu. 2021. A Systematic Literature Review on Federated Machine Learning: From a Software
Engineering Perspective. ACM Comput. Surv., vol. 54, no. 5, (2021). DOI: 10.1145/3450288.
[25] J. Zhang et al. 2022. Edge Learning: The Enabling Technology for Distributed Big Data Analytics in the Edge. ACM Comput. Surv., vol.
54, no. 7, (2022). DOI: 10.1145/3464419.
[26] Safa Ben Atitallah, Maha Driss, Wadii Boulila, and Henda Ben Ghezala. 2020. Leveraging Deep Learning and IoT big
data analytics to support the smart cities development: Review and future directions. Comput. Sci. Rev. 38, (2020).


DOI:https://doi.org/10.1016/j.cosrev.2020.100303
[27] Sven Groppe. 2020. Emergent models, frameworks, and hardware technologies for Big data analytics. J. Supercomput. 76, 3 (2020),
1800-1827. DOI:https://doi.org/10.1007/s11227-018-2277-x
[28] Hadoop. October 2022. Retrieved from https://hadoop.apache.org/
[29] Spark. October 2022. Retrieved from https://spark.apache.org/
[30] Storm. October 2022. Retrieved from https://storm.apache.org/
[31] Kafka. October 2022. Retrieved from https://kafka.apache.org/
[32] Yasir Arfat, Sardar Usman, Rashid Mehmood, and Iyad Katib. 2020. Big data for smart infrastructure design: Opportunities and challenges.
EAI/Springer Innov. Commun. Comput. (2020), 491-518. DOI:https://doi.org/10.1007/978-3-030-13705-2_20
[33] Muhammad H. Hilman, Maria A. Rodriguez, and Rajkumar Buyya. 2020. Multiple workflows scheduling in multi-tenant distributed
systems: A taxonomy and future directions. ACM Comput. Surv. 53, 1 (2020). DOI:https://doi.org/10.1145/3368036
[34] T. Ben-Nun and T. Hoefler. 2020. Demystifying Parallel and Distributed Deep Learning. ACM Comput. Surv., vol. 52, no. 4, (2020), 1-43.
doi: 10.1145/3320060.
[35] R. Kang, A. Guo, G. Laput, Y. Li, and X. A. Chen. 2019. Minuet: Multimodal interaction with an internet of things. Proc. SUI. ACM Conf.
Spat. User Interact., (2019). doi: 10.1145/3357251.3357581.
[36] Redowan Mahmud, Kotagiri Ramamohanarao, and Rajkumar Buyya. 2020. Application Management in Fog Computing Environments:
A Taxonomy, Review and Future Directions. ACM Comput. Surv. 53, 4 (2020). DOI:https://doi.org/10.1145/3403955
[37] Mainak Adhikari, Tarachand Amgoth, and Satish Narayana Srirama. 2019. A Survey on Scheduling Strategies for Workflows in Cloud.
Int. Conf. Netw. Inf. Syst. Comput. 52, 4 (2019).
[38] Muhammad H. Hilman, Maria A. Rodriguez, and Rajkumar Buyya. 2021. Workflow-as-a-Service Cloud Platform and Deployment of
Bioinformatics Workflow Applications. Knowl. Manag. Dev. Data-Intensive Syst. (2021), 205-226. DOI:https://doi.org/10.1201/9781003001188-14
[39] Xiaolong Xu, Wanchun Dou, Xuyun Zhang, and Jinjun Chen. 2016. EnReal: An Energy-Aware Resource Allocation
Method for Scientific Workflow Executions in Cloud Environment. IEEE Trans. Cloud Comput. 4, 2 (2016), 166-179.
DOI:https://doi.org/10.1109/TCC.2015.2453966
[40] Abdulsalam Yassine, Shailendra Singh, M. Shamim Hossain, and Ghulam Muhammad. 2019. IoT big data analytics for smart homes with
fog and cloud computing. Futur. Gener. Comput. Syst. 91, (2019), 563-573. DOI:https://doi.org/10.1016/j.future.2018.08.040
[41] Maria Carla Calzarossa, Luisa Massari, and Daniele Tessera. 2016. Workload Characterization: A Survey Revisited. ACM Comput. Surv. 48, 3
(2016), 1-43. DOI:https://doi.org/10.1145/2856127
[42] Anil Kashyap. 2018. Workload characterization for enterprise disk drives. ACM Trans. Storage 14, 2 (2018).
DOI:https://doi.org/10.1145/3151847
[43] Hani Nemati, Seyed Vahid Azhari, Mahsa Shakeri, and Michel Dagenais. 2021. Host-Based Virtual Machine Workload Characterization
Using Hypervisor Trace Mining. ACM Trans. Model. Perform. Eval. Comput. Syst. 6, 1 (2021). DOI:https://doi.org/10.1145/3460197
[44] Y. Wen, G. Cheng, S. Deng, and J. Yin. 2022. Characterizing and synthesizing the workflow structure of microservices in ByteDance
Cloud. J. Softw. Evol. Process, (2022), 1-18. doi: 10.1002/smr.2467.
[45] Maria Malik, Katayoun Neshatpour, Setareh Rafatirad, and Houman Homayoun. 2018. Hadoop workloads characterization for
performance and energy efficiency optimizations on microservers. IEEE Trans. Multi-Scale Comput. Syst. 4, 3 (2018), 355-368.
DOI:https://doi.org/10.1109/TMSCS.2017.2749228
[46] Zhibin Yu, Wen Xiong, Lieven Eeckhout, Zhendong Bei, Avi Mendelson, and Chengzhong Xu. 2018. MIA: Metric im-
portance analysis for big data workload characterization. IEEE Trans. Parallel Distrib. Syst. 29, 6 (2018), 1371-1384.
DOI:https://doi.org/10.1109/TPDS.2017.2758781
[47] Bumjoon Seo, Sooyong Kang, Jongmoo Choi, Jaehyuk Cha, Youjip Won, and Sungroh Yoon. 2014. IO workload characterization revisited:
A data-mining approach. IEEE Trans. Comput. 63, 12 (2014), 3026-3038. DOI:https://doi.org/10.1109/TC.2013.187
[48] A. Mahgoub et al. 2022. WiseFuse: Workload Characterization and DAG Transformation for Serverless Workflows. Proc. ACM Meas.
Anal. Comput. Syst., vol. 6, no. 2, (2022), 1-28. doi: 10.1145/3530892.
[49] Ismael Solis Moreno, Peter Garraghan, Paul Townend, and Jie Xu. 2014. Analysis, Modeling and Simulation of Workload Patterns in a
Large-Scale Utility Cloud. IEEE Trans. Cloud Comput. 2, 2 (2014), 208-221. DOI:https://doi.org/10.1109/tcc.2014.2314661
[50] Blesson Varghese, Ozgur Akgun, Ian Miguel, Long Thai, and Adam Barker. 2019. Cloud Benchmarking for Maximising Performance of
Scientific Applications. IEEE Trans. Cloud Comput. 7, 1 (2019), 170-182. DOI:https://doi.org/10.1109/TCC.2016.2603476
[51] Klervie Toczé, Johan Lindqvist, and Simin Nadjm-Tehrani. 2020. Characterization and modeling of an edge computing mixed reality
workload. J. Cloud Comput. 9, 1 (2020). DOI:https://doi.org/10.1186/s13677-020-00190-x
[52] Kai Hwang, Xiaoying Bai, Yue Shi, Muyang Li, Wen Guang Chen, and Yongwei Wu. 2016. Cloud Performance Mod-
eling with Benchmark Evaluation of Elastic Scaling Strategies. IEEE Trans. Parallel Distrib. Syst. 27, 1 (2016), 130-143.
DOI:https://doi.org/10.1109/TPDS.2015.2398438
[53] Xiaolin Chang, Ruofan Xia, Jogesh K. Muppala, Kishor S. Trivedi, and Jiqiang Liu. 2018. Effective modeling approach for
IaaS data center performance analysis under heterogeneous workload. IEEE Trans. Cloud Comput. 6, 4 (2018), 991-1003.


DOI:https://doi.org/10.1109/TCC.2016.2560158
[54] Hosein Mohamamdi Makrani, Hossein Sayadi, Najmeh Nazari, Sai Mnoj Pudukotai Dinakarrao, Avesta Sasan, Tinoosh Mohsenin,
Setareh Rafatirad, and Houman Homayoun. 2021. Adaptive Performance Modeling of Data-intensive Workloads for Resource Provisioning
in Virtualized Environment. ACM Trans. Model. Perform. Eval. Comput. Syst. 5, 4 (2021). DOI:https://doi.org/10.1145/3442696
[55] Jun Zhou, Bowei Cen, Zexiang Cai, Yuanju Chen, Yuyan Sun, Hongli Xue, and Weiha O. Tan. 2021. Workload Modeling for Microservice-
Based Edge Computing in Power Internet of Things. IEEE Access 9, (2021), 76205-76212. DOI:https://doi.org/10.1109/ACCESS.2021.3081705
[56] Boyun Liu, Jingjing Guo, Chunlin Li, and Youlong Luo. 2020. Workload forecasting based elastic resource management in edge cloud.
Comput. Ind. Eng. 139, (2020). DOI:https://doi.org/10.1016/j.cie.2019.106136
[57] S. Narasimha Swamy and Solomon Raju Kota. 2020. An empirical study on system level aspects of Internet of Things (IoT). IEEE Access
8, (2020), 188082-188134. DOI:https://doi.org/10.1109/ACCESS.2020.3029847
[58] Cheol-Ho Hong and Blesson Varghese. 2019. Resource Management in Fog/Edge Computing. ACM Comput. Surv. 52, 5 (2019), 1-37.
DOI:https://doi.org/10.1145/3326066
[59] Mostafa Ghobaei-Arani, Alireza Souri, and Ali A. Rahmanian. 2020. Resource Management Approaches in Fog Computing: a Compre-
hensive Review. J. Grid Comput. 18, 1 (2020), 1-42. DOI:https://doi.org/10.1007/s10723-019-09491-1
[60] Ola Salman, Imad Elhajj, Ali Chehab, and Ayman Kayssi. 2018. IoT survey: An SDN and fog computing perspective. Comput. Networks
143, (2018), 221-246. DOI:https://doi.org/10.1016/j.comnet.2018.07.020
[61] Nguyen Dinh Nguyen, Linh An Phan, Dae Heon Park, Sehan Kim, and Taehong Kim. 2020. ElasticFog: Elastic resource provisioning in
container-based fog computing. IEEE Access 8, (2020), 183879-183890. DOI:https://doi.org/10.1109/ACCESS.2020.3029583
[62] Tian Wang, Yuzhu Liang, Weijia Jia, Muhammad Arif, Anfeng Liu, and Mande Xie. 2019. Coupling resource management based on fog
computing in smart city systems. J. Netw. Comput. Appl. 135, (2019), 11-19. DOI:https://doi.org/10.1016/j.jnca.2019.02.021
[63] Xiang Sun, Nirwan Ansari, and Ruopeng Wang. 2016. Optimizing Resource Utilization of a Data Center. IEEE Commun. Surv. Tutorials
18, 4 (2016), 2822-2846. DOI:https://doi.org/10.1109/COMST.2016.2558203
[64] Misbah Liaqat, Victor Chang, Abdullah Gani, Siti Hafizah Ab Hamid, Muhammad Toseef, Umar Shoaib, and Rana Li-
aqat Ali. 2017. Federated cloud resource management: Review and discussion. J. Netw. Comput. Appl. 77, (2017), 87-105.
DOI:https://doi.org/10.1016/j.jnca.2016.10.008
[65] Juliana Oliveira de Carvalho, Fernando Trinta, Dario Vieira, and Omar Andres Carmona Cortes. 2018. Evolutionary solutions for
resources management in multiple clouds: State-of-the-art and future directions. Futur. Gener. Comput. Syst. 88, (2018), 284-296.
DOI:https://doi.org/10.1016/j.future.2018.05.087
[66] K. Hemant Kumar Reddy, Ranjit Kumar Behera, Alok Chakrabarty, and Diptendu Sinha Roy. 2020. A Service Delay Minimiza-
tion Scheme for QoS-Constrained, Context-Aware Unified IoT Applications. IEEE Internet Things J. 7, 10 (2020), 10527-10534.
DOI:https://doi.org/10.1109/JIOT.2020.2999658
[67] Giovanni Merlino, Rustem Dautov, Salvatore Distefano, and Dario Bruneo. 2019. Enabling Workload Engineering in Edge, Fog, and
Cloud Computing through OpenStack-based Middleware. ACM Trans. Internet Technol. 19, 2 (2019). DOI:https://doi.org/10.1145/3309705
[68] Enis Afgan, Andrew Lonie, James Taylor, and Nuwan Goonasekera. 2019. CloudLaunch: Discover and deploy cloud applications. Futur.
Gener. Comput. Syst. 94, (2019), 802-810. DOI:https://doi.org/10.1016/j.future.2018.04.037
[69] Azure resource type. October 2022. Retrieved from https://docs.microsoft.com/en-us/azure/virtual-machines/sizes
[70] Azure service plan. October 2022. Retrieved from https://azure.microsoft.com/en-us/pricing/details/virtual-machines/windows/
[71] P. Ta-Shma, A. Akbar, G. Gerson-Golan, G. Hadash, F. Carrez, and K. Moessner. 2018. An Ingestion and Analytics Architecture for IoT
Applied to Smart City Use Cases. IEEE Internet Things J., vol. 5, no. 2, (2018), 765-774. DOI: 10.1109/JIOT.2017.2722378.
[72] Alessandro Bocci, Stefano Forti, Gian Luigi Ferrari, and Antonio Brogi. 2021. Secure FaaS orchestration in the fog: how far are we?
Computing 103, 5 (2021), 1025-1056. DOI:https://doi.org/10.1007/s00607-021-00924-y
[73] L. Lin, L. Pan, and S. Liu. 2020. Backup or Not: An Online Cost Optimal Algorithm for Data Analysis Jobs Using Spot Instances. IEEE
Access, vol. 8, (2020), 144945-144956. DOI: 10.1109/ACCESS.2020.3014978.
[74] Spot instances in AWS. October 2022. https://aws.amazon.com/blogs/compute/running-high-scale-web-on-spot-instances/
[75] Jitendra Kumar and Ashutosh Kumar Singh. 2021. Performance evaluation of metaheuristics algorithms for workload prediction in
cloud environment. Appl. Soft Comput. 113, (2021), 107895. DOI:https://doi.org/10.1016/j.asoc.2021.107895
[76] Masoumeh Etemadi, Mostafa Ghobaei-Arani, and Ali Shahidinejad. 2020. Resource provisioning for IoT services in the fog computing
environment: An autonomic approach. Comput. Commun. 161, March (2020), 109-131. DOI:https://doi.org/10.1016/j.comcom.2020.07.028
[77] Maryam Amiri, Leyli Mohammad-Khanli, and Rafaela Mirandola. 2018. A sequential pattern mining model for application workload
prediction in cloud environment. J. Netw. Comput. Appl. 105, (2018), 21-62. DOI:https://doi.org/10.1016/j.jnca.2017.12.015
[78] Ilia Pietri and Rizos Sakellariou. 2016. Mapping virtual machines onto physical machines in cloud computing: A survey. ACM Comput.
Surv. 49, 3 (2016). DOI:https://doi.org/10.1145/2983575
[79] Shvan Omer, Sadoon Azizi, Mohammad Shojafar, and Rahim Tafazolli. 2021. A priority, power and traffic-aware virtual machine
placement of IoT applications in cloud data centers. J. Syst. Archit. 115, (2021). DOI:https://doi.org/10.1016/j.sysarc.2021.101996


[80] Guangyao Zhou, Wenhong Tian, and Rajkumar Buyya. 2021. Deep Reinforcement Learning-based Methods for Resource Scheduling in
Cloud Computing: A Review and Future Directions. Association for Computing Machinery. Retrieved from http://arxiv.org/abs/2105.04086
[81] Syed Hamid Hussain Madni, Muhammad Shafie Abd Latiff, Yahaya Coulibaly, and Shafi'i Muhammad Abdulhamid. 2016. Resource
scheduling for infrastructure as a service (IaaS) in cloud computing: Challenges and opportunities. J. Netw. Comput. Appl. 68, (2016),
173-200. DOI:https://doi.org/10.1016/j.jnca.2016.04.016
[82] Mohammad Masdari, Sima ValiKardan, Zahra Shahi, and Sonay Imani Azar. 2016. Towards workflow scheduling in cloud computing: A
comprehensive analysis. J. Netw. Comput. Appl. 66, (2016), 64-82. DOI:https://doi.org/10.1016/j.jnca.2016.01.018
[83] Heena Wadhwa and Rajni Aron. 2021. TRAM: Technique for resource allocation and management in fog computing environment. J.
Supercomput. (2021). DOI:https://doi.org/10.1007/s11227-021-03885-3
[84] R. Madhura, B. Lydia Elizabeth, and V. Rhymend Uthariaraj. 2021. An improved list-based task scheduling algorithm for fog computing
environment. Springer Vienna. DOI:https://doi.org/10.1007/s00607-021-00935-9
[85] Mohammad Goudarzi, Huaming Wu, Marimuthu Palaniswami, and Rajkumar Buyya. 2021. An Application Placement Technique
for Concurrent IoT Applications in Edge and Fog Computing Environments. IEEE Trans. Mob. Comput. 20, 4 (2021), 1298-1311.
DOI:https://doi.org/10.1109/TMC.2020.2967041
[86] Randa M. Abdelmoneem, Abderrahim Benslimane, and Eman Shaaban. 2020. Mobility-aware task scheduling in cloud-Fog IoT-based
healthcare architectures. Comput. Networks 179, (2020), 107348. DOI:https://doi.org/10.1016/j.comnet.2020.107348
[87] Jiuyun Xu, Zhuangyuan Hao, Ruru Zhang, and Xiaoting Sun. 2019. A Method Based on the Combination of Laxity and Ant Colony
System for Cloud-Fog Task Scheduling. IEEE Access 7, (2019), 116218-116226. DOI:https://doi.org/10.1109/ACCESS.2019.2936116
[88] Hina Rafique, Munam Ali Shah, Saif Ul Islam, Tahir Maqsood, Suleman Khan, and Carsten Maple. 2019. A Novel Bio-Inspired Hybrid Algorithm (NBIHA) for Efficient Resource Management in Fog Computing. IEEE Access 7, (2019), 115760-115773. DOI:https://doi.org/10.1109/ACCESS.2019.2924958
[89] Mohamed Abd Elaziz, Laith Abualigah, and Ibrahim Attiya. 2021. Advanced optimization technique for scheduling IoT tasks in cloud-fog
computing environments. Futur. Gener. Comput. Syst. 124, (2021), 142-154. DOI:https://doi.org/10.1016/j.future.2021.05.026
[90] Minhaj Ahmad Khan. 2021. A cost-effective power-aware approach for scheduling cloudlets in cloud computing environments. J. Supercomput. (2021). DOI:https://doi.org/10.1007/s11227-021-03894-2
[91] Huaiying Sun, Huiqun Yu, and Guisheng Fan. 2020. Contract-Based Resource Sharing for Time Effective Task Scheduling in Fog-Cloud Environment. IEEE Trans. Netw. Serv. Manag. 17, 2 (2020), 1040-1053. DOI:https://doi.org/10.1109/TNSM.2020.2977843
[92] Samia Ijaz, Ehsan Ullah Munir, Saima Gulzar Ahmad, M. Mustafa Rafique, and Omer F. Rana. 2021. Energy-makespan optimization of workflow scheduling in fog-cloud computing. Computing 103, 9 (2021), 2033-2059. DOI:https://doi.org/10.1007/s00607-021-00930-0
[93] Sarhad Arisdakessian, Omar Abdel Wahab, Azzam Mourad, Hadi Otrok, and Nadjia Kara. 2020. FoGMatch: An Intelligent Multi-Criteria IoT-Fog Scheduling Approach Using Game Theory. IEEE/ACM Trans. Netw. 28, 4 (2020), 1779-1789. DOI:https://doi.org/10.1109/TNET.2020.2994015
[94] Laith Abualigah, Ali Diabat, and Mohamed Abd Elaziz. 2021. Intelligent workflow scheduling for Big Data applications in IoT cloud computing environments. Cluster Comput. 24, 4 (2021), 2957-2976. DOI:https://doi.org/10.1007/s10586-021-03291-7
[95] Khawar Hasham, Kamran Munir, and Richard McClatchey. 2018. Cloud infrastructure provenance collection and management to reproduce scientific workflows execution. Futur. Gener. Comput. Syst. 86, (2018), 799-820. DOI:https://doi.org/10.1016/j.future.2017.07.015
[96] Vincenzo De Maio and Dragi Kimovski. 2020. Multi-objective scheduling of extreme data scientific workflows in Fog. Futur. Gener. Comput. Syst. 106, (2020), 171-184. DOI:https://doi.org/10.1016/j.future.2019.12.054
[97] Raafat O. Aburukba, Mazin AliKarrar, Taha Landolsi, and Khaled El-Fakih. 2020. Scheduling Internet of Things requests to minimize
latency in hybrid Fog-Cloud computing. Futur. Gener. Comput. Syst. 111, (2020), 539-551. DOI:https://doi.org/10.1016/j.future.2019.09.039
[98] Yi Wei, Li Pan, Shijun Liu, Lei Wu, and Xiangxu Meng. 2018. DRL-Scheduling: An intelligent QoS-Aware job scheduling framework for
applications in clouds. IEEE Access 6, (2018), 55112-55125. DOI:https://doi.org/10.1109/ACCESS.2018.2872674
[99] Samodha Pallewatta, Vassilis Kostakos, and Rajkumar Buyya. 2019. Microservices-based IoT application placement within heterogeneous
and resource constrained fog computing environments. UCC 2019 - Proc. 12th IEEE/ACM Int. Conf. Util. Cloud Comput. (2019), 71-81.
DOI:https://doi.org/10.1145/3344341.3368800
[100] Juan Luo, Luxiu Yin, Jinyu Hu, Chun Wang, Xuan Liu, Xin Fan, and Haibo Luo. 2019. Container-based fog computing architecture and energy-balancing scheduling algorithm for energy IoT. Futur. Gener. Comput. Syst. 97, (2019), 50-60. DOI:https://doi.org/10.1016/j.future.2018.12.063
[101] Chu Ge Wu, Wei Li, Ling Wang, and Albert Y. Zomaya. 2021. Hybrid evolutionary scheduling for energy-efficient fog-enhanced internet of things. IEEE Trans. Cloud Comput. 9, 2 (2021), 641-653. DOI:https://doi.org/10.1109/TCC.2018.2889482
[102] Djabir Abdeldjalil Chekired, Lyes Khoukhi, and Hussein T. Mouftah. 2018. Industrial IoT Data Scheduling Based on
Hierarchical Fog Computing: A Key for Enabling Smart Factory. IEEE Trans. Ind. Informatics 14, 10 (2018), 4590-4602.
DOI:https://doi.org/10.1109/TII.2018.2843802
[103] Ahmed A.A. Gad-Elrab and Amin Y. Noaman. 2020. A two-tier bipartite graph task allocation approach based on fuzzy clustering in
cloud-fog environment. Futur. Gener. Comput. Syst. 103, (2020), 79-90. DOI:https://doi.org/10.1016/j.future.2019.10.003



Resource Management in Cloud and Cloud-Influenced Technologies for Internet of Things Applications • 33

[104] Zahra Ghanbari, Nima Jafari Navimipour, Mehdi Hosseinzadeh, and Aso Darwesh. 2019. Resource allocation mechanisms and approaches
on the Internet of Things. Cluster Comput. 22, 4 (2019), 1253-1282. DOI:https://doi.org/10.1007/s10586-019-02910-8
[105] Akos Recse, Robert Szabo, and Balazs Nemeth. 2020. Elastic resource management and network slicing for IoT over edge clouds.
PervasiveHealth Pervasive Comput. Technol. Healthc. (2020). DOI:https://doi.org/10.1145/3410992.3411015
[106] Tariq Qayyum, Zouheir Trabelsi, Asad Waqar Malik, and Kadhim Hayawi. 2021. Multi-Level Resource Sharing Framework Using
Collaborative Fog Environment for Smart Cities. IEEE Access 9, (2021), 21859-21869. DOI:https://doi.org/10.1109/ACCESS.2021.3054420
[107] Duong Tung Nguyen, Long Bao Le, and Vijay K. Bhargava. 2019. A Market-Based Framework for Multi-Resource Allocation in Fog
Computing. IEEE/ACM Trans. Netw. 27, 3 (2019), 1151-1164. DOI:https://doi.org/10.1109/TNET.2019.2912077
[108] Ranesh Kumar Naha, Saurabh Garg, Andrew Chan, and Sudheer Kumar Battula. 2020. Deadline-based dynamic resource allocation and provisioning algorithms in Fog-Cloud environment. Futur. Gener. Comput. Syst. 104, (2020), 131-141. DOI:https://doi.org/10.1016/j.future.2019.10.018
[109] Wiem Matoussi and Tarek Hamrouni. 2021. A new temporal locality-based workload prediction approach for SaaS services in a cloud
environment. J. King Saud Univ. - Comput. Inf. Sci. xxxx (2021). DOI:https://doi.org/10.1016/j.jksuci.2021.04.008
[110] Lina Ni, Jinquan Zhang, and Jiguo Yu. 2018. Priced timed petri nets based resource allocation strategy for fog computing. Proc. - 2016 Int. Conf. Identification, Inf. Knowl. Internet Things, IIKI 2016 (2018), 39-44. DOI:https://doi.org/10.1109/IIKI.2016.87
[111] Lei Wei, Chuan Heng Foh, Bingsheng He, and Jianfei Cai. 2018. Towards Efficient Resource Allocation for Heterogeneous Workloads in IaaS Clouds. IEEE Trans. Cloud Comput. 6, 1 (2018), 264-275. DOI:https://doi.org/10.1109/TCC.2015.2481400
[112] Jing Bi, Shuang Li, Haitao Yuan, and Meng Chu Zhou. 2021. Integrated deep learning method for workload and resource prediction in
cloud systems. Neurocomputing 424, (2021), 35-48. DOI:https://doi.org/10.1016/j.neucom.2020.11.011
[113] Mirza Abdur Razzaq, Javed Ahmed Mahar, Muneer Ahmad, Najia Saher, Arif Mehmood, and Gyu Sang Choi. 2021. Hybrid Auto-
Scaled Service-Cloud-Based Predictive Workload Modeling and Analysis for Smart Campus System. IEEE Access 9, (2021), 42081-42089.
DOI:https://doi.org/10.1109/ACCESS.2021.3065597
[114] Irfan Mohiuddin and Ahmad Almogren. 2019. Workload aware VM consolidation method in edge/cloud computing for IoT applications.
J. Parallel Distrib. Comput. 123, (2019), 204-214. DOI:https://doi.org/10.1016/j.jpdc.2018.09.011
[115] Mohamed K. Hussein and Mohamed H. Mousa. 2020. Efficient task offloading for IoT-Based applications in fog computing using ant colony optimization. IEEE Access 8, (2020), 37191-37201. DOI:https://doi.org/10.1109/ACCESS.2020.2975741
[116] Pawan Kumar and Rakesh Kumar. 2019. Issues and challenges of load balancing techniques in cloud computing: A survey. ACM
Comput. Surv. 51, 6 (2019). DOI:https://doi.org/10.1145/3281010
[117] Mandeep Kaur and Rajni Aron. 2021. A systematic study of load balancing approaches in the fog computing environment. J. Supercomput.
77, 8 (2021), 9202-9247. DOI:https://doi.org/10.1007/s11227-020-03600-8
[118] Kaouther Gasmi, Selma Dilek, Suleyman Tosun, and Suat Ozdemir. 2021. A survey on computation offloading and service placement in fog computing-based IoT. Springer US. DOI:https://doi.org/10.1007/s11227-021-03941-y
[119] Nisha Chaurasia, Mohit Kumar, Rashmi Chaudhry, and Om Prakash Verma. 2021. Comprehensive survey on energy-aware server
consolidation techniques in cloud computing. J. Supercomput. 77, 10 (2021), 11682-11737. DOI:https://doi.org/10.1007/s11227-021-03760-1
[120] Chu ge Wu, Wei Li, Ling Wang, and Albert Y. Zomaya. 2021. An evolutionary fuzzy scheduler for multi-objective resource allocation in
fog computing. Futur. Gener. Comput. Syst. 117, (2021), 498-509. DOI:https://doi.org/10.1016/j.future.2020.12.019
[121] Jiafu Wan, Baotong Chen, Shiyong Wang, Min Xia, Di Li, and Chengliang Liu. 2018. Fog Computing for Energy-Aware Load Balancing
and Scheduling in Smart Factory. IEEE Trans. Ind. Informatics 14, 10 (2018), 4548-4556. DOI:https://doi.org/10.1109/TII.2018.2818932
[122] Ernando Batista, Gustavo Figueiredo, and Cassio Prazeres. 2021. Load Balancing between Fog and Cloud in Fog of Things Based Platforms
through Software-Deined Networking. J. King Saud Univ. - Comput. Inf. Sci. xxxx (2021). DOI:https://doi.org/10.1016/j.jksuci.2021.10.003
[123] Om Kolsoom Shahryari, Hossein Pedram, Vahid Khajehvand, and Mehdi Dehghan TakhtFooladi. 2020. Energy-Efficient and delay-guaranteed computation offloading for fog-based IoT networks. Comput. Networks 182, August (2020). DOI:https://doi.org/10.1016/j.comnet.2020.107511
[124] Rathinaraja Jeyaraj, V. S. Ananthanarayana, and Anand Paul. 2020. Improving MapReduce scheduler for heterogeneous workloads in a
heterogeneous environment. Concurr. Comput. Pract. Exp. 32, 7 (2020). DOI:https://doi.org/10.1002/cpe.5558
[125] A. H. T. Dias, L. H. A. Correia, and N. Malheiros. 2022. A Systematic Literature Review on Virtual Machine Consolidation. ACM Comput. Surv. 54, 8 (2022). DOI:https://doi.org/10.1145/3470972
[126] Runqun Xiong, Xiuyang Li, Jiyuan Shi, Zhiang Wu, and Jiahui Jin. 2018. HirePool: Optimizing resource reuse based on a hybrid resource
pool in the cloud. IEEE Access 6, (2018), 74376-74388. DOI:https://doi.org/10.1109/ACCESS.2018.2884028
[127] Forbes. October 2022. Retrieved from https://www.forbes.com/sites/forbestechcouncil/2021/05/03/renewable-energy-alone-cant-
address-data-centers-adverse-environmental-impact/?sh=729ab68e5ddc
[128] Zhiming He, Yin Zhang, Byungchul Tak, and Limei Peng. 2020. Green Fog Planning for Optimal Internet-of-Thing Task Scheduling.
IEEE Access 8, (2020), 1224-1234. DOI:https://doi.org/10.1109/ACCESS.2019.2961952
[129] Ehsan Ataie, Reza Entezari-Maleki, Sayed Ehsan Etesami, Bernhard Egger, Danilo Ardagna, and Ali Movaghar. 2018. Power-aware performance analysis of self-adaptive resource management in IaaS clouds. Futur. Gener. Comput. Syst. 86, (2018), 134-144. DOI:https://doi.org/10.1016/j.future.2018.02.042
[130] M. Abbasi, E. Mohammadi-Pasand, and M. R. Khosravi. 2021. Intelligent workload allocation in IoT-Fog-cloud architecture towards
mobile edge computing. Comput. Commun. 169, (2021), 71-80. DOI:https://doi.org/10.1016/j.comcom.2021.01.022
[131] Wenyu Zhang, Zhenjiang Zhang, Sherali Zeadally, Han Chieh Chao, and Victor C.M. Leung. 2020. Energy-efficient workload allocation and computation resource configuration in distributed cloud/edge computing systems with stochastic workloads. IEEE J. Sel. Areas Commun. 38, 6 (2020), 1118-1132. DOI:https://doi.org/10.1109/JSAC.2020.2986614
[132] P. P. Ray. 2018. A survey on Internet of Things architectures. J. King Saud Univ. - Comput. Inf. Sci. 30, 3 (2018), 291-319.
DOI:https://doi.org/10.1016/j.jksuci.2016.10.003
[133] Farooq Hoseiny, Sadoon Azizi, Mohammad Shojafar, and Rahim Tafazolli. 2021. Joint QoS-Aware and Cost-efficient Task Scheduling for Fog-cloud Resources in a Volunteer Computing System. ACM Trans. Internet Technol. 21, 4 (2021). DOI:https://doi.org/10.1145/3418501
[134] Xiaoli Wang and Bharadwaj Veeravalli. 2017. Performance Characterization on Handling Large-Scale Partitionable Workloads on Heterogeneous Networked Compute Platforms. IEEE Trans. Parallel Distrib. Syst. 28, 10 (2017), 2925-2938. DOI:https://doi.org/10.1109/TPDS.2017.2693149
[135] Chinmaya Kumar Swain and Aryabartta Sahu. 2021. Interference Aware Workload Scheduling for Latency Sensitive Tasks in Cloud
Environment. Computing (2021). DOI:https://doi.org/10.1007/s00607-021-01014-9
[136] R. Jeyaraj and A. Paul. 2022. Optimizing MapReduce Task Scheduling on Virtualized Heterogeneous Environments Using Ant Colony Optimization. IEEE Access 10, (2022), 55842-55855. DOI:https://doi.org/10.1109/ACCESS.2022.3176729
[137] Redowan Mahmud and Rajkumar Buyya. 2019. Modeling and simulation of fog and edge computing environments using iFogSim toolkit. Fog Edge Comput. Princ. Paradig. (2019), 433-465. DOI:https://doi.org/10.1002/9781119525080.ch17
[138] Timothy Wood, Ludmila Cherkasova, Kivanc Ozonat, and Prashant Shenoy. 2008. Profiling and modeling resource usage of virtualized applications. Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics) 5346 LNCS, (2008), 366-387. DOI:https://doi.org/10.1007/978-3-540-89856-6_19
[139] Francisco Airton Silva, Iure Fé, and Glauber Gonçalves. 2021. Stochastic models for performance and cost analysis of a hybrid cloud
and fog architecture. J. Supercomput. 77, 2 (2021), 1537-1561. DOI:https://doi.org/10.1007/s11227-020-03310-1
[140] Ilyas Bambrik. 2020. A Survey on Cloud Computing Simulation and Modeling. Springer Singapore. DOI:https://doi.org/10.1007/s42979-020-00273-1
[141] N. Mansouri, R. Ghafari, and B. Mohammad Hasani Zade. 2020. Cloud computing simulators: A comprehensive review. Simul. Model.
Pract. Theory 104, (2020). DOI:https://doi.org/10.1016/j.simpat.2020.102144
[142] Tariq Qayyum, Asad Waqar Malik, Muazzam A. Khan Khattak, Osman Khalid, and Samee U. Khan. 2018. FogNetSim++: A Toolkit for Modeling and Simulation of Distributed Fog Environment. IEEE Access 6, (2018), 63570-63583. DOI:https://doi.org/10.1109/ACCESS.2018.2877696
[143] Harshit Gupta, Amir Vahid Dastjerdi, Soumya K. Ghosh, and Rajkumar Buyya. 2017. iFogSim: A toolkit for modeling and simulation of
resource management techniques in the Internet of Things, Edge and Fog computing environments. Softw. - Pract. Exp. 47, 9 (2017),
1275-1296. DOI:https://doi.org/10.1002/spe.2509

Appendix A LIST OF ABBREVIATIONS


AaaS Application-as-a-Service
CDC Cloud Data Center
CPU Central Processing Unit
CSP Cloud Service Provider
DAG Directed Acyclic Graph
FaaS Function-as-a-Service
GPU Graphics Processing Unit
HDD Hard Disk Drive
IaaS Infrastructure-as-a-Service
I/O Input/Output
IoT Internet of Things
PM Physical Machine
QoS Quality of Service
RQ Research Question
SB Service Broker
SDN Software-Defined Networking
SLA Service-Level Agreement
UAV Unmanned Aerial Vehicle
vCPU virtual CPU
VM Virtual Machine
