0% found this document useful (0 votes)
7 views

Parked Vehicles Task Offloading in Edge Computing

IEEE Access 2022

Uploaded by

khoantd
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
0% found this document useful (0 votes)
7 views

Parked Vehicles Task Offloading in Edge Computing

IEEE Access 2022

Uploaded by

khoantd
Copyright
© © All Rights Reserved
Available Formats
Download as PDF, TXT or read online on Scribd
You are on page 1/ 15

Received March 21, 2022, accepted April 12, 2022.

Date of publication xxxx 00, 0000, date of current version xxxx 00, 0000.
Digital Object Identifier 10.1109/ACCESS.2022.3167641

Parked Vehicles Task Offloading


in Edge Computing
KHOA NGUYEN 1 , STEVE DREW2 , (Member, IEEE),
CHANGCHENG HUANG 1 , (Senior Member, IEEE), AND JIAYU ZHOU 3, (Member, IEEE)
1 Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada
2 Department of Electrical and Software Engineering, University of Calgary, Calgary, AB T2N 1N4, Canada
3 Department of Computer Science and Engineering, Michigan State University, East Lansing, MI 48824, USA

Corresponding author: Khoa Nguyen ([email protected])


This research was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC) Engage under
Grant EGP543449-19, the National Science Foundation IIS-1749940, the Office of Naval Research N00014-20-1-2382,
and University of Calgary Start-up Funding 10032260.

ABSTRACT The analytical research has recently indicated that the computational resources of Connected
Autonomous Vehicles (CAVs) have been wasted since almost all vehicles spend over 95% of their time
in parking lots. This paper presents a collaborative computing framework to efficiently offload online
computational tasks to parked vehicles (PVs) during peak business hours. To maintain the service continuity,
we advocate for integrating Kubernetes-based container orchestration to leverage its advanced features (e.g.,
auto-healing, load balancing, and security). We analytically formulate the task-offloading problem and then
propose an intelligent meta-heuristic algorithm to dynamically deal with online heterogeneous demands.
Additionally, we take a cumulative incentives model into account, where the PV owners are able to earn profit
by sharing their computation resources. We also compare our algorithm with several existent heuristics on
different sizes of the parking lot. Extensive simulation results show that our proposed computing framework
significantly increases the possibility of accepting the online tasks and improves average task offloading cost
by at least 40%. Besides, we quantify the PV availability by task acceptance ratios, which can be a critical
criterion for network planners to achieve desired network service goals.

INDEX TERMS Parked vehicles, cloud computing, edge computing, collaborative cloud-edge computing,
online task offloading, container orchestration, Kubernetes.

I. INTRODUCTION These facts obviously imply that the powerful on-board


In the last decade, we have experienced a rapid proliferation vehicular facility is unused for most of the time, providing
of vehicles worldwide, which is estimated to reach two billion an excellent opportunity for exploiting these neglected
by 2035 [1]. The majority of them would come equipped computing resources for ordinary network services, and
with powerful on-board hardware (e.g., sensors, general- potentially gaining profit by trading the idle computational
purposed CPU, GPU) to offer advanced key features such as power [4].
autopilot, driver-assistance, smart radars, enhanced sensing- The explosive growth of mobile data traffic, either
safety systems. Especially, the on-board equipment enabling latency-insensitive (e.g., health monitoring, location-based
future full self-driving capabilities could cost vehicle owners augmented reality games, vehicular sensing) or latency-
thousands of dollars. However, the resource utilization of sensitive (e.g., video surveillance, mobile gaming, autopilot)
these modern vehicles is extremely low: 70% of all vehicles tasks with heterogeneous demands [5], will pose a formidable
spend almost 95% of the time in parking lots, home garages, obstacle to the existing architecture during peak hours indeed.
and street parking as disclosed in [1], [2]. For example, When most services are deployed at the cloud-side, service
America’s average daily driving time was only 50.6 minutes vendors are barely in with an opportunity to negotiate
reported for Traffic Safety by AAA Foundation in 2016 [3]. the costs for offloading services which are most likely
provisioned by Service Providers (SPs) with a fixed amount
The associate editor coordinating the review of this manuscript and of service charges. Moreover, the major impediment of the
approving it for publication was Tariq Umer . core cloud is large propagation latency, so the advent of

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
VOLUME 10, 2022 1
K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

offers resource isolation, allowing a PV to run multiple tasks


independently. However, it is a non-trivial task to indicate
in which several replicas of a given task are offloaded to
be processed in the collaborative computing architecture,
satisfying rigorous resource constraints while achieving
minimized offloading costs. For instance, all replicas of a
task can be embedded into different nodes (e.g., PVs, cloud,
edge server) or a single node (e.g., a PV). In the latter,
if this node suffers an abrupt failure (e.g., battery outage,
accidental mobility of vehicles), the containerized network
services operating on this node will experience a service
disruption. It is different from our previous work in [7] that
did not fully take where the task replicas could be placed
into account, meaning that the whole task replicas could
be allocated in the same worker node. This paper solves
such limitation of [7]. To maintain service continuity and
reliability, we consider an upper boundary on the proportional
number of task replicas running on a network node. We set
out this proportion no greater than 50%, which can be easily
adjusted by SPs based on their service strategies. It means
that a single worker node can handle a maximum of 50%
FIGURE 1. Kubernetes-enabled parked vehicle edge computing
architecture. proportion of task replicas. This setting is aimed at failover
negotiation and load balancing. In the first sense, it might
look simple, but the online task offloading problem at the
edge computing with proximity to end-users is indispensably edge itself is challenging with several constraints, and now
a sound solution for this problem. Indeed, the coexistence its complexity is increased considerably with this constraint.
of the cloud and edge computing paradigm is among the In our article, we propose EdgePV, a novel collaborative
most dominant task-offloading schemes in practice [6], and framework where PVs increase the computing capacity
incoming tasks, in reality, are not always latency-sensitive. of the existing Cloud and Edge infrastructure to manage
As such, tasks with sensitive-latency tolerance are most likely the online containerized tasks during peak business hours
to be processed at the edge, while those with insensitive at the network edge. A containerized task is abstracted
latency can be managed at both the cloud and edge network. as a set of replicas operating on several containers in a
However, networks can quickly become congested when containerization environment. Scheduling several replicas of
tasks dramatically increase in peak hours. We need an an online task in the collaborative framework efficiently
effective solution to solve this problem. while meeting rigid resource constraints (e.g., latency-
The idle computation resources of parked vehicles (PVs) sensitive) remains a critical challenge. We formulate the
could be an ideal candidate for multi-access computing, task offloading problem on the collaborative paradigm as
where the typical computational and storage services com- Binary Integer Programming (BIP), focusing on minimizing
monly handled by the core cloud can be moved to the network offloading cost while maximizing the cumulative rewards.
edge. Due to the emergence of PVs, the capacity at the We then propose a meta-heuristic Genetic Algorithm (GA)
network edge can now be extended. Despite this obvious to deal with time complexity and scalability problems of
advantage, a collaboration among cloud, edge, and PVs BIP regarding low offloading costs and guaranteed reliability.
would escalate the task offloading problems by efficiently In addition, PV owners who are willing to share their idle
allocating proper network resources to arrived online tasks. resources can accumulate incentives, which can be converted
In addition to this, the parking duration of PVs is inconstant, into membership, gift cards, promotion vouchers, parking
making the potential PVs unreliable to host applications or tickets, gas credits, and so on.
services. Therefore, a novel network architecture is designed Our proposed solution studies the feasibility of integrating
to resolve those aforesaid problems. the core cloud computing, the edge computing, and PVs in
In fact, deploying a generic container orchestration to a unified computing infrastructure. As illustrated in Fig. 1,
edge computing assisted by PVs has been hitherto in we suggest the Kubernetes master node be placed at the edge
infancy. The de facto industrial standard framework for server. In contrast, core cloud, available computing resources
container orchestration such as Kubernetes allows PVs to of edge server, and all PVs can be considered as worker nodes.
simultaneously and efficiently handle several task replicas, In this paradigm, PVs are installed a lightweight version
enabling quick boot-up, autoscaling, self-healing features, of Kubernetes (e.g., K3S [8]), implemented as preamble
and rapid termination. The appealing features can be capable network nodes due to their uncertain parking duration. All
of addressing the uncertain parking duration of PVs. It also network components can automate the container deployment

2 VOLUME 10, 2022


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

to handle online task requests. Thanks to the high availability The contents of our paper are divided into the following
of cloud and edge nodes, when a task arrives at the edge sections. Section II presents the related work while the
server, all the replicas of a task can be scheduled at the formulated offloading problem is introduced in Section III.
same cloud or edge node or both. They can also be allocated Section IV proposes the GA algorithm based on the
on distributed PVs to exploit the idle resources of PVs to problem formulation. Thereupon, the simulation evaluation
save the network cost, reducing heavy workloads in the core is demonstrated in Section VI. Finally, Section VII concludes
networks during peak hours. All master nodes and their our work.
worker nodes form a container orchestration cluster where the
control plane in the master node is responsible for managing II. RELATED WORK
worker nodes and pods in the cluster, monitoring the state of PVs as infrastructure has currently attracted a lot of research
the cluster, and making global decisions towards the cluster attention as they enable the existing computation paradigm
such as scheduling, scaling. The master node’s scheduler for computation, communication, and storage (CCS) to be
carries out pod placements on a set of available worker expanded. Deploying PVs as vehicular cloud computing
nodes. When an online task with rigid constraints (e.g., CPU, on the Internet of Vehicles has been well investigated
BW, latency tolerance, replicas) arrives in the master node, in [10]–[16].
a kube-scheduler in the control plane of the master node Arif et al. in [10] presented a basic model of a vehicular
will create pods and then assign the worker nodes for them cloud (VC) assisted by PVs in a specific international airport.
to run on. Interested readers can find more details about In contrast, He et al. in [11] proposed a multilayered vehicular
Kubernetes in [9]. Our proposed collaborative paradigm cloud infrastructure that was relied upon the cloud computing
strives for improve the elasticity and agility of the existing and Internet of Thing (IoT) technologies. The paper was
computing infrastructure to minimize the service disruptions based on the prediction of the parking occupancy in order
caused by the unforeseen mobility of PVs. In fact, all PVs to schedule the network resources, and eventually allocated
would be completely electric-based, which could be enabled the computational tasks. The smart parking and vehicular
the automatically-charging feature during their parking in data-mining services in IoT environment, were investigated.
near future. Additionally, this paper considers a generic Likewise, establishing a VC built-in PVs as spatial-temporal
scenario in which one base station covers all PVs within its network infrastructure for CCS in a parking lot was studied
coverage of a parking lot. Thus, the control signalling (e.g., in [12]–[14], [17]. Li et al. in [15] paid attention to
MCS, resource management, QoS, etc.) between BS and PVs the PVs feasibility as a computing framework, and then
over radio interface is neglected for simplification. presented an incentive mechanism considering accumulated
Our contributions are summarized below: rewards of PVs when they were trading their resources.
• We propose a collaborative computation paradigm Furthermore, Hou et al. in [18] introduced a vehicular fog
integrating the core cloud, edge, and PVs into a computing (VFC) paradigm exploiting the connected PVs as
consolidated architecture to address the online task the infrastructure at the edge to process real time network
offloading problem in peak hours. services. In a similar approach, the authors in [19] considered
• A container orchestration framework relied upon the a fog computing infrastructure implemented on the Internet
Kubernetes platform is leveraged to deal with the of Vehicle (IoV) systems to offer the computing resources
uncertain parking duration of PVs. Kubernetes platform to end-users concerning latency constraint. This proposed
provides non-disruptive services and minimizes the architecture allowed the network traffic to be offloaded in real
possible service interruptions due to an advanced self- time to the fog-based IoV systems subject to optimizing the
healing feature. When a worker node is out-of-service, average response time.
Kubernetes reallocates the corresponding task replicas Moreover, Parked Vehicle Edge Computing (PVEC),
into another active node automatically. in which PVs were recognized as accessible edge computa-
• We also formulate the online task offloading model as tion nodes to address the task allocation problem, has been
a BIP problem to minimize the offloading cost whilst researched in [1], [3], [20]. Huang et al. in [1] exploited
optimizing cumulative rewards of PVs by selling their the possible opportunistic resources for allocating the com-
idle resources considering the replicas as a constraint. putational tasks in a collaborative architecture consisting of
• A meta-heuristic algorithm that relied on a Genetic vehicle edge computing (VEC) server and PVs. The authors
Algorithm, namely EdgeGA, is proposed to deal with addressed the optimization problem of user payments by
the time complexity as well as scalability problems of relaxing budget or latency constraints, which resulted in
BIP. We simulate the task-offloading model in respect suboptimal solutions for the proposed scheme. In similarity,
to the random mobility behaviors of PVs under dynamic the paper [3] suggested a dynamic pricing approach to reduce
and arbitrary task arrivals. EdgeGA is compared with the average cost whilst satisfying QoS constraints. This
existing heuristics to prove its efficiency on different strategy calibrate the price constantly following the current
generic parking lots in performance simulation. We also system state. Besides, a containerized task scheduling scheme
propose a distributed parallel scheme for running the assisted by PVEC was presented in [20] considering the
EdgeGA algorithm to reduce the execution time. formulated social welfare optimization for users and PVs at

VOLUME 10, 2022 3


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

the same time. Raza et al. in [16] studied a vehicle-assisted TABLE 1. List of acronyms and notations.
MEC architecture combining the core cloud, MEC, and
mobile volunteering vehicles (e.g., buses) to deal with IoT
devices’ task requests. The paper [16], at the first look,
is analogous to the idea of our paper. Nevertheless, this
research mainly targets the online task offloading problems
in a container-based computation paradigm regarding the
allocation problem for the set of online task replicas of
a given VNR. Our solution takes both average network
cost and accumulating incentives gained by selling idle
computation resources of PVs into account. Moreover, PVs in
the EdgePV framework would be more common and reliable
than buses, thanks to their popularity and less mobility. Our
previous work in [7] addressed the online task offloading
problem in a collaborative architecture, but [7] allowed to
allocate all task replicas in to the same node, exposing to
another fatal problem that the vehicle that is hosting all
replicas of a task suddenly leaves from the parking lot, the
service provisioning will be interrupted. Additionally, [7]
proposed a simple heuristic algorithm, named M&M, which
ranked the worker nodes based on the offloading cost and
revenue. The transmission data rate was randomly assigned
instead of being dependent on the distance between PVs
and BS. [7] only considered a medium size of the parking
lot.
This paper is an extension of [21] where we expand the
related work by analyzing more relevant papers with their
specific strengths and limitations. To provide the illustration
of our solutions, we depict an example of a simple task
offloading scenario and then provide pseudocode for all algo-
rithms. Furthermore, we extend to double the size of a generic
parking lot in [21], compare the offloading performance
on different sizes and then quantify them accordingly. This
makes our proposed solution more comprehensive. We also
carry out a further analysis on the achieved simulation
results, and make an execution-time comparison of EgdePV
algorithm adopted in both sequential and parallel manners
to demonstrate the efficiency of the proposed distributed
parallel deployment.

III. PROBLEM FORMULATION


In this paper, we formulate the task offloading problem
considering multiple resource constraints at the network
edge, where the scheduler of the container orchestration is
located in the edge server deployed in a 5G base station (BS).

A. OFFLOADING MODEL
We investigate CPU, memory, and bandwidth resources in the network under a containerized cluster in this paper can be
online task offloading problems. There are various types of quickly realized as a star topology in which the root and
worker nodes, comprising the core cloud, edge server, and its leaves are the master node and several worker nodes,
PVs, where they are connected to a master node located in respectively. Fig. 1 illustrates a generic outdoor parking lot.
the edge server through separate connections. For instance, PVs need to register all vehicular information such as owner’s
the link between the master node and the core cloud is optical, ID, available parking time, license plate, available resources
whereas PVs connect to the edge server through wireless links (e.g., computing capacity, storage) with their corresponding
in which the available bandwidth are primarily dependent SPs. Then, they could keep their current vehicular status
on the distances between PVs and BS. Therefore, the edge updated by sending these information to their corresponding

4 VOLUME 10, 2022


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

B. SYSTEM MODEL
In this paper, it is assumed that the latency is inversely
proportional to the remaining capacity as the consequence of
the M/M/1 queuing model.

1) CORE NETWORK OFFLOADING LATENCY


Network latency can be associated with data transmission
and task execution time. The formal is highly correlated
with the residual bandwidth of the link lc , whereas the latter
is based upon the availability of the core cloud to process
the offloaded tasks and/or other network services. Hence,
more tasks allocated to the remote cloud through lc with less
FIGURE 2. An example of task scheduling. residual resources can dramatically raise the network latency,
intensely in peak hours. In practise, the latency involved
master node as well as the registered SPs whenever PVs either in accessing or writing on data volumes from/to memory
arrive at a parking lot or accomplish processing the allocated can be overlooked thanks to the advanced technologies in
tasks. datacenters. The offloading latency tc (lc ) to the cloud is
As a result, the edge network can be modeled as a directed consisted of the transmission, and the processing latency that
graph G = (N , L) in which N is the set of worker nodes can be calculated as follows:
while L is the set of links. For instance, Fig. 2 depicts χkj χkj fkj d
tc (k) = max { + c + + Th } ≤ tm (k) (3)
different tasks with a set of required resources such as j≤|ð(k)| ξc RC (ni ) v
CPU, memory, bandwidth, the number of replicas, task
where ξc and fk respectively denote the transmission rate of
duration, and eventually latency requirements. Accordingly,
the server and CPU cycles utilizing for computing data per
task 1 associated with replicas could be allocated to any
bit.
types of network nodes (e.g., core cloud, edge server, or PVs)
Consequently, the total amount of CPUs required for
due to the insensitive latency demand. However, task 2 with
serving the task k are described as χk .fk . d, v, and Th denote
sensitive latency demand cannot be offloaded to the core
the cloud-to-edge distance, the light speed, and the constant
cloud, even it acquires less computing resources. In case
time of processing a given task, respectively. Online tasks
PVs are selected for offloading, a PV is merely permitted
arrive at the edge through wired or wireless links (BS) in
to process maximum three replicas to guarantee the service
which the master node and edge server are resided. The
reliability. In addition, the edge server and the core cloud are
management and control of the offloaded tasks at the edge
connected through an optical connection, whereas PVs link
can be indeed concerned as a local process; thus, the latency
to their network edge through wireless connections, denoted
is primarily associated with the remaining computation
as lc and lv respectively. In fact, a single worker node is able
capacity to host the task. Latency produced by processing
to initialize several pods to handle multiple containers of the χk
corresponding task replicas simultaneously. An online task k a task is described as Th = , in which the discount
Be ν
in our model demands CPU c(k), Memory m(k), Bandwidth factor is ν that reflects the bandwidth fluctuations at the edge
b(k), tolerable latency tm (k) and a number of replicas ð(k) (0 < ν < 1). Then, the offloading latency te (k) at the edge is
requirements.
P kj represents the jth replica of the task k ∈ K , calculated as follows:
and kj = ð(k). Additionally, a worker node ni ∈ N
χkj fkj
possesses a resource capacity to serve a limited number te (k) = max { e + Th } ≤ tm (k) (4)
of containers. An ith worker node has CPU and memory j∈ð(k) RC (ni )
capacity, denoted as C(ni ) and M (ni ), respectively. Besides,
2) PVs LATENCY
Kc , Ke , Kp and K are sets of tasks that are successfully
offloaded to the cloud, edge, PVs and the whole network As opposed to the stable cloud or edge node, PVs can be
respectively, so K = Kc + Ke + Kp . Consequently, the defined as preemptible worker nodes for the sake of the
residual CPU and memory capacity of each worker node can erratic mobility. Thus, containerized task replicas would be
be calculated as follows: a potential solution to diminish any service disruptions. It is
X X assumed that the master node has enough time to reallocate
RuC (ni ) = C(ni ) − c(kj ), ∀ni ∈ N (1) the currently affected task replicas to other available nodes
k∈K j∈ð(k) to maintain the QoS. In general, task replicas are allowed
X X to reliably operate under a load-balancing mode in a normal
RuM (ni ) = M (ni ) − m(kj ), ∀ni ∈ N (2)
state to enhance availability, performance efficiency, and
k∈K j∈ð(k)
reliability.
where u represents the worker nodes (Cloud: c, Edge: e, PVs: In this research, we basically leverage the LTE-A wireless
p ∈ P), so |N | = 2 + |P|. connections between BS and PVs in respect to orthogonal

VOLUME 10, 2022 5


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

WM m(kj )
frequency-division multiple access (OFDMA) scheme as 4Me (kj ) = X eX (12)
similar to [16]. dbs,p denotes the geographical distance Me − m(kj0 ) + δ
between the BS and the pth PV. The path loss between them is k 0 ∈Ke j≤|ð(k 0 )|
−σ
defined by dbs,p and white Gaussian noise power N0 , in which Total offloading cost of a task replica at the edge is:
σ factor expresses the path loss exponent. Hence, the wireless
channel can be modeled as a frequency-flat block-fading 4ekj = 4Ce (kj ) + 4Me (kj ) (13)
Rayleigh fading channel, denoted as h. As a result, the data Similarly, the cost of offloading a task replica and the energy
rate of pth PV is calculated as: consumption at a parked vehicle is formulated as follows:
PTX .dbs,p
−σ
.|h2 | WCp χkj fkj
ξp = Bp log2 (1 + ) (5) 4Cp (kj ) = (14)
N0 + I Cp −
X X
c(kj0 ) + δ
where Bp , PTX and I are the channel bandwidth, transmission k 0 ∈Kp j≤|ð(k 0 )|
power of BS, and inter-cell interference, respectively. The WMp m(kj )
offloading latency of PVs tp (k) is then formulated as follows: 4Mp (kj ) = X X (15)
Mp − m(kj0 ) + δ
χkj χkj fkj
tp (k) = max { + p + Th } ≤ tm (k) (6) k 0 ∈Kp j≤|ð(k 0 )|
j∈ð(k) E[ξp ] RC (ni ) χkj
We also investigate the offloading efficiency of two 4Bp (kj ) = WBp , ∀k ∈ Kp (16)
tm (kj )ξp ;
types of online containerized tasks, comprising the Ep (kj ) = χkj fkj ep (17)
latency-sensitive and latency-insensitive akin to [5]. While
the latency-sensitive tasks are merely allocated to the edge where ep is a coefficient, which is attained by:
nodes (e.g., edge server, PVs) due to their closest proximity, p
ep = (RC (ni ))2 (18)
but the latency-insensitive tasks are processed at any worker
nodes such as the remote cloud, edge server, or PVs. Then, where  denotes an energy coefficient.
we compute the cost for offloading an online task replica in Hereafter, total offloading cost of each task replica kj at a
our proposed collaborative computing architecture, which is parked vehicle is:
associated with the sum of total CPUs, memory, bandwidth, p
4kj = 4Cp (kj ) + 4Mp (kj ) + 4Bp (kj ) + ς Ep (kj ); (19)
and energy consumption for processing task replicas at PVs.
In fact, the remote cloud and edge server achieve a high where ς is an energy cost coefficient.
energy efficiency, so we do not consider this attribute in their
costs. The offloading cost at the core cloud is defined as 3) PVs’ UTILITY
follows: Owners of PVs are encouraged to share the idle computa-
WCc χkj fkj tional resources while parking in parking lots, so they can
4Cc (kj ) = X X (7) gain accumulative rewards by hosting the task replicas in their
Cc − c(kj0 ) + δ
vehicles. ϕ p is the rewards by processing a task replica at a
k 0 ∈Kc j≤|ð(k 0 )|
parked vehicle p. Thus, the corresponding utility is defined as
WM m(kj )
4Mc (kj ) = X cX (8) follows:
Mc − m(kj0 ) + δ
$ p = ϕ p − ρEp (kj ) (20)
k 0 ∈Kc j≤|ð(k 0 )|
χ (k )
WBc tm (kjj ) where ρ denotes a coefficient of energy price, and ϕ p is
4Bc (kj ) = (9) presented as:
X X χ (kj0 )
Bc − +δ ϕ p = µrpc χkj fkj + rpm m(kj ) (21)
tm (kj0 )
k 0 ∈Kc j≤|ð(k 0 )|
where rpc and rpm are the unit prices of CPU and memory,
where δ is a small positive number to prevent dividing by respectively. We can see that minimizing the offloading cost
zero. The offloading cost for processing a task replica at the can directly maximize the profits gained by hosting the
core cloud is: according task replicas at PVs.
4ckj = 4Cc (kj ) + 4Mc (kj ) + 4Bc (kj ) (10) Variables:
(
As we discussed earlier, when a task is handled at the 1, kj deployed on cloud, ∀j ≤ |ð(k)|.
Ackj =
edge server, this is widely recognized as a local processing. 0, otherwise.
Therefore, the offloading cost at the edge server is computed
(
1, kj deployed on edge, ∀j ≤ |ð(k)|.
as follows: Aekj =
WCe χkj fkj 0, otherwise.
4Ce (kj ) = (11) (
1, kj deployed on a PV, ∀j ≤ |ð(k)|.
X X
Ce − c(kj0 ) + δ p
Akj =
k 0 ∈Ke j≤|ð(k 0 )| 0, otherwise.

6 VOLUME 10, 2022


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

Objective: constraints is not an easy task, meta-heuristics have been


X p successfully employed in various applications from different
Minimize 4ckj Ackj + 4ekj Aekj + (η4kj fields, including operation research, industrial engineering to
j≤|ðk| management science. Evolutionary Computation (EC) that
1 p imitates the natural selection evolutionary concepts includes
+ (1 − η) )Akj
$ p meta-heuristic algorithms to search for globally optimal
p
w.r.t Ackj , Aekj , Akj (22) solutions. EC techniques provide flexibility, adaptability, and
importantly an exceptional performance. They are effective
Constraints: when the search space is huge with a large number of involved
p
X
Ackj + Aekj + Akj = 1, j ≤ |ð(k)| (23) parameters.
p∈N In fact, a mature meta-heuristic GA algorithm is inspired
X p by the Darwin’s theory of evolution principle through the
1≤ Akj ≤ α ∗ |ð(k)| (24) natural selection, has been the most common population-
j≤|ð(k)|
based meta-heuristic algorithms. GA is able to solve either
c|e|p
X
Akj ∗ c(kj ) ≤ Cc|e|p (25) linear or non-linear programming optimization problems
j≤|ð(k)| with multiple objectives. Thanks to the simpleness and
X c|e|p
Akj ∗ m(kj ) ≤ Mc|e|p (26) straightforward deployments, GA is fast. It has been proved
to be more efficient than many conventional heuristic
j≤|ð(k)|
X algorithms by efficiently maintaining a balance between
Ackj ∗ b(kj ) ≤ Bc (27) exploration and exploitation through a proper set of param-
j≤|ð(k)| eters. GA is recently recognized as a scalable alternative
b(kj ) ≤ ξni , ∀ni ∈ N (28) to reinforcement learning (RL) algorithm [23], [24] over
c|e|p
X
Akj ∗ tc|e|p ≤ tm (k) (29) competitive performance outcomes. Although GA cannot
j≤|ð(k)| always perform better than RL as shown in [25] and [26],
GA algorithm is indeed faster than the counterpart since
Remarks: GA exposes its greater scalability as well as parallelism.
• Function (22) focuses on dual optimization objectives: Furthermore, GA is practically acknowledged as a parallel
optimizing both the offloading cost as well as PVs’ search [27] with mutual independency amongst multiple
rewards on which the task replicas are offloaded to PVs exclusively feasible solutions.
where η denotes a damping factor within (0,1). GA can produce a set of ‘‘good’’ solutions in lieu of a single
• Constraint (23) makes sure that each task replica is solution. Those are able to be evolved over several iterations,
merely processed at a single worker node. driven by an efficient fitness function. A typical Genetic
• Constraint (24) determines that the proportion of task Algorithm comprises four major operations: population
replicas offloaded to a PV cannot exceed 50% due to initialization, selection, crossover, and mutation [28]. At each
their uncertain mobility. iteration, GA chooses two individuals randomly in the
• Constraints (25),(26), (27), and (28) guarantee that the generated population as parents to create their children (also
remaining capacity of the worker nodes (e.g., Cloud, known as offspring) for the next generation. As the nature
Edge, PVs) must satisfy the rigid task demands. of the selection process, if good parents are selected for
• Constraint (29) eventually guarantees the chosen worker generating new generations, their offspring is most likely to
nodes must meet the latency constraint. be good through good characteristics inherited from their
parents. This somehow guarantees good solutions produced.
IV. OUR PROPOSED GENETIC ALGORITHM Over generated generations, the population can eventually
A. BACKGROUND OF METAHEURISTIC ALGORITHMS get evolved, so that the chance of approaching an optimal
An optimization process is technically a kind of process that solution is remarkably increased. In detail, GA is first gen-
approaches better and better solutions by searching and com- erating an initial population randomly. Each feasible solution
paring feasible ones until it cannot achieve a better result [22]. in the population, widely acknowledged as a chromosome,
Nevertheless, the major optimization aim is to come up with will quantify its quality by the fitness function. Then, two
an optimal solution meeting a set of predefined objectives, chromosomes are deterministically or randomly chosen as
and is expected to conciliate multiple stringent constraints. parental individuals in the selection operator. Accordingly,
Meta-heuristics include a set of optimization techniques that these chromosomes enable the production of their offspring
efficiently discover feasible solutions, aiming at achieving by interchanging the partial genes at a random point, widely
the global optimum. Intrinsically, meta-heuristic algorithms known as crossover operation. The next operation is called
carry out diversified variation operations to explore new mutation that applies a small random tweak to a chromosome,
potential solutions effectively, and their multi-objective fit- deployed on a randomly selected chromosome with a random
ness function will drive such potentials to the optimum. Even position to produce a new solution. Moreover, the mutation
designing an efficient algorithm satisfying several desired is expected to consider an exploration on the searching space

VOLUME 10, 2022 7


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

j c,f e,f 1,f |P|,f


gf = Akj Akj Akj · · · Akj . If G is the number of genes,
so G = |ð(k)|. The evolutionary process is started with
M chromosomes, so that the initial population is created as
follows:
  g1 j
· · · gG


C1 1 ··· g1 1
j
 C2   g12 ··· g2 · · · gG 
2 
. . .
 
 ..   .. ..

. .. .. 

 .    . . .
P=  =  g1 (30)

j
 
G
 Cf   f · · · g f · · · g f 

 ..   .

.. .. .. .. 
 .   . . .
. . . 
CM 1 j
gM · · · gM · · · gG M
In fact, the chromosome, formed by G genes passing a
feasibility check, is established as a feasible offloading
solution for a given task with a set of requested replicas.

b: SELECTION
The selection operation essentially determines which chro-
mosomes to become parents for the crossover operator.
To enhance the parallelism, parents can be randomly selected
FIGURE 3. Distributed and parallel GA-based implementation. from the initial population with a replacement. Due to
the nature of randomness, the quality of children, that are
while retaining the population diversity. GA operations can generated in the crossover operator, can be better or even
be rerun until the pre-defined stopping condition is met (e.g., worse than their parents. In theory, there are several selection
several iterations). Lately, parallel computing is a promising strategies, but the fitness-based proportionate designation
paradigm to efficiently deal with the complicated problems relied upon the accumulative sum of fitness-relative weights
with huge time-saving and lower cost guarantees by enabling is usually preferable in this operator.
concurrency.
c: FITNESS FUNCTION
B. DISTRIBUTED PARALLEL GENETIC ALGORITHM The major goals of our proposed algorithm include optimiz-
For solving the BIP problem, a distributed and paral- ing the cost of offloading online tasks and maximizing the
lel GA-based algorithm running on multiple independent user rewards when the task is offloaded to PVs. To achieve
machines, denoted as V is proposed to discover the search these dual objectives, fitness function is utilized to evaluate
space in this paper. The operational implementation of our the quality of an offloading solution, and a better solution
proposed algorithm is demonstrated in Fig 3 where |V | is could produce higher fitness values in this paper.
defined to be 16. The working scheme includes a master X 1 1 e 1 p p
node that primarily plays a synchronization role, and several F(k) = Ac + A + ((1 − η) p + ηϕk )Akj
4ck kj 4ek kj 4 k
distributed slave nodes exploring as many feasible solutions j∈ðk
as they can. At each slave node, GA iteratively deploys its (31)
few operators to seek for the feasible offloading solution.
The best outcomes based on the fitness values among the 2) THE PROCESSES OF EVOLUTION
distributed parallel nodes are then synchronized by the master After initial population is generated in the initialization
node to identify the optimal offloading solution for the given operator, two chromosomes are selected in random to be
task. Our proposed algorithm in this research is permitted to parents. Then, new generations are formed by an evolutionary
offload multiple task replicas at once rather than allocating process including the crossover and mutation operators.
each replica sequentially. To maintain the population diversity, the newly generated
generations are updated into the existing population. This
1) GENETIC REPRESENTATION AND SELECTION strategy is able to improve the opportunity to obtain
a: CHROMOSOME near-optimal task offloading solutions.
GA’s a chromosome denoted as Cf in this paper indicates
a solution for the whole set of requested replicas of a a: CROSSOVER
given task, which is randomly chosen from the available This is considered as the most vital operator to create new
worker nodes meeting task resource demands. Hence, offspring by stitching up the parental chromosomes in GA.
each gene within the chromosome represents an offloading Suppose Cs and Cr are two parental chromosomes that
solution for a single task replica, which is described as have particular indexes s and r in the initial population.

8 VOLUME 10, 2022


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

Denote jc as a random crossover point within N length, Algorithm 1 EdgeGA - An Intelligent GA-Based Algorithm
their corresponding descendants are C(M +1) and C(M +2) . 1: Input:
By exchanging genes beginning from the crossover point 2: An online task k with five tuples
jc + 1 to the last gene between the parents, new generations {c(k), m(k), b(k), tm (k), ð(k)}
are generated as below: 3: Output:
 1 jc jc +1  4: A list of worker nodes hosting task replicas.
C1
  g1 · · · g1 g1 · · · gG
1 5: procedure task offloading
 . .. . .. ..
 ..   .. . .. . F Step 1: Generate a list ζk of node candidates

 .   . 
including cloud, edge, PVs

jc jc +1
 Cs   g1s · · · gs gs G
· · · gs 
   
6: function GET_CANDIDATES (k)
 ..   .. .. . .. ..
. ..
   
 .   . . . empty ζk

 7:
c c
P =  Cr  =  g1 j j +1  (32) for all ni ∈ N do
   
  r · · · gr gr · · · gG
r 
8:
 .   . .. . .. .. if RuC (ni ) ≥ c(k), RuM (ni ) ≥ m(k),

 ..   .. . .. .

   . 
 RuB (ni ) or ξp ≥ b(k) then
 CM   1 jc c
j +1
 g · · · gM gM G
· · · gM  add ni to ζk

9:
 CM +1   M1

j c c
j +1
· · · gG end for

 gs · · · gs gr 10:
r

CM +2 jc jc +1 11: return ζk
g1r · · · gr gs · · · gG
s
if ( none of worker nodes are available) then
b: MUTATION 12: reject the task k
This operator applies a small modification on the current 13: end function
parent to form new offspring/chromosome. The mutation FStep 2: Deploy Genetic Algorithm in a distributed
stage allows to sample the large search space, improving parallel operation scheme
the searching efficiency. This operation is widely known 14: call Algorithm 2
a primary component in the evolutionary process, which FStep 3: Synchronize all incumbents obtained in
prevents potential solutions from falling into the local optima. independent working machines
Technically, a new gene selected randomly replaces an 15: Choose the best solution relied upon the sum of
existing one within one of children produced in Crossover fitness values (E.q. 31)
operator to create a new offspring. The gene must inevitably 16: return the list of worker nodes ni ∈ N
satisfy the resource demands to survive through the feasibility F Step 4: Update SN resources
17: end procedure
check. If both children are infeasible in Crossover, one
of parents chosen in random is then used form mutation.
j
Suppose jm is a random mutation point and gr 0 is a new solutions obtained within t times. Eventually, the feasible
gene that substitutes the existing gene within C(M +1) . Conse- solutions found from several slave machines is finalized
quently, new offspring generated from the mutation stage is through a synchronization in order to choose the optimal
jm
C 0 (M +1) = [g1s · · · gr 0 · · · gG
s ]. offloading solution relied upon fitness values. If accepted,
To maintain a balance between exploitation and explo- task replicas of the given task are then allocated to the worker
ration in GA algorithm, crossover rate pc is typically set nodes following the information of the achieved offloading
higher than mutation rate pm . Determining pm is never solution. SN eventually updates the rnetwork resources to
an easy task since small mutation rate leads to premature finish the offloading processes.
convergence while high mutation rate could improve the The technical details of the proposed GA-based algorithm
exploration process in the search space, but this selection are provided in Algorithm 1 and 2. When an online task
might prevent GA algorithm from converging to optimal including a number of required replicas arrives, the algorithm
solution. By preferring high efficiency of GA while keeping creates a list of potential node candidates which must meet
a trade-off between exploitation and exploration, we set resource requirements of the given task demands (e.g.,
pc = 0.9 and pm = 0.2 in this paper. CPU, memory, bandwidth, delay) as shown in lines [6-13].
GA is then implemented in a single working machine in a
3) TERMINATIONS AND SYNCHRONIZATION distributed parallel operation scheme in order to seek the best
Parallel processing is associated with multiple concurrent offloading solution for a given task by calling Algorithm 2
processes in which each process might accomplish its in line 14. Lines [15-16] are the synchronization process that
assignment at a different time. Unfortunately, waiting for selects the optimal offloading solution among the outcomes
all tasks to completely finish their assigned jobs is painful of the parallel machines, and eventually updating the network
due to the fact that one or more tasks might take too much information status in Step 4. In terms of Algorithm 2,
time for processing (e.g., deadlock). Thus, to reduce the lines [4-13] are associated with population initialization
overall execution time, the master node will terminate GA where each chromosome is randomly generated from the list
algorithms running at worker nodes if there is no better of node candidates. By selecting parents from the population

VOLUME 10, 2022 9


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

Algorithm 2 GA Runs at Each Paralleled Machine chromosome (e.g., due to network congestion), this will
1: Input: ζk become the final offloading solution in line 34; otherwise,
2: Output: The best offloading solution for the task k the task will be rejected in line 36.
3: procedure Genetic Algorithm operations
FInitial Population Generation C. EXECUTION TIME ANALYSIS
4: r =0
5: for m = 1 to M do Due to the lower cost of computing hardware recently,
F Generate a chromosome with |ð(k)| genes. Each parallel algorithms can be beneficially exploited to tackle
j intricate computational tasks. As a result, we advise a
gene is a task offloading solution for a replica gf =
c,f e,f 1,f |P|,f
Ak j Ak j A k j · · · Ak j distributed parallel GA framework to deal with the online
6: for n = 1 to |ð(k)| do task offloading problem. In this paper, the execution time
F Try to map a task replica to a randomly selected of the proposed task offloading solution is measured in two
worker node in ζk with up to Q trials manners: sequential and parallel modes. In sequential mode,
7: for q = 1 to Q do the time complexity follows a linear increase as we can see
8: Map a replica to a randomly selected that the execution time is the sum of the operation time
worker node in ζk
9: if feasible goto 11 of all working machines. However, the total execution time
10: end for of the parallel mode is estimated at the latest machine that
11: end for finishes its offloading assignment. The time complexity of
12: r = r + 1 and add the chromosome to population GA algorithm at each machine is roughly O(G × M ×
13: end for maxIterations). In fact, the representation of GA algorithm
FEvolution process
14: if r > 1 then at each working machine is not static, which is depended
15: for p = 1 to maxIterations do on the number of replicas of a given task. In addition,
16: if ranNum ∈ (0, 1) < pc then we cannot always guarantee to achieve M chromosomes when
17: Select two parents in random the SN becomes increasingly congested. In GA algorithm,
18: Conduct crossover operation the iteration process is terminated earlier if the best fitness
19: if both parents are feasible then
20: One of children is randomly chosen value does not change for t consecutive iterations. It would
for mutation be better to measure the time complexity by measuring the
21: r = r + 2 and add them to population average runtime and to indicate how the parallel manner is
22: else enhanced in a comparison with sequential one.
23: if only a child is feasible Similar to [29], we apply Cramer-Chernoff technique and
24: r = r + 1 and add the one to
population Jensen’s inequality to provide a reasonable approximation
25: if ranNum ∈ (0, 1) < pm then to the total execution time of the parallel mode. Hence, our
26: if both children in crossover are distributed parallel offloading framework is able to indeed
infeasible then enhance the time complexity from linear to logarithmic scale
27: One parent is randomly selected for subject to |V |. Interested readers may refer to [29] further
mutation
28: end if theoretical analysis.
29: Conduct Mutation operation
30: if new mutated child is feasible then V. COMPARED ALGORITHMS
31: r = r + 1 and add the one to We evaluate the efficiency of not only our proposed
population collaborative framework compared with conventional com-
32: end for
33: if r > M eliminate chromosomes produced lower puting paradigms including cloud and edge computing, but
fitness values also our GA-based algorithm in a comparison with some
34: else if r = 1 then the current offloading solution will heuristic algorithms, comprising Baseline_1, Baseline_2, and
be final. Baseline_3 towards the acceptance ratio, offloading cost,
35: else and utility. Baseline_1 is considered as a Kubernetes default
36: reject the task k
37: end if scheduler applying the filtering and then scoring algorithms,
38: end procedure whereas Baseline_2 processes the task replicas by randomly
selecting the worker nodes. In contrast, Baseline_3 deploys a
branch and bound strategy to tackle the given task with a set
in random as shown in line 17, we try to balance the of replicas sequentially [30]. Different from these heuristics,
exploration and exploitation. Lines [14-37] involve the GA’s our proposed GA-based solution enables a set of all task
evolution operations by exploring the searching space. The replicas to be processed at once. To remain the service
Crossover operator is conducted in lines [16-24], whereas stability and reliability, a proportional number of replicas can
lines [25-31] are the Mutation operator. Line 33 is to maintain be solely offloaded to a single worker node, which cannot
the elite population by eliminating the chromosomes produc- exceed 50% (except cloud and edge nodes). Indeed, SPs is
ing the lowest fitness values to remain the population at most able to easily adjust this parameter to meet their specific
M chromosomes. In case that we only achieve one feasible goals (e.g., in network congestion). Several performance

10 VOLUME 10, 2022


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

FIGURE 4. (a) Acceptance ratio (b) Average costs between architectures.

FIGURE 5. (a) Average offloading cost (b) Average utility.

evaluation metrics including average task acceptance ratios,


average offloading costs, and average accumulated utility
are conducted to assess the efficiency amongst the com-
pared algorithms. Besides, we extend our assessment by
comparing our proposed GA-based algorithm on different
parking-lot sizes: 50 and 100 parking spots according to
small and medium ones. The offloading results of them
are crucial to determine the possible network strategies
towards SPs in order to guarantee QoS or Key Performance
Indicators (KPIs).

VI. NUMERICAL RESULTS


A. SIMULATION SETUP
In this paper, we have evaluated the algorithms by developing FIGURE 6. Acceptance ratio towards PV availability.

a discrete event simulator. Vehicles dynamically arrive at and


leave parking lots which have 50 or 100 free parking spots. [08-240] minutes. The service behaviours of PVs in [1] was
In practice the whole parking lot might be fully utilized, but estimated from the real dataset provided by ACT Government
we set out the capacity of the parking lot merely ranging Open Data Portal dataACT. The SmartParking application
from 50% up to 85% in peak hours. It can be argued that not was installed to collect more than 180, 000 parking records
all PVs are ready to share the computation resources while in the Manuka shopping precinct in Canberra, Australia.
parking or meet the essential qualifications to provision the Following these statistics, it is pointed out that more than
network services (e.g., outdated vehicles, lacking computing 85% of PVs approximately spent maximum average 3 hours
capability, running errands). In addition, [1] indicated in the parking lot, and the probability of serviceability of PVs
that the parking duration of PVs is analytically varying gains around 90% at 60 minutes [1]. In this research, the

VOLUME 10, 2022 11


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

TABLE 2. Simulation parameter settings. in comparison with Cloud-Edge and Cloud infrastructures
at the arrival rate of 120, respectively. Additionally, Cloud
or edge infrastructure performed worse due to their limited
computation capacity in peak hours. Moreover, our proposed
collaborative framework significantly saved the offloading
cost when being compared to other infrastructures as
demonstrated in Fig. 4b. These results in Fig. 4a and 4b come
from the facts that the collaborative framework exploited not
only typical cloud and edge computing capacities, but also
those of available PVs, allowing more online task requests
processed. It also deployed the GA-based algorithm for
optimizing the offloading cost, so the proposed infrastructure
produced less cost compared to others. Similarly, cloud-Edge
enabled the computing resources of both cloud and edge
computing, which helps cloud-edge paradigm perform better
than separate cloud or edge paradigm which could only utilize
their own capacity separately. However, due to its merged
computing capacities, cloud-edge framework generated more
offloading cost than others except cloud infrastructure. The
core cloud indeed processed more tasks than the edge
computing due to its larger computation capacity, but it also
bore more cost than the edge.
In Fig. 5a, Baseline_1 performed worst in terms of the
average offloading cost because of its offloading strategies
with a simple heuristic filtering and scoring algorithm.
Baseline_1 first carried out the filtering procedure to select
the feasible nodes that met the task requirements, then
cored them based on their current properties (e.g, computing
parking duration of PVs follows the Poisson distribution with resources). The node with the highest scores that was matched
λ = 3600. The simulation runs for almost 8 hours following the task demands was selected. It did not take any offloading
the common pattern of business working time in peak hours, cost or utility factor into account. In contrast, Baseline_2
and the simulator indeed updates the PV availability for every was based upon the random mechanism for selecting the
20 minutes. As mentioned, the online tasks can be commonly worker nodes and, had a better performance than Baseline_1.
divided into latency-sensitive and latency-insensitive tasks; And, Baseline_2 tended to perform well when the network
thus, when the latency tolerance of a task exceeds 20 ms, was less congested; thus, it had more options for preference.
it is marked as a latency-sensitive request. In this paper, the Baseline_3 was primarily aimed at optimizing the offloading
offloading task requests arrive in the network following the cost, so it performed best amongst the baseline algorithms;
Poisson process with an average rate varying from 10 to and its performance was indeed very comparative to the
120 requests per 100 time units. Each online task request proposed GA-based algorithm.
has an exponentially distributed lifetime with an average Fig. 5a is revealed that EdgeGA’s performance was still
of µ = 1200 time units. These workloads are extremely better than Baseline_3 prior to the arrival rate 80, and
extensive for evaluating the proposed framework as well performed slightly similar afterwards. It is because the online
as the compared algorithms. Besides, energy coefficient , tasks were most likely offloaded to PVs, producing lower
coefficient for energy price ρ and unit price for each offloading cost. Towards utility as depiected in Fig. 5b,
CPU cycle σ are set to 10−24 , 0.003 and 2 × 10−9 [17], EdgeGA defeated the heuristics following Baseline_2, Base-
respectively. Other simulation parameters are detailed in line_3, and Baseline_1, respectively. In fact, EdgeGA took
Table 2. the offloading cost as well as the utility into account, driven
by the efficient fitness function (31), while the baseline
B. PERFORMANCE RESULTS algorithms did not consider utility in their node selection
In terms of the small-size parking lot, evaluation results are strategies. In Fig. 5b, Baseline_2 performed better than
shown in Fig. 4, 5 and 6, whereas those of medium-size EdgeGA before the arrival rate of 40 because Baseline_2
parking lot are illustrated in Fig. 7 and 8. Finally, Fig. 9 had more node options for offloading the tasks when the
depicts the execution time of the proposed GA-based network was less congested; however, starting from the
algorithm measured in different sizes of the parking lots. arrival rate of 40 afterwards, EdgeGA outperformed all
Fig. 4a indicates that our collaborative paradigm remarkably compared algorithms since EdgeGA smartly searched for the
enhanced the average acceptance ratio for more than 40% most appropriate worker nodes that were able to produce

12 VOLUME 10, 2022


K. Nguyen et al.: Parked Vehicles Task Offloading in Edge Computing

FIGURE 7. EdgePV performance on different sizes: (a) Acceptance ratio (b) Average cost (c) Average utility.

less cost while generating highest revenues, especially in


congested environments.
Furthermore, we examined the availability of PVs in relation to the acceptance ratios subject to different arrival rates of online tasks, where each arrival rate demands a different PV availability, as demonstrated in Fig. 6. For precise measurements, we fixed the availability of PVs instead of letting the parking capacity range randomly from 50% up to 85% as described in Section VI-A. These results reveal several useful insights. For instance, the arrival rates 10 to 50 demand 60% availability of PVs to reach an 80% acceptance ratio, whereas the arrival rates 60 and 80 require 80% and 100%, respectively, to gain the same outcome. These evaluations are critical for network planners attaining their expected KPIs through proper strategies. For example, SPs could increase user incentives to attract more PVs into the network (e.g., reaching full capacity), extend the edge server capacity, or offload to another cluster. Depending on the particular situation, SPs can determine which strategy is the most appropriate.
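The following minimal sketch shows how such an experiment can be set up under the workload model of Section VI-A, with Poisson arrivals, exponentially distributed lifetimes, and a PV availability that is fixed per run rather than drawn from the random 50%–85% parking capacity. All identifiers (generate_tasks, available_pvs) and the simulation horizon are illustrative assumptions, not our simulator's actual code.

```python
# Sketch of the simulation setup: online tasks arrive as a Poisson process,
# hold resources for exponentially distributed lifetimes, and the fraction
# of PVs acting as worker nodes is fixed per run. Names are assumptions.
import random

def generate_tasks(arrival_rate: float, horizon: float, mean_lifetime: float = 1200.0):
    """Yield (arrival_time, departure_time) pairs for one simulation run."""
    t = 0.0
    while True:
        # Poisson process: exponential inter-arrival times. The paper's rates
        # are given per 100 time units, hence the / 100.0 scaling.
        t += random.expovariate(arrival_rate / 100.0)
        if t > horizon:
            break
        yield t, t + random.expovariate(1.0 / mean_lifetime)

def available_pvs(parking_spots: int, availability: float):
    """Fix the fraction of PVs acting as worker nodes (e.g., 0.6 for 60%)."""
    return random.sample(range(parking_spots), int(parking_spots * availability))

for arrival, departure in generate_tasks(arrival_rate=40, horizon=10_000):
    pass  # offload each task to a node chosen among available_pvs(50, 0.6), ...
```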
In addition, we assessed our proposed GA-based algorithm on different sizes of the parking lot: a small size with 50 free parking spots (denoted as EdgeGA-50), which was evaluated in the previous section, and a medium one with 100 free parking spots (denoted as EdgeGA-100). Both were run on the same network loads, ranging from 10 to 120 requests per 100 time units as in the previous section. The main reasons for this study are threefold. First, we want to examine the scalability of our proposed algorithm and how well EdgeGA adapts to the increase of the search space. Second, this evaluation can quantify exactly how much gain can be achieved if the capacity of the parking lot is doubled. Third, this study can provide a flexible offloading strategy for SPs, who can statistically determine the proper parking lots on which to host their services. For example, SPs can select a medium-size parking lot at first if they anticipate that the network will quickly become congested with large traffic loads, and then switch to a smaller size in off-peak times, or vice versa, in order to balance QoS against the generated revenues. Additionally, they might decide to choose one medium parking lot instead of two small ones depending on the workloads.

FIGURE 7. EdgePV performance on different sizes: (a) Acceptance ratio (b) Average cost (c) Average utility.

Figs. 7a and 7c depict that both the acceptance ratio and the average utility were improved by up to 24% at the arrival rate of 120 when the parking-lot size was doubled. Additionally, the offloading cost was improved by more than 16% at the same arrival rate, as shown in Fig. 7b. Evidently, increasing the parking lot size offers more options for node selection, raising the possibility of accepting more arrived tasks while maintaining a lower offloading cost. Similarly, the average utility was also enhanced as the workloads increased. However, with the smaller parking lot under low network congestion, EdgeGA rapidly achieved a good average utility, which is consistent with the results in Fig. 7a since EdgeGA had to search a smaller search space. When the network became more and more congested with increasing workloads, specifically after the arrival rate of 50, EdgeGA still proved its efficiency in congested environments. Due to fewer worker nodes for selection, EdgeGA-50 performed worse than EdgeGA-100, which had more available worker nodes. Thanks to these performance outcomes, IPs are able to determine which size of parking lot maintains a balance between the generated revenues and the offloading cost.

FIGURE 8. Acceptance ratio regarding PV availability with 100 PVs.

Likewise, we investigated the relationship between the acceptance ratios and the availability of PVs, as shown in Fig. 8. To produce an 80% acceptance ratio for all arrival rates, 60% availability of PVs was demanded, against 100% for the small-size parking lot.
We also fixed the availability of PVs for precise measurements, as explained in the previous section. In fact, the arrival rates 10, 20, and 30 quickly achieved an 80% acceptance ratio when the availability of PVs was smaller than 20%, whereas the arrival rates 40, 50, and 60 demanded 40% to obtain similar outcomes. Hereafter, the arrival rate 80 required more than 20% to achieve the same result. Compared to the results in Fig. 6, these improvements are reasonable: as the size of the parking lot was expanded, there were more worker nodes hosting the arrived tasks, so the acceptance ratio was improved.

As mentioned in Section IV, we propose a distributed parallel GA-based algorithm for the online task offloading problem in this paper. To measure how much the proposed parallel implementation scheme reduces the execution time, we compared the execution times of our GA-based algorithm running in sequential and parallel manners separately. Because of the diverse workloads across several arrival rates, and in line with several related papers, the execution time in this manuscript was measured for a single offloading task. As a result, our proposed parallel framework required on average only 1.217 ms to successfully process a given task, compared with 14.725 ms in the sequential operation manner, as demonstrated in Fig. 9.

FIGURE 9. Average execution time on sequential and parallel modes.

By doubling the size of the parking lot, the algorithm finished processing a task in 2.756 ms and 23.567 ms in the parallel (_p) and sequential (_s) modes, respectively. This exceptional execution-time performance was due to the distributed parallel implementation of our proposed GA-based algorithm, as shown in Fig. 3. The achieved execution time is indeed impressive, which again proves that our GA-based solution is fast, efficient, and practical.
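The sketch below shows the generic island-style pattern behind such a parallel speed-up: the GA population is split into subpopulations that are evaluated concurrently in separate processes instead of being scored one by one. It is a simplified illustration under an assumed population size and a toy fitness, not the exact distributed implementation of Fig. 3.

```python
# Generic illustration of parallel GA evaluation. The island split, pool
# size, and toy fitness are assumptions; the paper's own distributed scheme
# (Fig. 3) is not reproduced here.
from multiprocessing import Pool

def evaluate(individual):
    """Score one placement chromosome (toy stand-in for the real fitness)."""
    return sum(individual)

def evaluate_island(island):
    """Evaluate one subpopulation; each island runs in its own process."""
    return [evaluate(ind) for ind in island]

if __name__ == "__main__":
    population = [[i % 3, (i + 1) % 5, i % 7] for i in range(400)]
    n_islands = 4
    islands = [population[i::n_islands] for i in range(n_islands)]

    with Pool(processes=n_islands) as pool:
        # Islands are scored concurrently, so the per-generation wall time is
        # roughly divided by the number of workers. This is the effect that
        # lets the parallel mode decide a task far faster than the sequential
        # mode in Fig. 9.
        scores = pool.map(evaluate_island, islands)
```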
VII. CONCLUSION
This paper has studied a collaborative computation architecture in which PVs are promising to become an efficient extension of the existing cloud-edge computation paradigm to deal with online task offloading during peak business hours. We also advocate the Kubernetes orchestrator, which can be implemented at the edge server as the master node; accordingly, the core cloud, the edge server itself, and the PVs are able to operate as worker nodes. The extensive evaluations show that our collaborative infrastructure remarkably increases the computational capability of the existing computing architecture by efficiently making use of the otherwise wasted, powerful hardware of PVs. This novel framework also provides flexibility, agility, and reliability in addressing online task offloading problems. In addition, we propose a GA-based algorithm to deal with the time complexity of the BIP problem, and we compare our solution with several baseline algorithms as well as across different sizes of the parking lot. The proposed GA-based algorithm outperformed all compared heuristics in terms of several important performance metrics, such as the task acceptance ratio, the offloading cost, and the accumulative rewards, while PV owners are able to gain extra incentives by sharing their computing resources while parked. Furthermore, we quantify the task acceptance ratios achieved against the availability of PVs under various arrival rates; these evaluations are critical for SPs when deciding which offloading strategies to select in order to optimize the generated revenues. Finally, the proposed GA-based algorithm dramatically improved the average total execution time, thanks to its distributed parallel implementation, compared with the sequential operation.

ACKNOWLEDGMENT
The authors would like to thank the academic editor and the anonymous reviewers for their careful reading of the manuscript and their many insightful comments and suggestions.

REFERENCES
[1] X. Huang, R. Yu, J. Liu, and L. Shu, "Parked vehicle edge computing: Exploiting opportunistic resources for distributed mobile applications," IEEE Access, vol. 6, pp. 66649–66663, 2018.
[2] F. H. Rahman, A. Y. M. Iqbal, S. H. S. Newaz, A. T. Wan, and M. S. Ahsan, "Street parked vehicles based vehicular fog computing: TCP throughput evaluation and future research direction," in Proc. 21st Int. Conf. Adv. Commun. Technol. (ICACT), Feb. 2019, pp. 26–31.
[3] D. Han, W. Chen, and Y. Fang, "A dynamic pricing strategy for vehicle assisted mobile edge computing systems," IEEE Wireless Commun. Lett., vol. 8, no. 2, pp. 420–423, Apr. 2019.
[4] M. Sigalos. (Jan. 2022). This Tesla Owner Says He Mines up to $800 a Month in Cryptocurrency With His Car. [Online]. Available: https://www.cnbc.com/2022/01/08/tesla-owner-mines-bitcoin-ethereum-with-his-car.html
[5] O. Fadahunsi and M. Maheswaran, "Locality sensitive request distribution for fog and cloud servers," Service Oriented Comput. Appl., vol. 13, no. 2, pp. 127–140, Jun. 2019, doi: 10.1007/s11761-019-00260-2.
[6] Y. Zhao, W. Wang, Y. Li, C. C. Meixner, M. Tornatore, and J. Zhang, "Edge computing and networking: A survey on infrastructures and applications," IEEE Access, vol. 7, pp. 101213–101230, 2019.
[7] K. Nguyen, S. Drew, C. Huang, and J. Zhou, "Collaborative container-based parked vehicle edge computing framework for online task offloading," in Proc. IEEE 9th Int. Conf. Cloud Netw. (CloudNet), Nov. 2020, pp. 1–6.
[8] Lightweight Kubernetes. Accessed: May 10, 2021. [Online]. Available: https://k3s.io/
[9] (2020). What is Kubernetes? Accessed: May 28, 2020. [Online]. Available: https://kubernetes.io/docs/concepts/overview/what-is-kubernetes/
[10] S. Arif, S. Olariu, J. Wang, G. Yan, W. Yang, and I. Khalil, "Datacenter at the airport: Reasoning about time-dependent parking lot occupancy," IEEE Trans. Parallel Distrib. Syst., vol. 23, no. 11, pp. 2067–2080, Nov. 2012.
[11] W. He, G. Yan, and L. D. Xu, "Developing vehicular data cloud services in the IoT environment," IEEE Trans. Ind. Informat., vol. 10, no. 2, pp. 1587–1595, May 2014.
[12] F. Dressler, P. Handle, and C. Sommer, "Towards a vehicular cloud—Using parked vehicles as a temporary network and storage infrastructure," in Proc. ACM Int. Workshop Wireless Mobile Technol. Smart Cities (WiMobCity), New York, NY, USA: Association for Computing Machinery, 2014, pp. 11–18, doi: 10.1145/2633661.2633671.
[13] E. Al-Rashed, M. Al-Rousan, and N. Al-Ibrahim, "Performance evaluation of wide-spread assignment schemes in a vehicular cloud," Veh. Commun., vol. 9, pp. 144–153, Jul. 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2214209616301863
[14] T. Kim, H. Min, and J. Jung, "Vehicular datacenter modeling for cloud computing: Considering capacity and leave rate of vehicles," Future Gener. Comput. Syst., vol. 88, pp. 363–372, Nov. 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167739X18300487
[15] C. Li, S. Wang, X. Huang, X. Li, R. Yu, and F. Zhao, "Parked vehicular computing for energy-efficient Internet of Vehicles: A contract theoretic approach," IEEE Internet Things J., vol. 6, no. 4, pp. 6079–6088, Aug. 2019.
[16] S. Raza, W. Liu, M. Ahmed, M. R. Anwar, M. A. Mirza, Q. Sun, and S. Wang, "An efficient task offloading scheme in vehicular edge computing," J. Cloud Comput., vol. 9, no. 1, p. 28, Jun. 2020, doi: 10.1186/s13677-020-00175-w.
[17] Y. Cao, Y. Teng, F. R. Yu, V. C. M. Leung, Z. Song, and M. Song, "Delay sensitive large-scale parked vehicular computing via software defined blockchain," in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), May 2020, pp. 1–6.
[18] X. Hou, Y. Li, M. Chen, D. Wu, D. Jin, and S. Chen, "Vehicular fog computing: A viewpoint of vehicles as the infrastructures," IEEE Trans. Veh. Technol., vol. 65, no. 6, pp. 3860–3873, Jun. 2016.
[19] X. Wang, Z. Ning, and L. Wang, "Offloading in Internet of Vehicles: A fog-enabled real-time traffic management system," IEEE Trans. Ind. Informat., vol. 14, no. 10, pp. 4568–4578, Oct. 2018.
[20] X. Huang, P. Li, and R. Yu, "Social welfare maximization in container-based task scheduling for parked vehicle edge computing," IEEE Commun. Lett., vol. 23, no. 8, pp. 1347–1351, Aug. 2019.
[21] K. Nguyen, S. Drew, C. Huang, and J. Zhou, "EdgePV: Collaborative edge computing framework for task offloading," in Proc. IEEE Int. Conf. Commun. (ICC), Jun. 2021, pp. 1–6.
[22] K. Deb, Multi-Objective Optimisation Using Evolutionary Algorithms: An Introduction. London, U.K.: Springer, 2011, pp. 3–34, doi: 10.1007/978-0-85729-652-8_1.
[23] B. Gu, X. Zhang, Z. Lin, and M. Alazab, "Deep multiagent reinforcement-learning-based resource allocation for Internet of controllable things," IEEE Internet Things J., vol. 8, no. 5, pp. 3066–3074, Mar. 2021.
[24] B. Gu, X. Yang, Z. Lin, W. Hu, M. Alazab, and R. Kharel, "Multiagent actor-critic network-based incentive mechanism for mobile crowdsensing in industrial systems," IEEE Trans. Ind. Informat., vol. 17, no. 9, pp. 6182–6191, Sep. 2021.
[25] T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever, "Evolution strategies as a scalable alternative to reinforcement learning," 2017, arXiv:1703.03864.
[26] F. Petroski Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune, "Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning," 2017, arXiv:1712.06567.
[27] H. Mühlenbein, "Parallel genetic algorithms in combinatorial optimization," in Computer Science and Operations Research, O. Balci, R. Sharda, and S. A. Zenios, Eds. Amsterdam, The Netherlands: Pergamon, 1992, pp. 441–453. [Online]. Available: http://www.sciencedirect.com/science/article/pii/B9780080408064500344
[28] M. Mitchell, An Introduction to Genetic Algorithms. Cambridge, MA, USA: MIT Press, 1998.
[29] Q. Lu, K. Nguyen, and C. Huang, "Distributed parallel algorithms for online virtual network embedding applications," Int. J. Commun. Syst., p. e4325, Jan. 2020. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/dac.4325
[30] H. Zhu and C. Huang, "VNF-B&B: Enabling edge-based NFV with CPE resource sharing," in Proc. IEEE 28th Annu. Int. Symp. Pers., Indoor, Mobile Radio Commun. (PIMRC), Oct. 2017, pp. 1–5.

KHOA NGUYEN received the M.Sc. degree in telecommunications engineering from the University of Sunderland, U.K., in 2013, and the Ph.D. degree in electrical and computer engineering from the Department of Systems and Computer Engineering, Carleton University, Canada, in 2021. His main research interests include communication networks, cloud/edge computing, parked vehicle edge computing (PVEC), the Internet of Vehicles (IoV), software-defined networks (SDN), network function virtualization (NFV), containerization technologies, and machine learning.

STEVE DREW (Member, IEEE) received the B.Sc. degree from Beijing Jiaotong University, in 2008, the M.Sc. degree from the Chinese Academy of Sciences, in 2011, and the Ph.D. degree in electrical and computer engineering from Carleton University, in 2018. He was the Chief Architecture and Security Officer at BitOcean Global and the Founder of BitQubic. He was also a Senior Cloud Engineer in the Cisco NFV Group. He is currently an Assistant Professor with the Department of Electrical and Software Engineering, University of Calgary. His research interests include edge computing, cloud-native initiatives towards network services, and blockchain services.

CHANGCHENG HUANG (Senior Member, IEEE) received the B.Eng. and M.Eng. degrees in electronic engineering from Tsinghua University, Beijing, China, in 1985 and 1988, respectively, and the Ph.D. degree in electrical engineering from Carleton University, Ottawa, ON, Canada, in 1997. From 1996 to 1998, he worked with Nortel Networks, Ottawa, where he was a Systems Engineering Specialist. He was a Systems Engineer and a Network Architect with the Optical Networking Group, Tellabs, Naperville, IL, USA, from 1998 to 2000. Since July 2000, he has been with the Department of Systems and Computer Engineering, Carleton University, where he is currently a Full Professor. He won the CFI New Opportunity Award for building an optical network laboratory in 2001. He is an Associate Editor of Photonic Network Communications (Springer).

JIAYU ZHOU (Member, IEEE) received the Ph.D. degree in computer science from Arizona State University, in 2014. He is currently an Associate Professor with the Department of Computer Science and Engineering, Michigan State University. His research has been funded by the National Science Foundation, the National Institutes of Health, and the Office of Naval Research, and he has published more than 100 peer-reviewed journal and conference papers in data mining and machine learning. His research interests include large-scale machine learning, data mining, and biomedical informatics, with a focus on transfer and multi-task learning. He was a recipient of the National Science Foundation CAREER Award (2018). His papers received the Best Student Paper Award at the 2014 IEEE International Conference on Data Mining (ICDM), the Best Student Paper Award at the 2016 International Symposium on Biomedical Imaging (ISBI), and the Best Paper Award at the 2016 IEEE International Conference on Big Data (BigData).