Parked Vehicles Task Offloading in Edge Computing
Digital Object Identifier 10.1109/ACCESS.2022.3167641
ABSTRACT Recent analytical studies indicate that the computational resources of Connected Autonomous Vehicles (CAVs) are largely wasted, since most vehicles spend over 95% of their time in parking lots. This paper presents a collaborative computing framework that efficiently offloads online computational tasks to parked vehicles (PVs) during peak business hours. To maintain service continuity, we advocate integrating Kubernetes-based container orchestration to leverage its advanced features (e.g., auto-healing, load balancing, and security). We analytically formulate the task-offloading problem and then propose an intelligent meta-heuristic algorithm to dynamically handle online heterogeneous demands. Additionally, we incorporate a cumulative incentive model in which PV owners earn profit by sharing their computation resources. We also compare our algorithm with several existing heuristics on parking lots of different sizes. Extensive simulation results show that our proposed computing framework significantly increases the probability of accepting online tasks and reduces the average task offloading cost by at least 40%. Besides, we quantify PV availability through task acceptance ratios, which can serve as a critical criterion for network planners to achieve desired network service goals.
INDEX TERMS Parked vehicles, cloud computing, edge computing, collaborative cloud-edge computing,
online task offloading, container orchestration, Kubernetes.
to handle online task requests. Thanks to the high availability of cloud and edge nodes, when a task arrives at the edge server, all the replicas of a task can be scheduled at the same cloud or edge node, or both. They can also be allocated on distributed PVs to exploit the idle resources of PVs and save network cost, reducing heavy workloads in the core networks during peak hours. All master nodes and their worker nodes form a container orchestration cluster, where the control plane in the master node is responsible for managing worker nodes and pods in the cluster, monitoring the state of the cluster, and making global decisions for the cluster such as scheduling and scaling. The master node's scheduler carries out pod placements on a set of available worker nodes. When an online task with rigid constraints (e.g., CPU, BW, latency tolerance, replicas) arrives at the master node, a kube-scheduler in the control plane of the master node creates pods and then assigns the worker nodes for them to run on. Interested readers can find more details about Kubernetes in [9]. Our proposed collaborative paradigm strives to improve the elasticity and agility of the existing computing infrastructure to minimize the service disruptions caused by the unforeseen mobility of PVs. In fact, PVs are expected to become fully electric in the near future, which would enable automatic charging while they are parked. Additionally, this paper considers a generic scenario in which one base station covers all PVs within its coverage of a parking lot. Thus, the control signalling (e.g., MCS, resource management, QoS, etc.) between the BS and PVs over the radio interface is neglected for simplification.

Our contributions are summarized below:
• We propose a collaborative computation paradigm integrating the core cloud, edge, and PVs into a consolidated architecture to address the online task offloading problem in peak hours.
• A container orchestration framework based on the Kubernetes platform is leveraged to deal with the uncertain parking duration of PVs. The Kubernetes platform provides non-disruptive services and minimizes possible service interruptions thanks to an advanced self-healing feature. When a worker node is out of service, Kubernetes automatically reallocates the corresponding task replicas to another active node.
• We formulate the online task offloading model as a BIP problem to minimize the offloading cost whilst optimizing the cumulative rewards PVs gain by selling their idle resources, considering the replicas as a constraint.
• A meta-heuristic algorithm based on a Genetic Algorithm, namely EdgeGA, is proposed to deal with the time complexity and scalability problems of BIP. We simulate the task-offloading model with respect to the random mobility behaviors of PVs under dynamic and arbitrary task arrivals. EdgeGA is compared with existing heuristics to demonstrate its efficiency on different generic parking lots in performance simulations. We also propose a distributed parallel scheme for running the EdgeGA algorithm to reduce the execution time.

The contents of our paper are divided into the following sections. Section II presents the related work, while the formulated offloading problem is introduced in Section III. Section IV proposes the GA algorithm based on the problem formulation. Thereupon, the simulation evaluation is demonstrated in Section VI. Finally, Section VII concludes our work.

II. RELATED WORK
PVs as infrastructure have recently attracted a lot of research attention, as they allow the existing computation, communication, and storage (CCS) paradigm to be expanded. Deploying PVs as vehicular cloud computing on the Internet of Vehicles has been well investigated in [10]–[16].

Arif et al. in [10] presented a basic model of a vehicular cloud (VC) assisted by PVs in a specific international airport. In contrast, He et al. in [11] proposed a multilayered vehicular cloud infrastructure that relied upon cloud computing and Internet of Things (IoT) technologies. That paper predicted parking occupancy in order to schedule the network resources and eventually allocate the computational tasks; smart parking and vehicular data-mining services in an IoT environment were also investigated. Likewise, establishing a VC built on PVs as a spatial-temporal network infrastructure for CCS in a parking lot was studied in [12]–[14], [17]. Li et al. in [15] examined the feasibility of PVs as a computing framework and then presented an incentive mechanism considering the accumulated rewards of PVs when trading their resources. Furthermore, Hou et al. in [18] introduced a vehicular fog computing (VFC) paradigm exploiting connected PVs as the infrastructure at the edge to process real-time network services. In a similar approach, the authors in [19] considered a fog computing infrastructure implemented on Internet of Vehicles (IoV) systems to offer computing resources to end-users under latency constraints. This architecture allowed the network traffic to be offloaded in real time to the fog-based IoV systems, subject to optimizing the average response time.

Moreover, Parked Vehicle Edge Computing (PVEC), in which PVs were recognized as accessible edge computation nodes to address the task allocation problem, has been researched in [1], [3], [20]. Huang et al. in [1] exploited the possible opportunistic resources for allocating the computational tasks in a collaborative architecture consisting of a vehicle edge computing (VEC) server and PVs. The authors addressed the optimization problem of user payments by relaxing budget or latency constraints, which resulted in suboptimal solutions for the proposed scheme. Similarly, the paper [3] suggested a dynamic pricing approach to reduce the average cost whilst satisfying QoS constraints; this strategy calibrates the price constantly following the current system state. Besides, a containerized task scheduling scheme assisted by PVEC was presented in [20], considering the formulated social welfare optimization for users and PVs at the same time. Raza et al. in [16] studied a vehicle-assisted
MEC architecture combining the core cloud, MEC, and
mobile volunteering vehicles (e.g., buses) to deal with IoT
devices’ task requests. At first glance, [16] appears analogous to the idea of our paper. Nevertheless, our research mainly targets the online task offloading problem
in a container-based computation paradigm regarding the
allocation problem for the set of online task replicas of
a given VNR. Our solution takes both average network
cost and accumulating incentives gained by selling idle
computation resources of PVs into account. Moreover, PVs in
the EdgePV framework would be more common and reliable than buses, thanks to their greater availability and lower mobility. Our previous work in [7] addressed the online task offloading problem in a collaborative architecture, but [7] allowed all task replicas to be allocated to the same node, which exposes a critical weakness: if the vehicle hosting all replicas of a task suddenly leaves the parking lot, service provisioning is interrupted. Additionally, [7] proposed a simple heuristic algorithm, named M&M, which ranked the worker nodes based on the offloading cost and revenue; the transmission data rate was randomly assigned instead of depending on the distance between the PVs and the BS; and only a medium-sized parking lot was considered.
This paper is an extension of [21] where we expand the
related work by analyzing more relevant papers with their
specific strengths and limitations. To provide the illustration
of our solutions, we depict an example of a simple task
offloading scenario and then provide pseudocode for all algo-
rithms. Furthermore, we double the size of the generic parking lot used in [21], compare the offloading performance on the different sizes, and quantify the results accordingly. This makes our proposed solution more comprehensive. We also carry out further analysis of the achieved simulation results and compare the execution time of the EdgeGA algorithm in both sequential and parallel manners to demonstrate the efficiency of the proposed distributed
parallel deployment.
A. OFFLOADING MODEL
We investigate CPU, memory, and bandwidth resources in the online task offloading problem. There are various types of worker nodes, comprising the core cloud, the edge server, and PVs, which are connected to a master node located in the edge server through separate connections. For instance, the link between the master node and the core cloud is optical, whereas PVs connect to the edge server through wireless links in which the available bandwidth primarily depends on the distances between the PVs and the BS. Therefore, the edge network under a containerized cluster in this paper can be readily viewed as a star topology in which the root and its leaves are the master node and several worker nodes, respectively. Fig. 1 illustrates a generic outdoor parking lot. PVs need to register all vehicular information, such as the owner's ID, available parking time, license plate, and available resources (e.g., computing capacity, storage), with their corresponding SPs. Then, they keep their current vehicular status updated by sending this information to their corresponding SPs.
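To make the registration step concrete, the following minimal sketch models the record a PV might submit to its SP; the class names and fields are hypothetical and only mirror the information listed above, not an interface defined in the paper.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class PVRecord:
    # Hypothetical fields mirroring the registration information listed above.
    owner_id: str
    license_plate: str
    available_parking_time_s: int   # expected remaining parking duration
    resources: Dict[str, float] = field(default_factory=dict)  # e.g. {"cpu_ghz": 8.0, "mem_gb": 16.0}

class ServiceProvider:
    """Toy SP registry keeping the latest status reported by each PV."""
    def __init__(self):
        self.registry: Dict[str, PVRecord] = {}

    def register(self, record: PVRecord) -> None:
        # Update-or-insert the PV's current status.
        self.registry[record.license_plate] = record

sp = ServiceProvider()
sp.register(PVRecord("owner-42", "ABC-123", 7200, {"cpu_ghz": 8.0, "mem_gb": 16.0}))
```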
B. SYSTEM MODEL
The radio access between the BS and the PVs is assumed to follow an orthogonal frequency-division multiple access (OFDMA) scheme similar to [16]. d_{bs,p} denotes the geographical distance between the BS and the pth PV. The path loss between them is defined by d_{bs,p}^{−σ} together with the white Gaussian noise power N_0, in which the factor σ expresses the path loss exponent. Hence, the wireless channel can be modeled as a frequency-flat block-fading Rayleigh fading channel, denoted as h. As a result, the data rate of the pth PV is calculated as:

ξ_p = B_p · log_2( 1 + P_TX · d_{bs,p}^{−σ} · |h|² / (N_0 + I) )   (5)

where B_p, P_TX, and I are the channel bandwidth, the transmission power of the BS, and the inter-cell interference, respectively. The offloading latency of PVs t_p(k) is then formulated as follows:

t_p(k) = max_{j∈ð(k)} { χ_{k_j}/E[ξ_p] + χ_{k_j}·f_{k_j}/R_C^p(n_i) + T_h } ≤ t_m(k)   (6)

We also investigate the offloading efficiency of two types of online containerized tasks, comprising latency-sensitive and latency-insensitive tasks, akin to [5]. While the latency-sensitive tasks are merely allocated to the edge nodes (e.g., edge server, PVs) due to their close proximity, the latency-insensitive tasks can be processed at any worker node, such as the remote cloud, the edge server, or PVs. Then, we compute the cost of offloading an online task replica in our proposed collaborative computing architecture, which is associated with the sum of the total CPU, memory, and bandwidth consumption, plus the energy consumption for processing task replicas at PVs. In fact, the remote cloud and the edge server achieve high energy efficiency, so we do not consider this attribute in their costs. In this paper, it is assumed that the latency is inversely proportional to the remaining capacity, as a consequence of the M/M/1 queuing model. The offloading cost at the core cloud is defined as follows:

Δ_{C_c}(k_j) = W_{C_c} · χ_{k_j} f_{k_j} / ( C_c − Σ_{k′∈K_c} Σ_{j≤|ð(k′)|} c(k′_j) + δ )   (7)

Δ_{M_c}(k_j) = W_{M_c} · m(k_j) / ( M_c − Σ_{k′∈K_c} Σ_{j≤|ð(k′)|} m(k′_j) + δ )   (8)

Δ_{B_c}(k_j) = W_{B_c} · χ(k_j)/t_m(k_j) / ( B_c − Σ_{k′∈K_c} Σ_{j≤|ð(k′)|} χ(k′_j)/t_m(k′_j) + δ )   (9)

where δ is a small positive number to prevent dividing by zero. The offloading cost for processing a task replica at the core cloud is:

Δ^c_{k_j} = Δ_{C_c}(k_j) + Δ_{M_c}(k_j) + Δ_{B_c}(k_j)   (10)

As we discussed earlier, when a task is handled at the edge server, this is widely recognized as local processing. Therefore, the offloading cost at the edge server is computed as follows:

Δ_{C_e}(k_j) = W_{C_e} · χ_{k_j} f_{k_j} / ( C_e − Σ_{k′∈K_e} Σ_{j≤|ð(k′)|} c(k′_j) + δ )   (11)

Δ_{M_e}(k_j) = W_{M_e} · m(k_j) / ( M_e − Σ_{k′∈K_e} Σ_{j≤|ð(k′)|} m(k′_j) + δ )   (12)

The total offloading cost of a task replica at the edge is:

Δ^e_{k_j} = Δ_{C_e}(k_j) + Δ_{M_e}(k_j)   (13)

Similarly, the cost of offloading a task replica and the energy consumption at a parked vehicle are formulated as follows:

Δ_{C_p}(k_j) = W_{C_p} · χ_{k_j} f_{k_j} / ( C_p − Σ_{k′∈K_p} Σ_{j≤|ð(k′)|} c(k′_j) + δ )   (14)

Δ_{M_p}(k_j) = W_{M_p} · m(k_j) / ( M_p − Σ_{k′∈K_p} Σ_{j≤|ð(k′)|} m(k′_j) + δ )   (15)

Δ_{B_p}(k_j) = W_{B_p} · χ_{k_j} / ( t_m(k_j) · ξ_p ),  ∀k ∈ K_p   (16)

E_p(k_j) = χ_{k_j} f_{k_j} · e_p   (17)

where e_p is a coefficient, which is attained by:

e_p = ε · ( R_C^p(n_i) )²   (18)

where ε denotes an energy coefficient. Hereafter, the total offloading cost of each task replica k_j at a parked vehicle is:

Δ^p_{k_j} = Δ_{C_p}(k_j) + Δ_{M_p}(k_j) + Δ_{B_p}(k_j) + ς · E_p(k_j)   (19)

where ς is an energy cost coefficient.

3) PVs' UTILITY
Owners of PVs are encouraged to share their idle computational resources while parking in parking lots, so they can gain accumulative rewards by hosting task replicas in their vehicles. φ^p is the reward for processing a task replica at a parked vehicle p. Thus, the corresponding utility is defined as follows:

ϖ^p = φ^p − ρ · E_p(k_j)   (20)

where ρ denotes a coefficient of the energy price, and φ^p is presented as:

φ^p = µ · r^c_p · χ_{k_j} f_{k_j} + r^m_p · m(k_j)   (21)

where r^c_p and r^m_p are the unit prices of CPU and memory, respectively. We can see that minimizing the offloading cost can directly maximize the profits gained by hosting the corresponding task replicas at PVs.

Variables:
A^c_{k_j} = 1 if k_j is deployed on the cloud, ∀j ≤ |ð(k)|; 0 otherwise.
A^e_{k_j} = 1 if k_j is deployed on the edge, ∀j ≤ |ð(k)|; 0 otherwise.
A^p_{k_j} = 1 if k_j is deployed on a PV, ∀j ≤ |ð(k)|; 0 otherwise.
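As a rough numerical illustration of Eqs. (14)–(21), the sketch below evaluates the cost and the owner utility of hosting one replica on a PV. All helper names and parameter values (weights, prices, CPU rate) are illustrative assumptions, not values taken from the paper.

```python
def pv_replica_cost_and_utility(
    chi, f, mem,                  # replica data size, CPU cycles per bit, memory demand
    t_max, xi_p,                  # latency tolerance and PV data rate (Eq. 5)
    C_p, M_p, used_cpu, used_mem, # PV capacities and already-committed demands
    W_C=1.0, W_M=1.0, W_B=1.0,    # weighting factors (assumed)
    eps=1e-24, R_cpu=2e9,         # energy coefficient and CPU rate (illustrative)
    varsigma=1.0, rho=0.003,      # energy cost / energy price coefficients
    mu=1.0, r_cpu=2e-9, r_mem=1e-6,  # reward unit prices (illustrative)
    delta=1e-6):
    """Sketch of Eqs. (14)-(19) for the cost and Eqs. (20)-(21) for the utility."""
    cpu_cost = W_C * chi * f / (C_p - used_cpu + delta)           # Eq. (14)
    mem_cost = W_M * mem / (M_p - used_mem + delta)               # Eq. (15)
    bw_cost  = W_B * chi / (t_max * xi_p)                         # Eq. (16)
    e_p      = eps * R_cpu ** 2                                   # Eq. (18)
    energy   = chi * f * e_p                                      # Eq. (17)
    total    = cpu_cost + mem_cost + bw_cost + varsigma * energy  # Eq. (19)
    reward   = mu * r_cpu * chi * f + r_mem * mem                 # Eq. (21)
    utility  = reward - rho * energy                              # Eq. (20)
    return total, utility

print(pv_replica_cost_and_utility(chi=1e6, f=100, mem=0.5, t_max=0.02, xi_p=5e7,
                                  C_p=1e9, M_p=4.0, used_cpu=2e8, used_mem=1.0))
```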
FIGURE 3. Distributed and parallel GA-based implementation.

while retaining the population diversity. GA operations can be rerun until a pre-defined stopping condition is met (e.g., a number of iterations). Lately, parallel computing has become a promising paradigm for dealing efficiently with complicated problems, offering large time savings and lower cost by enabling concurrency.

B. DISTRIBUTED PARALLEL GENETIC ALGORITHM
To solve the BIP problem, a distributed and parallel GA-based algorithm running on multiple independent machines, denoted as V, is proposed to explore the search space in this paper. The operational implementation of our proposed algorithm is demonstrated in Fig. 3, where |V| is set to 16. The working scheme includes a master node that primarily plays a synchronization role and several distributed slave nodes exploring as many feasible solutions as they can. At each slave node, GA iteratively applies its operators to seek feasible offloading solutions. The best outcomes, based on the fitness values among the distributed parallel nodes, are then synchronized by the master node to identify the optimal offloading solution for the given task. Our proposed algorithm is permitted to offload multiple task replicas at once rather than allocating each replica sequentially.

1) GENETIC REPRESENTATION AND SELECTION
a: CHROMOSOME
A chromosome, denoted as C_f in this paper, indicates a solution for the whole set of requested replicas of a given task, randomly chosen from the available worker nodes meeting the task resource demands. Hence, each gene within the chromosome represents an offloading solution for a single task replica, which is described as g^j_f = [ A^{c,f}_{k_j} A^{e,f}_{k_j} A^{1,f}_{k_j} ··· A^{|P|,f}_{k_j} ].

b: SELECTION
The selection operation essentially determines which chromosomes become parents for the crossover operator. To enhance the parallelism, parents can be randomly selected from the initial population with replacement. Due to the nature of randomness, the quality of the children generated in the crossover operator can be better or even worse than that of their parents. In theory, there are several selection strategies, but fitness-proportionate selection, based on the cumulative sum of fitness-relative weights, is usually preferable for this operator.

c: FITNESS FUNCTION
The major goals of our proposed algorithm include optimizing the cost of offloading online tasks and maximizing the user rewards when the task is offloaded to PVs. To achieve these dual objectives, a fitness function is utilized to evaluate the quality of an offloading solution, and a better solution produces a higher fitness value in this paper.

F(k) = Σ_{j∈ð(k)} [ (1/Δ^c_{k_j}) A^c_{k_j} + (1/Δ^e_{k_j}) A^e_{k_j} + ( (1−η)·(1/Δ^p_{k_j}) + η·φ^p_k ) A^p_{k_j} ]   (31)

2) THE PROCESSES OF EVOLUTION
After the initial population is generated in the initialization operator, two chromosomes are selected at random to be parents. Then, new generations are formed by an evolutionary process including the crossover and mutation operators. To maintain the population diversity, the newly generated chromosomes are inserted into the existing population. This strategy improves the opportunity to obtain near-optimal task offloading solutions.

a: CROSSOVER
This is considered the most vital operator; it creates new offspring by stitching together the parental chromosomes in GA. Suppose C_s and C_r are two parental chromosomes with indexes s and r in the initial population. Denote j_c as a random crossover point within the chromosome length; their corresponding descendants are C_{M+1} and C_{M+2}. By exchanging the genes from the crossover point j_c + 1 to the last gene between the parents, new generations are generated as below:

        C_1     = [ g_1^1 ··· g_1^{j_c} | g_1^{j_c+1} ··· g_1^G ]
        ⋮
P =     C_s     = [ g_s^1 ··· g_s^{j_c} | g_s^{j_c+1} ··· g_s^G ]
        ⋮
        C_r     = [ g_r^1 ··· g_r^{j_c} | g_r^{j_c+1} ··· g_r^G ]
        ⋮
        C_M     = [ g_M^1 ··· g_M^{j_c} | g_M^{j_c+1} ··· g_M^G ]
        C_{M+1} = [ g_s^1 ··· g_s^{j_c} | g_r^{j_c+1} ··· g_r^G ]
        C_{M+2} = [ g_r^1 ··· g_r^{j_c} | g_s^{j_c+1} ··· g_s^G ]   (32)
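As an illustration only, the following sketch encodes a chromosome as a per-replica placement list and implements a fitness in the spirit of Eq. (31) together with the single-point crossover of Eq. (32); the cost and reward callables are placeholders rather than the paper's cost model.

```python
import random

def fitness(chromosome, cost, reward, eta=0.5):
    """Eq. (31): sum per-replica scores; each gene is 'cloud', 'edge', or a PV id."""
    total = 0.0
    for replica, node in enumerate(chromosome):
        if node in ("cloud", "edge"):
            total += 1.0 / cost(node, replica)
        else:  # a parked vehicle: weighted mix of inverse cost and reward
            total += (1 - eta) / cost(node, replica) + eta * reward(node, replica)
    return total

def crossover(parent_s, parent_r):
    """Eq. (32): swap the gene tails after a random crossover point j_c."""
    jc = random.randint(1, len(parent_s) - 1)
    child_1 = parent_s[:jc] + parent_r[jc:]
    child_2 = parent_r[:jc] + parent_s[jc:]
    return child_1, child_2

# Toy usage: 4 replicas, candidate nodes "cloud", "edge", "pv3", "pv7".
cost = lambda node, j: 1.0 if node == "cloud" else 0.5   # placeholder costs
reward = lambda node, j: 0.2                              # placeholder PV reward
p1 = ["cloud", "pv3", "edge", "pv7"]
p2 = ["edge", "pv7", "pv3", "cloud"]
c1, c2 = crossover(p1, p2)
print(fitness(c1, cost, reward), fitness(c2, cost, reward))
```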
b: MUTATION
This operator applies a small modification to the current parent to form a new offspring/chromosome. The mutation stage allows sampling the large search space, improving the search efficiency. This operation is widely known as a primary component of the evolutionary process, which prevents potential solutions from falling into local optima. Technically, a gene selected at random within one of the children produced in the Crossover operator is replaced by a new gene to create a new offspring. The gene must inevitably satisfy the resource demands to survive the feasibility check. If both children are infeasible in Crossover, one of the parents, chosen at random, is then used for mutation. Suppose j_m is a random mutation point and g^{j_m}_{r′} is a new gene that substitutes the existing gene within C_{M+1}. Consequently, the new offspring generated from the mutation stage is C′_{M+1} = [ g_s^1 ··· g_{r′}^{j_m} ··· g_s^G ].

To maintain a balance between exploitation and exploration in the GA algorithm, the crossover rate p_c is typically set higher than the mutation rate p_m. Determining p_m is never an easy task, since a small mutation rate leads to premature convergence, while a high mutation rate could improve the exploration of the search space but might prevent the GA algorithm from converging to the optimal solution. Preferring high GA efficiency while keeping a trade-off between exploitation and exploration, we set p_c = 0.9 and p_m = 0.2 in this paper.

3) TERMINATIONS AND SYNCHRONIZATION
Parallel processing is associated with multiple concurrent processes in which each process might accomplish its assignment at a different time. Unfortunately, waiting for all tasks to completely finish their assigned jobs is painful, due to the fact that one or more tasks might take too much time for processing (e.g., deadlock). Thus, to reduce the overall execution time, the master node will terminate the GA algorithms running at the worker nodes if no better solution is obtained within t iterations. Eventually, the feasible solutions found by the several slave machines are finalized through a synchronization step in order to choose the optimal offloading solution based on the fitness values. If accepted, the task replicas of the given task are then allocated to the worker nodes following the information of the achieved offloading solution. The SN eventually updates the network resources to finish the offloading process.
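A minimal sketch of the master/slave operation and the early-termination rule described above, assuming Python's multiprocessing Pool as the parallel substrate; the GA body is reduced to a stub and every name is illustrative rather than the paper's implementation.

```python
import random
from multiprocessing import Pool

def ga_worker(seed, max_iterations=200, t_stall=20):
    """One slave machine: a random-search stub standing in for Algorithm 2.
    Stops early if the best fitness has not improved for t_stall iterations."""
    rng = random.Random(seed)
    best, stall = None, 0
    for _ in range(max_iterations):
        candidate = rng.random()           # placeholder for evolving a population
        if best is None or candidate > best:
            best, stall = candidate, 0
        else:
            stall += 1
            if stall >= t_stall:           # early-termination rule
                break
    return best

if __name__ == "__main__":
    V = 16                                  # number of independent machines (|V| above)
    with Pool(processes=4) as pool:
        incumbents = pool.map(ga_worker, range(V))
    # Master node: synchronize and keep the solution with the highest fitness.
    print("best fitness across slaves:", max(incumbents))
```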
The technical details of the proposed GA-based algorithm are provided in Algorithms 1 and 2. When an online task with a number of required replicas arrives, the algorithm creates a list of potential node candidates which must meet the resource requirements of the given task demands (e.g., CPU, memory, bandwidth, delay), as shown in lines [6-13]. GA is then run on each working machine in a distributed parallel operation scheme in order to seek the best offloading solution for the given task by calling Algorithm 2 in line 14. Lines [15-16] are the synchronization process that selects the optimal offloading solution among the outcomes of the parallel machines, and the network information status is eventually updated in Step 4. In terms of Algorithm 2, lines [4-13] are associated with the population initialization, where each chromosome is randomly generated from the list of node candidates. By selecting parents from the population at random, as shown in line 17, we try to balance exploration and exploitation. Lines [14-37] involve the GA's evolution operations that explore the search space. The Crossover operator is conducted in lines [16-24], whereas lines [25-31] are the Mutation operator. Line 33 maintains the elite population by eliminating the chromosomes producing the lowest fitness values so that the population keeps at most M chromosomes. In case we only achieve one feasible chromosome (e.g., due to network congestion), this will become the final offloading solution in line 34; otherwise, the task will be rejected in line 36.

Algorithm 1 EdgeGA — An Intelligent GA-Based Algorithm
1: Input:
2:   An online task k with the five tuples {c(k), m(k), b(k), t_m(k), ð(k)}
3: Output:
4:   A list of worker nodes hosting the task replicas.
5: procedure TASK_OFFLOADING
   ▷ Step 1: Generate a list ζ_k of node candidates including cloud, edge, and PVs
6:   function GET_CANDIDATES(k)
7:     empty ζ_k
8:     for all n_i ∈ N do
         if R^u_C(n_i) ≥ c(k), R^u_M(n_i) ≥ m(k), and R^u_B(n_i) or ξ_p ≥ b(k) then
9:         add n_i to ζ_k
10:    end for
11:    return ζ_k
       if none of the worker nodes are available then
12:      reject the task k
13:  end function
   ▷ Step 2: Deploy the Genetic Algorithm in a distributed parallel operation scheme
14:  call Algorithm 2
   ▷ Step 3: Synchronize all incumbents obtained on the independent working machines
15:  Choose the best solution based on the sum of fitness values (Eq. 31)
16:  return the list of worker nodes n_i ∈ N
   ▷ Step 4: Update SN resources
17: end procedure

Algorithm 2 GA Runs at Each Parallel Machine
1: Input: ζ_k
2: Output: The best offloading solution for the task k
3: procedure GENETIC_ALGORITHM_OPERATIONS
   ▷ Initial population generation
4:   r = 0
5:   for m = 1 to M do
     ▷ Generate a chromosome with |ð(k)| genes; each gene is a task offloading solution for a replica, g^j_f = [ A^{c,f}_{k_j} A^{e,f}_{k_j} A^{1,f}_{k_j} ··· A^{|P|,f}_{k_j} ]
6:     for n = 1 to |ð(k)| do
       ▷ Try to map the task replica to a randomly selected worker node in ζ_k with up to Q trials
7:       for q = 1 to Q do
8:         Map the replica to a randomly selected worker node in ζ_k
9:         if feasible goto 11
10:      end for
11:    end for
12:    r = r + 1 and add the chromosome to the population
13:  end for
   ▷ Evolution process
14:  if r > 1 then
15:    for p = 1 to maxIterations do
16:      if ranNum ∈ (0, 1) < p_c then
17:        Select two parents at random
18:        Conduct the crossover operation
19:        if both parents are feasible then
20:          One of the children is randomly chosen for mutation
21:          r = r + 2 and add them to the population
22:        else
23:          if only one child is feasible
24:            r = r + 1 and add it to the population
25:      if ranNum ∈ (0, 1) < p_m then
26:        if both children from the crossover are infeasible then
27:          One parent is randomly selected for mutation
28:        end if
29:        Conduct the mutation operation
30:        if the new mutated child is feasible then
31:          r = r + 1 and add it to the population
32:    end for
33:  if r > M, eliminate the chromosomes producing the lowest fitness values
34:  else if r = 1 then the current offloading solution will be final
35:  else
36:    reject the task k
37:  end if
38: end procedure

C. EXECUTION TIME ANALYSIS
Due to the recently lower cost of computing hardware, parallel algorithms can be beneficially exploited to tackle intricate computational tasks. As a result, we advocate a distributed parallel GA framework to deal with the online task offloading problem. In this paper, the execution time of the proposed task offloading solution is measured in two manners: sequential and parallel modes. In sequential mode, the time complexity follows a linear increase, since the execution time is the sum of the operation times of all working machines. However, the total execution time of the parallel mode is determined by the last machine that finishes its offloading assignment. The time complexity of the GA algorithm at each machine is roughly O(G × M × maxIterations). In fact, the GA representation at each working machine is not static; it depends on the number of replicas of a given task. In addition, we cannot always guarantee to achieve M chromosomes when the SN becomes increasingly congested. In the GA algorithm, the iteration process is terminated earlier if the best fitness value does not change for t consecutive iterations. It is therefore more practical to measure the average runtime and to indicate how the parallel manner improves on the sequential one.

Similar to [29], we apply the Cramér–Chernoff technique and Jensen's inequality to provide a reasonable approximation of the total execution time of the parallel mode. Hence, our distributed parallel offloading framework is indeed able to improve the time complexity from a linear to a logarithmic scale subject to |V|. Interested readers may refer to [29] for further theoretical analysis.

V. COMPARED ALGORITHMS
We evaluate the efficiency not only of our proposed collaborative framework compared with conventional computing paradigms, including cloud and edge computing, but also of our GA-based algorithm in comparison with several heuristic algorithms, namely Baseline_1, Baseline_2, and Baseline_3, in terms of the acceptance ratio, offloading cost, and utility. Baseline_1 is considered a Kubernetes default scheduler applying the filtering and then scoring algorithms, whereas Baseline_2 processes the task replicas by randomly selecting the worker nodes. In contrast, Baseline_3 deploys a branch-and-bound strategy to tackle the given task with its set of replicas sequentially [30]. Different from these heuristics, our proposed GA-based solution enables the set of all task replicas to be processed at once. To retain service stability and reliability, only a proportional number of replicas can be offloaded to a single worker node, which cannot exceed 50% (except for the cloud and edge nodes). Indeed, SPs are able to easily adjust this parameter to meet their specific goals (e.g., in network congestion).
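The 50% per-PV replica cap and two of the baseline selection strategies can be sketched as follows (the branch-and-bound baseline is omitted); the scoring rule merely mimics a filter-then-score scheduler in plain Python and is an assumption, not Kubernetes' actual implementation.

```python
import random
from collections import Counter

def respects_replica_cap(placement, node, total_replicas, cap=0.5):
    """A PV may host at most `cap` of a task's replicas; cloud/edge are exempt."""
    if node in ("cloud", "edge"):
        return True
    return (Counter(placement)[node] + 1) <= cap * total_replicas

def baseline_1(candidates, demand):
    """Filter-then-score: keep feasible nodes, pick the one with most free CPU."""
    feasible = [n for n in candidates if n["free_cpu"] >= demand["cpu"]]
    return max(feasible, key=lambda n: n["free_cpu"]) if feasible else None

def baseline_2(candidates, demand):
    """Random selection among feasible nodes."""
    feasible = [n for n in candidates if n["free_cpu"] >= demand["cpu"]]
    return random.choice(feasible) if feasible else None

nodes = [{"name": "pv3", "free_cpu": 4.0}, {"name": "pv7", "free_cpu": 2.0},
         {"name": "edge", "free_cpu": 16.0}]
demand = {"cpu": 1.5}
print(baseline_1(nodes, demand)["name"], baseline_2(nodes, demand)["name"])
print(respects_replica_cap(["pv3", "edge"], "pv3", total_replicas=4))
```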
TABLE 2. Simulation parameter settings.

parking duration of PVs follows the Poisson distribution with λ = 3600. The simulation runs for almost 8 hours, following the common pattern of business working time in peak hours, and the simulator updates the PV availability every 20 minutes. As mentioned, the online tasks can be commonly divided into latency-sensitive and latency-insensitive tasks; thus, when the latency tolerance of a task exceeds 20 ms, it is marked as a latency-sensitive request. In this paper, the offloading task requests arrive in the network following a Poisson process with an average rate varying from 10 to 120 requests per 100 time units. Each online task request has an exponentially distributed lifetime with an average of µ = 1200 time units. These workloads are extremely extensive for evaluating the proposed framework as well as the compared algorithms. Besides, the energy coefficient ε, the coefficient for the energy price ρ, and the unit price for each CPU cycle are set to 10^−24, 0.003, and 2 × 10^−9 [17], respectively. Other simulation parameters are detailed in Table 2.
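A rough sketch of the workload generator implied by these settings (Poisson arrivals, exponentially distributed lifetimes, and the latency-tolerance rule quoted above); the simulation horizon and the tolerance range are assumptions for illustration only.

```python
import random

def generate_tasks(horizon=10_000, rate_per_100tu=60, mean_lifetime=1200, seed=1):
    """Poisson arrivals (exponential inter-arrival gaps) with exponential lifetimes,
    expressed in the paper's abstract time units."""
    rng = random.Random(seed)
    mean_gap = 100.0 / rate_per_100tu          # mean inter-arrival time
    t, tasks = 0.0, []
    while True:
        t += rng.expovariate(1.0 / mean_gap)
        if t > horizon:
            break
        tolerance_ms = rng.uniform(5, 50)      # illustrative latency tolerances
        tasks.append({
            "arrival": t,
            "lifetime": rng.expovariate(1.0 / mean_lifetime),
            "latency_sensitive": tolerance_ms > 20,   # rule as stated in the text above
        })
    return tasks

print(len(generate_tasks()))   # roughly horizon * rate / 100 tasks
```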
B. PERFORMANCE RESULTS
For the small-size parking lot, the evaluation results are shown in Figs. 4, 5 and 6, whereas those of the medium-size parking lot are illustrated in Figs. 7 and 8. Finally, Fig. 9 depicts the execution time of the proposed GA-based algorithm measured for the different sizes of parking lots.

Fig. 4a indicates that our collaborative paradigm remarkably enhanced the average acceptance ratio by more than 40% in comparison with the Cloud-Edge and Cloud infrastructures at the arrival rate of 120, respectively. Additionally, the Cloud or Edge infrastructures performed worse due to their limited computation capacity in peak hours. Moreover, our proposed collaborative framework significantly saved offloading cost compared to the other infrastructures, as demonstrated in Fig. 4b. These results in Figs. 4a and 4b come from the fact that the collaborative framework exploited not only the typical cloud and edge computing capacities, but also those of the available PVs, allowing more online task requests to be processed. It also deployed the GA-based algorithm for optimizing the offloading cost, so the proposed infrastructure produced less cost than the others. Similarly, Cloud-Edge combined the computing resources of both cloud and edge computing, which helps the cloud-edge paradigm perform better than the separate cloud or edge paradigms, which could only utilize their own capacity. However, due to its merged computing capacities, the cloud-edge framework generated more offloading cost than the others except the cloud infrastructure. The core cloud indeed processed more tasks than the edge computing due to its larger computation capacity, but it also bore more cost than the edge.

In Fig. 5a, Baseline_1 performed worst in terms of the average offloading cost because of its offloading strategy with a simple heuristic filtering and scoring algorithm. Baseline_1 first carried out the filtering procedure to select the feasible nodes that met the task requirements, then scored them based on their current properties (e.g., computing resources). The node with the highest score that matched the task demands was selected; no offloading cost or utility factor was taken into account. In contrast, Baseline_2 was based upon a random mechanism for selecting the worker nodes and had a better performance than Baseline_1. Baseline_2 tended to perform well when the network was less congested, since it then had more options to choose from. Baseline_3 was primarily aimed at optimizing the offloading cost, so it performed best amongst the baseline algorithms, and its performance was indeed very comparable to the proposed GA-based algorithm.

Fig. 5a reveals that EdgeGA's performance was still better than Baseline_3 before the arrival rate of 80 and was only slightly similar afterwards. This is because the online tasks were most likely offloaded to PVs, producing a lower offloading cost. In terms of utility, as depicted in Fig. 5b, EdgeGA defeated the heuristics, followed by Baseline_2, Baseline_3, and Baseline_1, respectively. In fact, EdgeGA took the offloading cost as well as the utility into account, driven by the efficient fitness function (31), while the baseline algorithms did not consider utility in their node selection strategies. In Fig. 5b, Baseline_2 performed better than EdgeGA before the arrival rate of 40 because Baseline_2 had more node options for offloading the tasks when the network was less congested; however, starting from the arrival rate of 40, EdgeGA outperformed all compared algorithms since it smartly searched for the most appropriate worker nodes.
FIGURE 7. EdgePV performance on different sizes: (a) Acceptance ratio (b) Average cost (c) Average utility.
[12] F. Dressler, P. Handle, and C. Sommer, “Towards a vehicular cloud—Using parked vehicles as a temporary network and storage infrastructure,” in Proc. ACM Int. Workshop Wireless Mobile Technol. Smart Cities (WiMobCity), New York, NY, USA: Association for Computing Machinery, 2014, pp. 11–18, doi: 10.1145/2633661.2633671.
[13] E. Al-Rashed, M. Al-Rousan, and N. Al-Ibrahim, “Performance evaluation of wide-spread assignment schemes in a vehicular cloud,” Veh. Commun., vol. 9, pp. 144–153, Jul. 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2214209616301863
[14] T. Kim, H. Min, and J. Jung, “Vehicular datacenter modeling for cloud computing: Considering capacity and leave rate of vehicles,” Future Gener. Comput. Syst., vol. 88, pp. 363–372, Nov. 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167739X18300487
[15] C. Li, S. Wang, X. Huang, X. Li, R. Yu, and F. Zhao, “Parked vehicular computing for energy-efficient Internet of Vehicles: A contract theoretic approach,” IEEE Internet Things J., vol. 6, no. 4, pp. 6079–6088, Aug. 2019.
[16] S. Raza, W. Liu, M. Ahmed, M. R. Anwar, M. A. Mirza, Q. Sun, and S. Wang, “An efficient task offloading scheme in vehicular edge computing,” J. Cloud Comput., vol. 9, no. 1, p. 28, Jun. 2020, doi: 10.1186/s13677-020-00175-w.
[17] Y. Cao, Y. Teng, F. R. Yu, V. C. M. Leung, Z. Song, and M. Song, “Delay sensitive large-scale parked vehicular computing via software defined blockchain,” in Proc. IEEE Wireless Commun. Netw. Conf. (WCNC), May 2020, pp. 1–6.
[18] X. Hou, Y. Li, M. Chen, D. Wu, D. Jin, and S. Chen, “Vehicular fog computing: A viewpoint of vehicles as the infrastructures,” IEEE Trans. Veh. Technol., vol. 65, no. 6, pp. 3860–3873, Jun. 2016.
[19] X. Wang, Z. Ning, and L. Wang, “Offloading in Internet of Vehicles: A fog-enabled real-time traffic management system,” IEEE Trans. Ind. Informat., vol. 14, no. 10, pp. 4568–4578, Oct. 2018.
[20] X. Huang, P. Li, and R. Yu, “Social welfare maximization in container-based task scheduling for parked vehicle edge computing,” IEEE Commun. Lett., vol. 23, no. 8, pp. 1347–1351, Aug. 2019.
[21] K. Nguyen, S. Drew, C. Huang, and J. Zhou, “EdgePV: Collaborative edge computing framework for task offloading,” in Proc. IEEE Int. Conf. Commun. (ICC), Jun. 2021, pp. 1–6.
[22] K. Deb, Multi-Objective Optimisation Using Evolutionary Algorithms: An Introduction. London, U.K.: Springer, 2011, pp. 3–34, doi: 10.1007/978-0-85729-652-8_1.
[23] B. Gu, X. Zhang, Z. Lin, and M. Alazab, “Deep multiagent reinforcement-learning-based resource allocation for Internet of controllable things,” IEEE Internet Things J., vol. 8, no. 5, pp. 3066–3074, Mar. 2021.
[24] B. Gu, X. Yang, Z. Lin, W. Hu, M. Alazab, and R. Kharel, “Multiagent actor-critic network-based incentive mechanism for mobile crowdsensing in industrial systems,” IEEE Trans. Ind. Informat., vol. 17, no. 9, pp. 6182–6191, Sep. 2021.
[25] T. Salimans, J. Ho, X. Chen, S. Sidor, and I. Sutskever, “Evolution strategies as a scalable alternative to reinforcement learning,” 2017, arXiv:1703.03864.
[26] F. Petroski Such, V. Madhavan, E. Conti, J. Lehman, K. O. Stanley, and J. Clune, “Deep neuroevolution: Genetic algorithms are a competitive alternative for training deep neural networks for reinforcement learning,” 2017, arXiv:1712.06567.
[27] H. Mühlenbein, “Parallel genetic algorithms in combinational optimization,” in Computer Science and Operations Research, O. Balci, R. Sharda, and S. A. Zenios, Eds. Amsterdam, The Netherlands: Pergamon, 1992, pp. 441–453. [Online]. Available: http://www.sciencedirect.com/science/article/pii/B9780080408064500344
[28] M. Mitchell, An Introduction to Genetic Algorithms. Cambridge, MA, USA: MIT Press, 1998.
[29] Q. Lu, K. Nguyen, and C. Huang, “Distributed parallel algorithms for online virtual network embedding applications,” Int. J. Commun. Syst., p. e4325, Jan. 2020. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1002/dac.4325
[30] H. Zhu and C. Huang, “VNF-B&B: Enabling edge-based NFV with CPE resource sharing,” in Proc. IEEE 28th Annu. Int. Symp. Pers., Indoor, Mobile Radio Commun. (PIMRC), Oct. 2017, pp. 1–5.

KHOA NGUYEN received the M.Sc. degree in telecommunications engineering from the University of Sunderland, U.K., in 2013, and the Ph.D. degree in electrical and computer engineering from the Department of Systems and Computer Engineering, Carleton University, Canada, in 2021. His main research interests include communication networks, cloud/edge computing, parked vehicle edge computing (PVEC), the Internet of Vehicles (IoV), software-defined networks (SDN), network function virtualization (NFV), containerization technologies, and machine learning.

STEVE DREW (Member, IEEE) received the B.Sc. degree from Beijing Jiaotong University, in 2008, the M.Sc. degree from the Chinese Academy of Sciences, in 2011, and the Ph.D. degree in electrical and computer engineering from Carleton University, in 2018. He was the Chief Architecture and Security Officer at BitOcean Global and the Founder of BitQubic. He was a Senior Cloud Engineer at Cisco NFV Group. He is currently an Assistant Professor at the Department of Electrical and Software Engineering, University of Calgary. His research interests include edge computing, cloud-native initiatives towards network services, and blockchain services.

CHANGCHENG HUANG (Senior Member, IEEE) received the B.Eng. and M.Eng. degrees in electronic engineering from Tsinghua University, Beijing, China, in 1985 and 1988, respectively, and the Ph.D. degree in electrical engineering from Carleton University, Ottawa, ON, Canada, in 1997. From 1996 to 1998, he worked with Nortel Networks, Ottawa, where he was a Systems Engineering Specialist. He was a Systems Engineer and a Network Architect with the Optical Networking Group, Tellabs, Naperville, IL, USA, from 1998 to 2000. Since July 2000, he has been with the Department of Systems and Computer Engineering, Carleton University, where he is currently a Full Professor. He won the CFI New Opportunity Award for building an Optical Network Laboratory in 2001. He is an Associate Editor of Photonic Network Communications (Springer).

JIAYU ZHOU (Member, IEEE) received the Ph.D. degree in computer science from Arizona State University, in 2014. He is currently an Associate Professor at the Department of Computer Science and Engineering, Michigan State University. His research has been funded by the National Science Foundation, the National Institutes of Health, and the Office of Naval Research, and he has published more than 100 peer-reviewed journal and conference papers in data mining and machine learning. His research interests include large-scale machine learning, data mining, and biomedical informatics, with a focus on transfer and multi-task learning. He was a recipient of the National Science Foundation CAREER Award (2018). His papers received the Best Student Paper Award at the 2014 IEEE International Conference on Data Mining (ICDM), the Best Student Paper Award at the 2016 International Symposium on Biomedical Imaging (ISBI), and the Best Paper Award at the 2016 IEEE International Conference on Big Data (BigData).