ICC 2021 conference

EdgePV: Collaborative Edge Computing Framework for Task Offloading

Khoa Nguyen (Carleton University, Ottawa, ON, Canada) [email protected]
Steve Drew (BitQubic Corp., Kanata, ON, Canada) [email protected]
Changcheng Huang (Carleton University, Ottawa, ON, Canada) [email protected]
Jiayu Zhou (Michigan State University, East Lansing, MI, USA) [email protected]

Abstract—Recent research has pointed out that almost all vehicles spend over 95% of their time in parking lots, where their powerful computing resources are wasted. In this paper, we propose a novel collaborative computing paradigm that efficiently offloads online heterogeneous computation tasks to parked vehicles during peak hours. A container orchestration based on Kubernetes is integrated into the infrastructure for its cutting-edge features such as auto-healing, load balancing, and security. We formulate the offloading problem analytically and present an intelligent metaheuristic algorithm to address dynamic online demands. Extensive evaluation demonstrates that our proposed paradigm improves the task acceptance ratio and the average offloading cost by more than 40% under high task arrival rates compared with a set of existing algorithms.

Index Terms—Parked Vehicles, Edge Computing, Container Orchestration, Kubernetes.

[Fig. 1: Edge computing architecture integrated with parked vehicles (PVs) enabled by Kubernetes. The figure shows a Kubernetes master node in the edge servers beside the base station, with the core network, the edge servers, and parked vehicles acting as worker nodes connected over wired and wireless links.]

I. INTRODUCTION

The number of vehicles has been dramatically increasing in the last decade and is predicted to reach two billion by 2035 [1]. Many of these vehicles are equipped with powerful on-board computing hardware (e.g., CPU, GPU) to provide modern advanced features such as autopilot, intelligent radar, and safety sensing systems. Such on-board facilities, supporting level-four autonomous driving, may cost thousands of dollars, yet the resource utilization of these vehicles is extremely low during parking time. Recent studies have indicated that 70% of individual vehicles spend almost 95% of their time parked in parking lots, street parking, and home garages [1], [2].

In America, for instance, the average daily driving time was merely 50.6 minutes according to the AAA Foundation for Traffic Safety survey in 2016 [3]. These statistics reveal that the powerful on-board hardware of vehicles is idle most of the time, offering a great opportunity to exploit these overlooked computation resources for additional services. The neglected computational resources of PVs can be an excellent candidate for Mobile Edge Computing (MEC)
where the conventional computation and storage services usually offered by the remote cloud are migrated to the network edge. With the advent of PVs, MEC capacity can be enlarged. However, a collaborative framework between the cloud-edge and PVs complicates the task offloading problem of determining appropriate network resources to handle given tasks. Moreover, online heterogeneous tasks can be classified into delay-sensitive ones (e.g., mobile gaming, autopilot, video surveillance) and delay-insensitive ones with stringent computing requirements (e.g., health monitoring, vehicular sensing, location-based augmented reality games) [4]. The explosive growth of data traffic with arbitrary requirements, whether computation-intensive or delay-sensitive, will impose a heavy burden on the existing infrastructure during peak hours. Incoming tasks that cannot be processed at the cloud due to delay-sensitive requirements can be offloaded to the edge network. Because of the limited capacity at both the remote cloud and the edge during peak hours, networks can rapidly become congested as the number of tasks increases. On the other hand, the parking time of PVs is uncertain, which makes PV nodes unreliable for running applications and services. Thus, a new network design is desired to solve these problems.

Applying a generic container orchestration to edge computing enhanced by PVs is still in its infancy. Container orchestration (e.g., Kubernetes) enables PVs to efficiently run several replicas of a task simultaneously, since it allows fast boot-up, auto-scaling, self-healing, and rapid termination. These agile features are critical for addressing the uncertainty of the limited parking duration of PVs. Moreover, containerization has lower hardware requirements and offers lower operating costs and resource isolation, as each container independently runs a replica of a given task. However, deciding where the replicas of a task are offloaded in a collaborative infrastructure, meeting rigid resource and reliability constraints while minimizing network costs, is not an easy task. For example, if all task replicas are allocated to a single node (e.g., a PV) and this node suffers an unexpected failure (e.g., an outage or a sudden vehicular departure), the services provided through the running containerized task will be interrupted. To guarantee reliability, a minimum proportion of replicas should run on different worker nodes; in this paper we cap the proportion of replicas placed on a single PV node at 50%. This proportion can be determined and adjusted easily by service providers (SPs) depending on their network service strategies.

In this paper, we propose EdgePV, a novel collaborative architecture in which PVs expand the existing resource capacity of the cloud-edge infrastructure to handle online containerized tasks during peak hours at the edge. An incoming task is abstracted as multiple replicas running in independent containers in a containerization framework. The online task offloading problem of the proposed collaborative framework is formulated as a Binary Integer Program (BIP) that minimizes offloading costs while maximizing accumulative rewards. Efficiently scheduling the tasks, and specifically the task replicas, throughout the collaborative paradigm under stringent constraints is still unsolved. Hence, we propose a Genetic Algorithm (GA), a mature metaheuristic, to solve the task offloading problem while meeting rigid task requirements (e.g., delay sensitivity) with low costs and high reliability. Furthermore, owners of PVs who sell their idle resources can obtain accumulating reward points (user utility) that can be converted into parking tickets, gift cards, shopping vouchers, gas, and so on.

As illustrated in Fig. 1, we suggest that the edge server implements Kubernetes as a master node, whereas the remote cloud, the available computing resources of the edge server, and PVs run as worker nodes. PVs are installed with a lightweight Kubernetes distribution (e.g., K3s) and are admitted as preemptible nodes in the network due to the uncertainty of their parking time. The master node manages the state of the cluster, schedules the containers, and accepts or rejects task requests. EdgePV involves PVs in parking lots, which are more common and reliable than buses due to their lower mobility. A scheduler in the master node conducts pod (running container) placement across the set of available nodes. This proposed architecture improves the elasticity and agility of the existing infrastructure to cope with possible service disruption caused by the mobility of PVs, which has not been solved before. It is also envisaged that PVs could be completely electric in the near future and would charge automatically while parking. Moreover, D2D technology could be integrated into this collaborative infrastructure, but it is beyond the scope of this research, whose main goal is to tackle the online task offloading problem. The rest of this paper is organized as follows. Related work is presented in Section II. Section III formulates the problem. Section IV describes the proposed GA algorithm based on the problem formulation. Section V introduces the compared algorithms, and the simulation results are shown in Section VI. Section VII concludes the paper.

II. RELATED WORK

PVs as infrastructure have recently received significant attention since they expand the existing computing infrastructure for computation, communication and storage (CCS). Enabling PVs for vehicular cloud computing in the Internet of Vehicles has been studied in [5]-[11]. Arif et al. [5] studied a simple model of a vehicular cloud (VC) formed by PVs at an international airport. [6] presented a multi-layered vehicular cloud architecture based on cloud computing and Internet of Things (IoT) technology. Similarly, the recognition of a VC erected on PVs in a parking lot as a spatial-temporal network architecture for CCS was investigated in [7]-[9], [12]. Additionally, [10] considered the feasibility of PVs as a computation paradigm and introduced an incentive algorithm offering accumulating rewards to PVs for selling their resources. Moreover, Hou et al. [13] approached vehicular fog computing (VFC), which exploits connected PVs as infrastructure to handle real-time services at the edge. Similarly, [14] presented a fog computing architecture deployed in Internet of Vehicles (IoV) systems to provide computational resources to end users with latency guarantees. Recently, Parked Vehicle Edge Computing (PVEC), in which PVs serve as accessible edge computing nodes for task allocation, has been proposed in [1], [3], [15]. The authors in [1] explored opportunistic resources to handle computational tasks in a combined infrastructure of vehicle edge computing (VEC) servers and PVs. [3] introduced a dynamic pricing strategy aimed at minimizing average costs while meeting QoS constraints. Additionally, a containerized task scheduling scheme enabled by PVEC was proposed in [15], concerning social welfare optimization for both users and PVs. [11] proposed a scalable vehicle-assisted MEC infrastructure that integrates the remote cloud, MEC, and mobile volunteer vehicles (buses) to process task requests from IoT devices. It may look similar to the idea of this paper, but our work is aimed at solving the online task offloading problem in a container-based computing framework (EdgePV) concerning the allocation of several task replicas. Our proposed algorithm takes into account not only the network costs but also the accumulating rewards achieved by selling the computational resources of PVs.
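The replica-spread rule stated in the Introduction (and formalized later as constraint (24)) caps the share of a task's replicas on a single PV at 50%. A minimal sketch of that check is shown below; the paper exempts cloud/edge nodes, while this simplified version checks every node, and the function and node names are illustrative, not from the paper:

```python
from collections import Counter

def placement_is_reliable(placement, max_fraction=0.5):
    """Check the replica-spread rule: no single node may host more than
    `max_fraction` of a task's replicas. `placement` maps a replica
    index to a node id (hypothetical names)."""
    counts = Counter(placement.values())
    total = len(placement)
    return all(c / total <= max_fraction for c in counts.values())

# Four replicas over three PVs: the busiest node holds 2/4 = 50%, allowed.
ok = placement_is_reliable({0: "pv1", 1: "pv1", 2: "pv2", 3: "pv3"})
# Three of four replicas on one PV (75%) violates the rule.
bad = placement_is_reliable({0: "pv1", 1: "pv1", 2: "pv1", 3: "pv2"})
```

An SP could tune `max_fraction` to match its own service strategy, as the paper notes.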
III. PROBLEM FORMULATION

In this section, we formulate our model, which considers the resource constraints of the network edge, with the orchestration scheduler placed in the edge server beside a 5G base station (BS).

A. System Model

In this paper, the resources of concern are CPU, memory and bandwidth. The network edge consists of various types of worker nodes (cloud, edge and PVs) that are connected to the master node located in the edge server via different links. For example, the cloud node connects to the master node via an optical link, while PVs integrate into the edge via wireless links, where the available bandwidth of a vehicle depends on its distance to the BS. Thus, the network edge can be considered a star topology whose root is the master node and whose leaves are the worker nodes. As illustrated in Fig. 1, a typical outdoor parking lot is investigated, where PVs are initially required to register their information (e.g., owner's ID, license plate, preferred parking availability) and vehicle resources (e.g., computational capacity, storage) with an SP. When PVs arrive at a parking lot or complete tasks, they can send or update their state information to the SP or the master node. The edge network is modeled as a directed graph G = (N, L), where N is the set of worker nodes and L is the set of corresponding links. The edge server connects to the remote cloud via an optical link, and PVs connect to the network edge via wireless links, denoted as l_c and l_v respectively. Each worker node can initialize several pods to run containers processing task replicas simultaneously with QoS guarantees. A given task k has CPU c(k), memory m(k), bandwidth b(k), tolerable maximum latency t_m(k), and a set of replicas \eth(k) as requirements. k_j denotes the j-th replica of task k \in K, with \sum_j k_j = \eth(k). A worker node n_i \in N has its own resource capacity to operate a limited number of containers. Denote C(n_i) and M(n_i) as the CPU and memory capacity that the i-th worker node can provide, respectively. Let K_c, K_e and K_p denote the sets of tasks offloaded to the cloud, edge and PVs, respectively. The residual CPU and memory capacity of a worker node can be computed as:

R_C^u(n_i) = C(n_i) - \sum_{k \in K} \sum_{j \in \eth(k)} c(k_j), \quad \forall n_i \in N   (1)

R_M^u(n_i) = M(n_i) - \sum_{k \in K} \sum_{j \in \eth(k)} m(k_j), \quad \forall n_i \in N   (2)

where u denotes the worker node type (cloud: c, edge: e, PVs: p).

B. Channel Model

1) Core network offloading latency: Total network latency comprises the data transmission time and the task execution time. The former is highly correlated with the remaining bandwidth of the link l_c, while the latter depends on how busy the cloud is handling the offloaded tasks or operating other services. A large number of tasks offloaded to the cloud via l_c and low residual resources increase network latency, especially in peak hours. Thanks to accelerating technologies in data centers, the delay caused by writing or accessing data volumes from memory can be neglected. The cloud offloading latency t_c(k), including the transmission delay and the processing delay, can be computed as:

t_c(k) = \max_{j \in \eth(k)} \left\{ \frac{\chi_{k_j}}{\xi_c} + \frac{\chi_{k_j} f_{k_j}}{R_C^c(n_i)} + \frac{d}{v} + T_h \right\} \le t_m(k)   (3)

where \xi_c and f_k denote the transmission rate of the server and the number of CPU cycles required to compute one bit, respectively. Thus, the total number of CPU cycles required to compute task k can be expressed as \chi_k f_k. d, v and T_h are the distance between the core cloud and the edge cloud, the speed of light, and the constant time for handling an incoming task, respectively. Edge devices transmit tasks to the edge servers via wired or wireless links (base station) for processing. For simplicity, the delay caused by handling a task can be described as T_h = \chi_k / (B_e \nu), where \nu is a discount factor that reflects fluctuations of bandwidth at the edge (0 < \nu < 1). Unlike the remote cloud, when the task is managed at the edge, the delay is caused only by the remaining computational capacity available to process the task. The edge offloading latency t_e(k) can be computed as:

t_e(k) = \max_{j \in \eth(k)} \left\{ \frac{\chi_{k_j} f_{k_j}}{R_C^e(n_i)} + T_h \right\} \le t_m(k)   (4)

2) PVs latency: Unlike the cloud/edge nodes, which can be considered stable, PVs must be treated as preemptible nodes due to their uncertain mobility. Increasing the number of replicas is a possible approach to avoid service disruption: with more replicas, the master node has more time to migrate the current task to other nodes for QoS guarantees.

Similar to [11], we leverage LTE-A for wireless communication between the base station and PVs, and consider a system applying an orthogonal frequency-division multiple access (OFDMA) scheme. Let d_{bs,p} denote the distance between the base station and the p-th PV. The path loss between the base station and a parked vehicle is characterized as d_{bs,p}^{-\sigma}, with white Gaussian noise power N_0, where \sigma is the path loss exponent. The corresponding wireless channel is modeled as a frequency-flat block-fading Rayleigh fading channel, denoted h. Accordingly, the data rate capacity of the p-th PV can be expressed as:

\xi_p = B_p \log_2 \left( 1 + \frac{P_{TX} \, d_{bs,p}^{-\sigma} \, |h|^2}{N_0 + I} \right)   (5)

where B_p denotes the channel bandwidth, and P_{TX} and I represent the transmission power of the base station and the inter-cell interference, respectively. The PVs offloading latency t_p(k) can be computed as:

t_p(k) = \max_{j \in \eth(k)} \left\{ \frac{\chi_{k_j}}{E[\xi_p]} + \frac{\chi_{k_j} f_{k_j}}{R_C^p(n_i)} + T_h \right\} \le t_m(k)   (6)

In this paper, we define two types of online tasks, delay-sensitive and delay-insensitive, akin to [4]. The former are handled only at the edge node or PVs due to their close proximity, while the latter can be placed on any network node (cloud, edge or PVs). Next, we compute the costs of mapping the task replicas onto the worker nodes through the collaborative computation platforms.
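To make the latency model of Eqs. (3)-(5) concrete, the sketch below computes the wireless data rate and the cloud/edge offloading latencies for one task's replicas. The channel numbers echo Table I (B_p = 10 MHz, P_TX = 1.3 W, N_0 = 3e-13, sigma = 2); the distance, channel gain and all other arguments are illustrative assumptions, not values from the paper:

```python
import math

def wireless_rate(bandwidth_hz, tx_power_w, distance_m, sigma, h_gain, noise_w, interference_w):
    """Data-rate capacity of the p-th PV, eq. (5): Shannon capacity over
    a fading link with path loss distance^(-sigma)."""
    snr = tx_power_w * distance_m ** (-sigma) * abs(h_gain) ** 2 / (noise_w + interference_w)
    return bandwidth_hz * math.log2(1 + snr)

def cloud_latency(replica_sizes, xi_c, f_k, residual_cpu, d, v, t_h):
    """Cloud offloading latency, eq. (3): transmission + processing +
    propagation (d / v) + handling time; the slowest replica dominates."""
    return max(chi / xi_c + chi * f_k / residual_cpu + d / v + t_h
               for chi in replica_sizes)

def edge_latency(replica_sizes, f_k, residual_cpu, t_h):
    """Edge offloading latency, eq. (4): processing + handling time only."""
    return max(chi * f_k / residual_cpu + t_h for chi in replica_sizes)

# Table I channel values with an assumed 100 m distance and unit gain.
rate = wireless_rate(10e6, 1.3, 100.0, 2, 1.0, 3e-13, 0.0)
```

A task would then be admitted at a node only if the corresponding latency stays below its tolerance t_m(k), as constraint (29) later requires.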
The mapping costs involve the sum of the total number of CPU cycles required to compute the task replicas, the memory, the bandwidth, and the energy consumption (e.g., battery) for operating replicas at PVs, since the cloud and edge computing platforms possess very high energy efficiency. The offloading cost at the cloud can be expressed as:

\Xi_{C_c}(k) = \sum_{j \in \eth(k)} \frac{W_{C_c} \, \chi_{k_j} f_{k_j}}{C_c - \sum_{k' \in K_c} c(k') + \delta}   (7)

\Xi_{M_c}(k) = \sum_{j \in \eth(k)} \frac{W_{M_c} \, m(k_j)}{M_c - \sum_{k' \in K_c} m(k') + \delta}   (8)

\Xi_{B_c}(k) = \sum_{j \in \eth(k)} \frac{W_{B_c} \, \chi(k_j)/t_m(k_j)}{B_c - \sum_{k' \in K_c} \chi(k')/t_m(k') + \delta}   (9)

where \delta is a small positive number to prevent division by zero. The total cost of offloading a task to the cloud is:

\Xi_k^c = \Xi_{C_c}(k) + \Xi_{M_c}(k) + \Xi_{B_c}(k)   (10)

When the given task is processed at the edge, it can be considered local processing, so the offloading cost at the edge is computed as:

\Xi_{C_e}(k) = \sum_{j \in \eth(k)} \frac{W_{C_e} \, \chi_{k_j} f_{k_j}}{C_e - \sum_{k' \in K_e} c(k') + \delta}   (11)

\Xi_{M_e}(k) = \sum_{j \in \eth(k)} \frac{W_{M_e} \, m(k)}{M_e - \sum_{k' \in K_e} m(k') + \delta}   (12)

The total cost of offloading a task to the edge is:

\Xi_k^e = \Xi_{C_e}(k) + \Xi_{M_e}(k)   (13)

Similarly, the offloading cost of a task to PVs can be computed with an additional energy consumption attribute:

\Xi_{C_p}(k) = \sum_{j \in \eth(k)} \frac{W_{C_p} \, \chi_{k_j} f_{k_j}}{C_p - \sum_{k' \in K_p} c(k') + \delta}   (14)

\Xi_{M_p}(k) = \sum_{j \in \eth(k)} \frac{W_{M_p} \, m(k)}{M_p - \sum_{k' \in K_p} m(k') + \delta}   (15)

\Xi_{B_p}(k) = W_{B_p} \frac{\chi_{k_j}}{t_m(k) \, \xi_p}, \quad \forall k \in K_p   (16)

E_p(k) = \sum_{j \in \eth(k)} \chi_{k_j} f_{k_j} \, e_p   (17)

where e_p is a coefficient obtained by:

e_p = \epsilon \, (R_C^p(n_i))^2   (18)

where \epsilon denotes the energy coefficient. The total cost of offloading a task k to a PV is:

\Xi_k^p = \Xi_{C_p}(k) + \Xi_{M_p}(k) + \Xi_{B_p}(k) + \varsigma E_p(k)   (19)

where \varsigma is an energy cost coefficient.

3) PVs' Utility: To encourage PVs to sell their idle resources while parked, the owners of PVs should receive rewards when they accept to process tasks on their vehicles. Let \phi_p represent the revenue from accepting tasks; the utility of a PV can be calculated as:

\varpi_p = \phi_p - \rho E_p(k)   (20)

where \rho denotes a coefficient of energy price, and \phi_p can be expressed as:

\phi_p = \sum_{j \in \eth(k)} \left( \sigma \, r_p^c \, \chi_{k_j} f_{k_j} + r_p^m \, m(k_j) \right)   (21)

where r_p^c and r_p^m are the unit prices for offering CPU and memory resources, respectively. Note that minimizing the cost of embedding tasks increases the economic benefit gained by accepting to process the requested tasks on PVs.

Variables:

A_{k_j}^c = 1 if k_j is deployed on the cloud, \forall j \in \eth(k); 0 otherwise.
A_{k_j}^e = 1 if k_j is deployed on the edge, \forall j \in \eth(k); 0 otherwise.
A_{k_j}^p = 1 if k_j is deployed on PVs, \forall j \in \eth(k); 0 otherwise.

Objective:

Minimize \sum_{k \in K} \sum_{j \in \eth(k)} \Xi_k^c A_{k_j}^c + \Xi_k^e A_{k_j}^e + \left( \eta \Xi_k^p + (1-\eta) \frac{1}{\phi_p^k} \right) A_{k_j}^p   (22)

w.r.t. A_{k_j}^c, A_{k_j}^e, A_{k_j}^p

Constraints:

A_{k_j}^c + A_{k_j}^e + \sum_{p \in N} A_{k_j}^p = 1, \quad j \in \eth(k), \forall k \in K   (23)

1 \le \sum_{j \in \eth(k)} A_{k_j}^p \le \frac{\eth(k)}{2}, \quad \forall k \in K   (24)

\sum_j k_j = \eth(k), \quad \forall k \in K   (25)

R_C^c(n_i), R_C^e(n_i), R_C^p(n_i) \ge c(k), \quad n_i \in N   (26)

R_M^c(n_i), R_M^e(n_i), R_M^p(n_i) \ge m(k), \quad n_i \in N   (27)

R_B^c(n_i), R_B^e(n_i), \xi_{n_i} \ge b(k), \quad n_i \in N   (28)

t_c, t_e, t_p \le t_m(k)   (29)

Remarks:
• Function (22) has dual objectives: minimizing the cost of offloading computation tasks and maximizing the PV rewards when tasks are offloaded to PVs, where \eta is a damping factor within (0, 1).
• Constraint (23) ensures that each task replica is deployed at only one worker node.
• Constraint (24) guarantees that no more than 50% of a task's replicas can be placed on the same PV node.
• Constraint (25) makes sure that the total number of scheduled replicas is at least equal to the required number of replicas of the corresponding task.
• Constraints (26), (27) and (28) ensure that the residual resources of the worker nodes (cloud, edge, PVs) satisfy the capacity requirements of the task.
• Constraint (29) ensures that the selected nodes satisfy the latency constraints.
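On a toy instance, the BIP above can be solved by exhaustive search, which makes constraints (23)-(24) concrete before the GA of the next section is introduced. The node names and per-replica costs below are hypothetical placeholders, not values from the paper, and the per-replica cost table stands in for the full cost terms of Eq. (22):

```python
from itertools import product

def best_placement(n_replicas, nodes, cost_per_replica):
    """Exhaustive search over replica placements: each replica goes to
    exactly one node (constraint (23)), at least one replica uses a PV
    and no single PV hosts more than half the replicas (constraint (24)),
    minimizing the summed per-replica cost (a stand-in for eq. (22))."""
    best, best_cost = None, float("inf")
    for assign in product(nodes, repeat=n_replicas):
        pvs = [n for n in assign if n.startswith("pv")]
        if not pvs:                          # lower bound of constraint (24)
            continue
        if any(pvs.count(n) > n_replicas // 2 for n in set(pvs)):
            continue                         # 50% cap of constraint (24)
        total = sum(cost_per_replica[n] for n in assign)
        if total < best_cost:
            best, best_cost = assign, total
    return best, best_cost

costs = {"cloud": 9.0, "edge": 4.0, "pv1": 1.0, "pv2": 2.0}
plan, total = best_placement(4, list(costs), costs)
# The 50% cap forces the four replicas to spread over pv1 and pv2 (two each).
```

The search space grows as |nodes|^replicas, which is exactly why the paper resorts to a metaheuristic for online demands.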
IV. OUR PROPOSED GENETIC ALGORITHM

A. Description of the Genetic Algorithm

The GA is a mature metaheuristic motivated by the Darwinian principle of evolution through natural selection, comprising four major operations: initialization, selection, crossover and mutation.¹ To solve the BIP problem, we present a distributed parallel GA-based algorithm that operates on a predefined number of independent machines, denoted p, to explore feasible solutions, widely known as chromosomes. The design of our proposed parallel algorithm is depicted in Fig. 2, in which p is set to 16. As illustrated, the offloading procedures work successively under a master node (e.g., for synchronization). Several worker nodes run a GA to discover as many feasible solutions as possible for replica scheduling. The best solutions from the worker nodes are synchronized to select the final solution for task offloading. Our proposed algorithm schedules multiple task replicas at once instead of mapping them sequentially.

[Fig. 2: Parallel operation scheme. Each of the p workers runs population initialization, selection, crossover, mutation, sorting and termination; the master synchronizes the per-worker results and produces the final scheduling.]

Chromosome: A chromosome C_f represents a scheduling solution for the set of replicas of a given task request, selected at random from the available nodes. Each gene in a chromosome is a mapping solution for a single task replica. With G genes and M chromosomes, the initial population P (of size M x G) at the k-th working machine can be delineated as:

P = \begin{bmatrix} C_1 \\ C_2 \\ \vdots \\ C_f \\ \vdots \\ C_M \end{bmatrix} = \begin{bmatrix} g_1^1 & \cdots & g_1^j & \cdots & g_1^G \\ g_2^1 & \cdots & g_2^j & \cdots & g_2^G \\ \vdots & & \vdots & & \vdots \\ g_f^1 & \cdots & g_f^j & \cdots & g_f^G \\ \vdots & & \vdots & & \vdots \\ g_M^1 & \cdots & g_M^j & \cdots & g_M^G \end{bmatrix}   (30)

When a chromosome is established from N potential genes that have passed the feasibility process, it is defined as a feasible solution for a task request.

Fitness Function: Our objective is to minimize the scheduling costs and maximize the network utility when replicas of a given task are offloaded to PVs. Fitness values are used to evaluate the quality of a scheduling solution, so a higher fitness value represents a better solution:

F(k) = \sum_{j \in \eth(k)} \frac{1}{\Xi_k^c} A_{k_j}^c + \frac{1}{\Xi_k^e} A_{k_j}^e + \left( (1-\eta) \frac{1}{\Xi_k^p} + \eta \, \phi_p^k \right) A_{k_j}^p   (31)

New generations: We randomly select chromosomes to become parents and generate a new population. New chromosomes are produced as a result of the crossover and mutation operations. To improve diversity, the population is updated with these newly generated individuals; it consequently evolves, increasing the possibility of achieving a near-optimal scheduling solution.

B. Termination and Synchronization

A parallel operation typically consists of concurrent processes, each finishing its job at a different time. Waiting until all processes complete their assignments is impractical, as one or more might take much longer (e.g., due to deadlock). Thus, if no better solution is found within t iterations, where t is a termination parameter, the master procedure terminates the worker nodes to reduce the total execution time. Moreover, the feasible solutions received from the worker nodes are synchronized to determine the best scheduling solution for the corresponding task request based on the fitness values. If the task is accepted, its replicas are placed onto the corresponding nodes in the network following the scheduling solution found. The substrate network then updates its residual computational resources to end the scheduling procedure.

V. COMPARED ALGORITHMS

We propose a metaheuristic algorithm that minimizes the embedding cost while efficiently improving the PVs' utility by selecting proper worker nodes to run the task replicas. All the compared algorithms are allowed to schedule multiple requested task replicas onto the same worker node, but the proportion of replicas placed on one node cannot exceed 50% (except for cloud/edge nodes) in order to ensure service reliability; this setting can be flexibly changed by the SP. We investigate the scheduling efficiency of our proposed GA-based algorithm by comparing it with several heuristic algorithms: Baseline 1, Baseline 2 and Baseline 3. Baseline 1 uses the Kubernetes default scheduler with its filtering and scoring procedures, while Baseline 2 schedules online tasks by selecting worker nodes at random. Furthermore, Baseline 3 deploys a branch-and-bound selection policy to handle incoming tasks [17]. Three performance metrics are used for evaluation: the task acceptance ratio (A/R), the cost, and the accumulated utility.

¹ Due to page limitations, more details about the GA can be found in [16].
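A minimal single-machine sketch of the GA loop of Fig. 2 (initialization, selection, crossover, mutation, sorting) is given below. The toy fitness function only mimics the PV-reward preference of Eq. (31), and all names, rates and sizes are illustrative assumptions rather than the paper's implementation:

```python
import random

def run_ga(nodes, fitness, n_replicas, pop_size=30, generations=40, seed=0):
    """Single-worker GA sketch: a chromosome is one placement with one
    gene per replica. The paper's parallel scheme runs p such loops on
    independent machines and synchronizes the best chromosome found."""
    rng = random.Random(seed)
    # Initialization: random placements over the available nodes.
    pop = [[rng.choice(nodes) for _ in range(n_replicas)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)        # sorting: higher fitness first
        parents = pop[: pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_replicas)     # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                 # mutation
                child[rng.randrange(n_replicas)] = rng.choice(nodes)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Toy fitness rewarding replicas placed on PV nodes (hypothetical names).
nodes = ["cloud", "edge", "pv1", "pv2"]
best = run_ga(nodes, lambda ch: sum(n.startswith("pv") for n in ch), n_replicas=4)
```

In a faithful implementation, the fitness would be Eq. (31) and infeasible chromosomes (violating constraints (23)-(29)) would be filtered out before sorting.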
TABLE I: Simulation Parameter Settings
1.0 Cloud
Edge
Parameter Values
Cloud-Edge
Cloud-Edge-PVs
Maximum parking capacity 50
0.8
Total simulation time 30,000 seconds
Vehicle lifetime [480-14400] seconds
Acceptance ratio

0.6
C c / M c / Bc 50GHz/1000MB/1Gbps
0.4
C e / Me 20GHz / 500MB
W{Cc ,Mc ,Bc } 750
0.2
W{Ce ,Me } 250
W{Cp ,Mp ,Bp } 10
0.0 Channel Bandwidth Bp 10 MHz
1.3W / 3 × 10−13 / 2
20 40 60 80 100 120
Task arrival rate PT X / N0 / σ
CPU Parked Vehicles Cp [1.5-2.0] GHz
Fig. 3: Acceptance ratio Input data size χk [100 - 300] kb
CPU cycles per bit fk 1000 cycles
200
Cloud Memory requests m(k) [20-50] MB
Edge
175
Cloud-Edge Tolerable latency of tasks tm (0-100] ms
150
Cloud-Edge-PVs
Arrival request rates A/R [10-120]
Request replications ð(k) [2 - 10]
Average cost

125
rpc / rpm 10 / 100
100

75
40% compared to Cloud-Edge and Cloud architectures at the
task arrival rate of 120, respectively. Cloud or edge itself gets
50
lowest acceptance ratio due to their limited resource capacity
25 during peak hours. By preferring PVs for task offloading,
20 40 60 80 100 120 Cloud-Edge-PVs achieved the lowest average cost values
Task arrival rate
compared to all compared platforms as illustrated in Fig. 4.
Fig. 4: Average costs between architectures In performance evaluation between proposed algorithms as
illustrated in Fig. 5, Baseline 1 performed worst in terms of
VI. N UMERICAL R ESULTS average costs due to its allocating strategies through filtering
and scoring procedures while Baseline 2 based on a random
A. Simulation setup strategy to select the worker nodes performed better than
We have developed a discrete event simulator to evaluate Baseline 1. Amongst baseline algorithms, Baseline 3 was
the proposed algorithms. PVs dynamically arrive and depart originally designed to target reducing the offloading cost so
from a parking lot with 50 free parking spots. In fact, the its performance was comparative to our proposed algorithm,
parking lot can fully be occupied, but we assume that the but EdgeGA was still better than Baseline 3 until arrival rate
parking capacity remains at least 50% up to 85% in peak of 80 and seemed to be lightly the same afterward as shown
hours since not all of PVs are willing to sell their resources or in Fig. 5a. Online heterogeneous tasks were mostlikely
are qualified to join into the network to provide the services. to be assigned to PVs which expectedly produce lower
Furthermore, it is observed that parking duration of PVs is offloading costs. In utility metric, EdgeGA outperformed all
ranging from 08 to 240 minutes [1] or 30 to 120 minutes compared algorithms following Baseline 2,Baseline 3 and
[12]. More than 85% of PVs spend maximum three hours in Baseline 1 respectively as depicted in Fig. 5b. The reason is
a parking lot and serviceability probability of PVs achieves that EdgeGA simultaneously took both cost and utility into
around 90% at 60 minutes [1]. In this paper, the accumulative account driven by an efficient fitness function (31). Besides,
parking duration of PVs is following Poisson distribution we evaluated the availability of PVs regarding the acceptance
with λ = 3600. As discussed in previous sections, the ratios on several arrival rates as depicted in Figure 5c. It has
online requested tasks can be classified into delay-sensitive been demonstrated that depending on the selected arrival
and delay-insensitive tasks. If the delay tolerance of a task rates, each had different preferable PV availability. For
exceeds 20 ms, we considered it as a delay-sensitive demand. example, arrival rates (10, 20, 30, 40, 50) required 60%
Our simulation run approximately for 8 hours (peak business availability of PVs to exceed 80% acceptance ratios while
working time) and the simulator will update the PVs every arrival rates of 60 and 80 needed to reach 80% and 100%
20 minutes. Additionally, energy coefficient , coefficient to obtain the same result, respectively. These information
for energy price ρ and unit price for each CPU cycle σ are is vital for the network planners to achieve desired Key
set to 10−24 , 0.003 and 2 × 10−9 [12], respectively. Details Performance Indicators (KPIs) by adopting appropriate
of simulation parameters can be found in Table I. strategies. For instance, SP may increase user incentives to
B. Performance Results appeal more PVs join into the network, offload to another
As illustrated in Fig. 3, Cloud-Edge-PVs paradigm cluster or expand edge server capacity. Furthermore, our
extends the resource availability of the network that increases proposed GA-based algorithm successfully processed a
the possibility of accepting the incoming requests more than given task in average 1.217ms compared to 14.725ms
200
Baseline_1 Baseline_1 1.0
140
Baseline_2 Baseline_2
180
Baseline_3 Baseline_3
120 EdgeGA EdgeGA
160 0.8

Acceptance ratio
100 140

Average utility
Average cost

0.6
80 120

100 ArrivalRate=10
60 0.4 ArrivalRate=20
ArrivalRate=30
80
ArrivalRate=40
40
ArrivalRate=50
60 ArrivalRate=60
0.2
ArrivalRate=80
20
40
20 40 60 80 100 120 20 40 60 80 100 120 0 20 40 60 80 100
Task arrival rate Task arrival rate Availability (%)

(a) Average offloading cost (b) Average utility (c) A/R towards PV availability
Fig. 5: Performance evaluation between compared algorithms

in sequential GA operation. This superior execution-time performance is attained by deploying the parallel scheme for the GA algorithm proposed in Fig. 2. Such an execution time is very competitive, and it largely lifts the curse of the potentially high computation time of running GA algorithms.

ACKNOWLEDGEMENT

This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) Engage grant (EGP 543449-19).

VII. CONCLUSION

In this paper, we have studied a collaborative framework in which PVs are considered a potential extension of the existing cloud-edge computing infrastructure for handling online container-based task offloading during peak hours. We have deployed Kubernetes, a container orchestrator, at edge servers acting as master nodes, while the remote cloud, the edge itself, and PVs serve as worker nodes. Extensive experiments demonstrated that our proposed collaborative paradigm not only extends the computation resources of the existing network infrastructure by efficiently exploiting the powerful on-board computers of PVs, but also provides a flexible, agile, and reliable framework for task offloading. In future work, we will consider more sophisticated algorithms (e.g., machine learning techniques) for the task offloading problem.

REFERENCES

[1] X. Huang, R. Yu, J. Liu, and L. Shu, "Parked vehicle edge computing: Exploiting opportunistic resources for distributed mobile applications," IEEE Access, vol. 6, pp. 66649–66663, 2018.
[2] F. H. Rahman, A. Yura Muhammad Iqbal, S. H. S. Newaz, A. Thien Wan, and M. S. Ahsan, "Street parked vehicles based vehicular fog computing: TCP throughput evaluation and future research direction," in 2019 21st International Conference on Advanced Communication Technology (ICACT), 2019, pp. 26–31.
[3] D. Han, W. Chen, and Y. Fang, "A dynamic pricing strategy for vehicle assisted mobile edge computing systems," IEEE Wireless Communications Letters, vol. 8, no. 2, pp. 420–423, 2019.
[4] O. Fadahunsi and M. Maheswaran, "Locality sensitive request distribution for fog and cloud servers," Service Oriented Computing and Applications, vol. 13, no. 2, pp. 127–140, Jun 2019. [Online]. Available: https://doi.org/10.1007/s11761-019-00260-2
[5] S. Arif, S. Olariu, J. Wang, G. Yan, W. Yang, and I. Khalil, "Datacenter at the airport: Reasoning about time-dependent parking lot occupancy," IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 11, pp. 2067–2080, 2012.
[6] W. He, G. Yan, and L. D. Xu, "Developing vehicular data cloud services in the IoT environment," IEEE Transactions on Industrial Informatics, vol. 10, no. 2, pp. 1587–1595, 2014.
[7] F. Dressler, P. Handle, and C. Sommer, "Towards a vehicular cloud - using parked vehicles as a temporary network and storage infrastructure," in Proceedings of the 2014 ACM International Workshop on Wireless and Mobile Technologies for Smart Cities, ser. WiMobCity '14. New York, NY, USA: Association for Computing Machinery, 2014, pp. 11–18. [Online]. Available: https://doi.org/10.1145/2633661.2633671
[8] E. Al-Rashed, M. Al-Rousan, and N. Al-Ibrahim, "Performance evaluation of wide-spread assignment schemes in a vehicular cloud," Vehicular Communications, vol. 9, pp. 144–153, 2017. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S2214209616301863
[9] T. Kim, H. Min, and J. Jung, "Vehicular datacenter modeling for cloud computing: Considering capacity and leave rate of vehicles," Future Generation Computer Systems, vol. 88, pp. 363–372, 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167739X18300487
[10] C. Li, S. Wang, X. Huang, X. Li, R. Yu, and F. Zhao, "Parked vehicular computing for energy-efficient internet of vehicles: A contract theoretic approach," IEEE Internet of Things Journal, vol. 6, no. 4, pp. 6079–6088, 2019.
[11] S. Raza, W. Liu, M. Ahmed, M. R. Anwar, M. A. Mirza, Q. Sun, and S. Wang, "An efficient task offloading scheme in vehicular edge computing," Journal of Cloud Computing, vol. 9, no. 1, p. 28, Jun 2020. [Online]. Available: https://doi.org/10.1186/s13677-020-00175-w
[12] Y. Cao, Y. Teng, F. R. Yu, V. C. M. Leung, Z. Song, and M. Song, "Delay sensitive large-scale parked vehicular computing via software defined blockchain," in 2020 IEEE Wireless Communications and Networking Conference (WCNC), May 2020, pp. 1–6.
[13] X. Hou, Y. Li, M. Chen, D. Wu, D. Jin, and S. Chen, "Vehicular fog computing: A viewpoint of vehicles as the infrastructures," IEEE Transactions on Vehicular Technology, vol. 65, no. 6, pp. 3860–3873, 2016.
[14] X. Wang, Z. Ning, and L. Wang, "Offloading in internet of vehicles: A fog-enabled real-time traffic management system," IEEE Transactions on Industrial Informatics, vol. 14, no. 10, pp. 4568–4578, 2018.
[15] X. Huang, P. Li, and R. Yu, "Social welfare maximization in container-based task scheduling for parked vehicle edge computing," IEEE Communications Letters, vol. 23, no. 8, pp. 1347–1351, 2019.
[16] M. Mitchell, An Introduction to Genetic Algorithms. Cambridge, MA, USA: MIT Press, 1998.
[17] H. Zhu and C. Huang, "VNF-B&B: Enabling edge-based NFV with CPE resource sharing," in 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC), 2017, pp. 1–5.
