
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TII.2019.2957129, IEEE Transactions on Industrial Informatics

Latency-driven Parallel Task Data Offloading in Fog Computing Networks for Industrial Applications

Mithun Mukherjee, Member, IEEE, Suman Kumar, Constandinos X. Mavromoustakis, Senior Member, IEEE, George Mastorakis, Member, IEEE, Rakesh Matam, Member, IEEE, Vikas Kumar, Member, IEEE, and Qi Zhang, Member, IEEE

Abstract—Fog computing leverages the computational resources at the network edge to meet the increasing demand for latency-sensitive applications in large-scale industries. In this paper, we study computation offloading in a fog computing network where the end-users, most of the time, offload part of their tasks to a fog node. Nevertheless, limited by its computational and storage resources, the fog node further simultaneously offloads the task data to neighboring fog nodes and/or the remote cloud server to obtain additional computing resources. Meanwhile, however, the tasks offloaded from neighboring nodes add to the burden of the fog node. Moreover, task offloading to the remote cloud server can suffer from limited communication resources. Thus, to jointly optimize the amount of tasks offloaded to the neighboring fog nodes and the communication resource allocation for the tasks offloaded to the remote cloud, we formulate a latency-driven task data offloading problem considering the transmission delay from fog to cloud and the service rate, which includes the local processing time and waiting time at each fog node. The optimization problem is formulated as a Quadratically Constrained Quadratic Program (QCQP). We solve the problem by semidefinite relaxation. The simulation results demonstrate that the proposed strategy is effective and scalable under various simulation settings.

This work was supported by Guangdong science and technology innovation strategy Grant No. 2018KJ011. (Corresponding author: Mithun Mukherjee.)
M. Mukherjee is with the Guangdong Provincial Key Laboratory of Petrochemical Equipment Fault Diagnosis, Guangdong University of Petrochemical Technology, Maoming 525000, China, e-mail: [email protected].
S. Kumar is with the Department of Mathematics, IGNTU, Amarkantak 484886, India, e-mail: [email protected].
C. X. Mavromoustakis is with the Mobile Systems Laboratory (MoSys Lab), Department of Computer Science, University of Nicosia, 1700 Nicosia, Cyprus, e-mail: [email protected].
G. Mastorakis is with the Department of Management Science and Technology, Hellenic Mediterranean University, 72100 Crete, Greece, e-mail: [email protected].
R. Matam is with the Department of Computer Science and Engineering, Indian Institute of Information Technology Guwahati, Guwahati 781015, India, e-mail: [email protected].
V. Kumar was with the Electrical Engineering Department, Indian Institute of Technology Patna, Patna 801103, India; he is now with Bharat Sanchar Nigam Limited, Patna 800001, Bihar, India, e-mail: [email protected].
Q. Zhang is with DIGIT, Department of Engineering, Aarhus University, 8000 Aarhus, Denmark, e-mail: [email protected].

I. INTRODUCTION

With an ever-increasing number of Internet of Things (IoT) devices, managing the data generated by them is a real challenge. For example, massive IoT devices in smart factories continuously generate sensor data that needs to be transmitted, stored, and processed for effective monitoring and control. Data centers offered by the cloud address this problem to a significant extent. Nonetheless, several drawbacks, such as latency, network congestion, and communication costs, arise with the physical distance between the data sources and the remote cloud. To address these issues, the fog computing paradigm [1] extends the facilities offered by the cloud to the edge of the network. That is, fog computing brings part of the cloud functionality to the edge of the network, and thereby supports geographically distributed, latency-sensitive, and Quality-of-Service (QoS)-demanding IoT applications [1]–[3]. Fog computing acts as an intermediate layer of storage and computing facility between the IoT devices and the cloud and, in turn, reduces the need to access the cloud frequently. By doing so, fog computing significantly lowers the end-to-end delay, communication cost, and congestion, and thereby improves the overall performance of the IoT system. The fog computing layer is usually comprised of network devices, such as edge routers, gateways, and access points, which run on different software; it is therefore very challenging to design protocols that enable different fog nodes to collaborate with each other [4]–[6].

A. Motivation

In a typical fog computing system, the resource-constrained end-users can offload the data of computation tasks to some fog nodes in the vicinity. However, due to the computational and storage resource constraints within a fog node, the fog node often seeks resources from the cloud data center (vertical collaboration) and/or the other fog nodes (horizontal collaboration). Extra latency and energy consumption could be introduced in both offloading cases. To elaborate, for vertical collaboration, where the fog node tries to offload the task data to the remote cloud, the limited capacity of the uplink results in a further delay for completing the task. Similarly, the lack of sufficient computation and storage resources in a fog node becomes an issue in horizontal collaboration with neighboring fog nodes, although the transmission latency is lower compared to the fog-to-cloud scenario. Therefore, the critical yet unsolved challenge is to select the offloading location, i.e., the neighboring fog node or the remote cloud, and split the task while guaranteeing each end-user's delay deadline under varying network traffic.

B. Related Work

Recently, several works have focused on computation offloading in fog-edge-cloud computing scenarios [7]–[9] and mobile edge computing [10], [11]. In the literature, the authors mainly

1551-3203 (c) 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.

considered joint optimization of the decision variables and the computational and communication resource allocation [12]. While offloading, dual connectivity was assumed for the end-users in [9]. More specifically, they assume that one connection is linked with the fog node and the other with the higher computing resource units in the base station. For example, under a different nomenclature, called computing access points, Chen et al. suggested an optimal solution for where to process the tasks (either in the fog computing node or in the remote cloud server) with a system model consisting of i) a single user with one task in [13], ii) a single user with multiple tasks in [14], and iii) multiple users with multiple tasks in [5]. Nevertheless, these works [5], [13], [14] mainly considered a single fog node. If the fog node cannot complete the tasks under the delay and energy constraints, then it simply offloads the tasks to the remote cloud. A scenario with multiple fog nodes and the cloud server is considered in [15] to achieve minimal delay.

In a fog computing offloading scenario [16], [17], the ideal solution would be for a single fog node to compute the entire offloaded task data. However, the fog node may need horizontal collaboration, vertical collaboration, or both. There exist a few works that investigate horizontal collaboration with neighbouring fog nodes [18]–[20] and vertical collaboration with the remote cloud server [5], [13], [14]. Recently, the authors in [4] discussed a scenario where both horizontal and vertical collaboration are considered for computational resource allocation to minimize the time to complete the task processing. However, in their analysis they did not pay attention to the waiting delay caused by the queue in the fog node. Moreover, the multi-user setting and the different delay requirement of each user play an important role in average delay minimisation and in computational and communication resource allocation. The transmission delay between fog nodes was considered in [18], [19]; however, the multi-user case was not explicitly discussed. In addition, the multi-user case was studied in [6], [21] to optimize the decision on computation offloading. Nevertheless, it is important to take into account the queuing delay [22] for the task processing in a fog node that receives tasks from both the end-users and other fog nodes. However, none of the above works [4], [6], [19] considered the queuing delay at the fog node, particularly in horizontal fog node collaboration.

C. Main Contributions

In this paper, we aim to reduce the overall latency of each task in a scenario in which each end-user can offload tasks to a nearby fog node. More specifically, the major contributions of this paper are highlighted as follows.
• We consider the transmission delay from the end-users to the fog node and from the fog node to the remote cloud. In addition to the local computation time, we take into account the waiting time due to queuing in the fog node. This leads to a challenging issue when a fog node receives task data directly from the end-users in proximity as well as from neighboring fog nodes. Moreover, we impose a different deadline on each end-user's task completion time.
• We formulate the above optimisation problem as a Quadratically Constrained Quadratic Programming (QCQP) problem. Afterward, we apply semi-definite relaxation (SDR) to the optimization problem and solve the separable semidefinite programming (SDP) problem in polynomial time. Finally, we show that the proposed solution can achieve the latency deadlines compared to other stand-alone schemes.

The rest of the paper is organized as follows. The system model is presented in Section II. The delay model is discussed in Section III. The optimization problem is formulated and SDR is applied to it in Section IV. Simulation results are provided in Section V. Finally, conclusions are drawn in Section VI.

II. SYSTEM MODEL

Consider a fog computing scenario with one cloud server, N fog nodes, and K end-users, as illustrated in Fig. 1. The sets of end-users and fog nodes are denoted as K = {1, 2, . . . , K} and N = {1, 2, . . . , N}, respectively. We assume that the end-users and fog nodes are uniformly and randomly distributed over the network.

Fig. 1. System model: end-users upload task data to fog nodes, which offload tasks to neighboring fog nodes and/or the cloud.

We consider an application scenario in a large-scale industry where industrial robots acquire and process data (e.g., state information) to assist delay-sensitive automation applications. In our system model, we consider a finite application set (such as factory automation, manufacturing and production processes, and fault detection) denoted as A = {1, 2, . . . , A}. Note, however, that each type of application can require a different number of CPU cycles to process the task data. It is further assumed that an end-user can initialise only one task at a time. The main notations are summarized in Table I.

A. Task at the End-user

We denote ∆t (in s) as the length of each time slot in our time-slotted system. It is further assumed that the task arrival rate at the end-user, i.e., λ_k, follows a Poisson process. Task data that arrives at the beginning of time slot t can be processed during the time interval [t, t + 1). We drop t in the rest of the paper for the sake of simplicity.
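As a concrete illustration of the deployment in Section II, the Python sketch below draws one random topology. The values of N and K, the square deployment area, and the nearest-node association rule are assumptions for illustration only (the paper specifies only that nodes are uniformly and randomly distributed); the point is that each end-user gets exactly one primary fog node, so the per-node user sets M_i are disjoint and cover all K users.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

N, K = 4, 20        # number of fog nodes and end-users (assumed example values)
area = 100.0        # side of the square deployment region (assumed)

# End-users and fog nodes are uniformly and randomly distributed over the network.
fog_xy = rng.uniform(0.0, area, size=(N, 2))
user_xy = rng.uniform(0.0, area, size=(K, 2))

# Each end-user offloads to exactly one (primary) fog node. Nearest-node
# association is an assumption here; any single-node assignment keeps the
# sets M_i disjoint.
dist = np.linalg.norm(user_xy[:, None, :] - fog_xy[None, :, :], axis=2)  # K x N
primary = dist.argmin(axis=1)

# M_i: indices of end-users under the ith fog node, so that sum_i |M_i| = K.
M = [np.flatnonzero(primary == i) for i in range(N)]
```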


TABLE I
SUMMARY OF NOTATIONS

N: The number of fog nodes
K: The number of end-users
M_i: The number of end-users under the ith fog node
J_i: The number of neighboring fog nodes that may select the ith fog node as their secondary fog node
D: The data size of a task
L_{k,a}: The processing density for task a of the kth end-user
f_k: The computation capability (CPU cycles per second) of the kth end-user
f_i: The computation capability (CPU cycles per second) of the ith fog node
f_c: The computation capability (CPU cycles per second) of the cloud
λ_k: The task arrival rate at the kth end-user
λ^{CPU}_k: The task arrival rate at the local CPU of the kth end-user
λ^{OL}_k: The task arrival rate at the kth end-user's offloading queue
λ_{fog,i}: The task arrival rate at the ith fog node
λ^{CPU}_{fog,i}: The task arrival rate at the local CPU of the ith fog node
λ^{OL}_{fog,i}: The task arrival rate at the ith fog node's offloading queue
µ_k: The service rate at the kth end-user's CPU
µ_{fog,i}: The service rate at the ith fog node's CPU
r_{k,i}: The uploading rate from the kth end-user to the ith fog node
r_{i,c}: The uploading rate from the ith fog node to the cloud

Fig. 2. Illustration of the task data distribution at the ith fog node: tasks arriving directly from its end-users (Σ_{k=1}^{M_i} λ^{OL}_k [tasks/s]) and from neighboring fog nodes (Σ_{j=1}^{J_i} β_{j,i} λ^{OL}_{fog,j} [tasks/s]) are split between the local CPU (λ^{CPU}_{fog,i}) and the offloading queue (λ^{OL}_{fog,i}), which offloads tasks to the cloud (with probability γ_{i,c}) or to a neighbouring fog node (with probability β_{i,j}).

At first, the end-user prefers to compute the task data utilizing its own resources. However, when the total required number of computation cycles is high, the end-user may not be able to compute the task within the specified deadline due to resource limitations (such as CPU cycles and energy consumption¹). Thus, the end-users upload a fraction of the total tasks to the nearby fog node, termed the primary fog node. Assume that the end-user's task scheduler maintains two disjoint task queues: a) one for local computation and b) the other for task offloading. We further assume that the end-user computes and offloads tasks simultaneously.

Let α_{k,i} be the task offloading probability for the kth end-user to the ith primary fog node. Therefore, the task arrival rate at the kth end-user's offloading queue is λ^{OL}_k = α_{k,i} λ_k. Subsequently, the remaining tasks are locally executed at the end-user side. The arrival rate of the tasks at the kth end-user's queue for local processing is λ^{CPU}_k = (1 − α_{k,i}) λ_k.

¹While novel, energy consumption is a part of future research.

B. Task at Fog Node

Denote M_i = {1, 2, . . . , M_i}, with Σ_{i=1}^{N} M_i = K, as the set of end-users that can offload their computation to the ith fog node. It is further assumed that an end-user can offload its task to only one fog node; thus M_i ∩ M_{i′} ≡ Ø for i ≠ i′. The computing and storage resources are higher in a fog node than at an end-user; however, it is often observed that the entire set of tasks offloaded from all the end-users cannot be computed within their deadlines. Thus, the fog node seeks resources from neighboring fog nodes within its proximity. In several cases, the fog node also offloads the task data to the remote cloud server when the primary fog node estimates that the available computational resources of the fog nodes (including the primary and secondary fog nodes) are insufficient for the computation of the task. Fig. 2 illustrates the task execution (offloading and local processing) at the ith primary fog node. Basically, λ_{fog,i} = λ^{CPU}_{fog,i} + λ^{OL}_{fog,i}.

Denote β_{i,j} as the offloading probability from the ith fog node to the jth fog node. Therefore, the task arrival rate from the ith fog node to the jth fog node is expressed as λ^{OL}_{fog,i,j} = β_{i,j} λ^{OL}_{fog,i}. Again, the task arrival rate from the ith fog node to the cloud becomes λ^{OL}_{fog,i,c} = γ_{i,c} λ^{OL}_{fog,i}, where γ_{i,c} is the task offloading probability from the ith fog node to the cloud. Thus, we obtain λ^{OL}_{fog,i} = Σ_{j=1}^{N} λ^{OL}_{fog,i,j} + λ^{OL}_{fog,i,c}, with j ≠ i and j ∉ J_i, where J_i = {1, 2, . . . , J_i} ⊆ N is the set of neighboring fog nodes. Thus, the task arrival rate at the ith fog node is

    λ_{fog,i} = Σ_{k=1}^{M_i} λ^{OL}_k + Σ_{j=1}^{J_i} β_{j,i} λ^{OL}_{fog,j}.    (1)

To avoid a significant task offloading delay (mainly the transmission delay to offload the tasks) and task data flooding over the entire network, we assume that, during this offloading process, a fog node is not eligible to further offload tasks received from its neighboring fog nodes.

III. DELAY MODEL: AVERAGE RESPONSE AND TRANSMISSION DELAY

In this work, we mainly take into account a) the average response delay, including the local task runtime and queueing delay, and b) the transmission time to offload. We will investigate task prefetching, resource allocation delay, and erroneous attempts in the delay model as an interesting future direction.

A. Average Response Delay

The processing density of a task and the CPU clock speed mainly affect the service time (often called the local task execution delay). Therefore, the service time at the local processing queue of the kth end-user is given by

    T^{CPU}_k = L_{k,a} D / f_k.    (2)

We assume an M/M/1 queue model at the end-user's local processing queue with a mean task arrival rate λ_k. Thus, the
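The arrival-rate bookkeeping of Section II can be checked with a short numerical sketch. In the Python fragment below, every numeric value (arrival rates, offloading probabilities β_{j,i}, and the user-to-fog assignment) is an assumed example; the code only verifies that the split λ^{OL}_k = α_{k,i} λ_k, λ^{CPU}_k = (1 − α_{k,i}) λ_k conserves traffic, and evaluates the fog-node arrival rate of Eq. (1).

```python
import numpy as np

rng = np.random.default_rng(seed=7)

K, N = 6, 3                                  # end-users and fog nodes (example values)
lam = rng.uniform(1.0, 3.0, size=K)          # λ_k: task arrival rates [tasks/s]
alpha = rng.uniform(0.2, 0.8, size=K)        # α_{k,i}: offloading probabilities
primary = rng.integers(0, N, size=K)         # assumed primary fog node of each user

lam_ol = alpha * lam                         # λ^OL_k  = α_{k,i} λ_k   (offloading queue)
lam_cpu = (1.0 - alpha) * lam                # λ^CPU_k = (1 − α_{k,i}) λ_k (local queue)

beta = np.array([[0.0, 0.3, 0.2],            # β_{j,i}: fog j -> fog i offloading
                 [0.1, 0.0, 0.4],            # probabilities (assumed example values)
                 [0.2, 0.1, 0.0]])
lam_fog_ol = rng.uniform(0.5, 1.5, size=N)   # λ^OL_{fog,j} (assumed example values)

# Eq. (1): λ_{fog,i} = Σ_{k ∈ M_i} λ^OL_k + Σ_j β_{j,i} λ^OL_{fog,j}
lam_fog = np.array([lam_ol[primary == i].sum() + beta[:, i] @ lam_fog_ol
                    for i in range(N)])
```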


average response time for the tasks computed at the kth end-user is given by

    T^{Local}_k = 1 / (µ_k − λ^{CPU}_k),    (3)

where µ_k = 1/T^{CPU}_k.

As discussed earlier, the ith fog node receives tasks from i) its own end-users and ii) neighboring fog nodes. Thus, the average number of CPU cycles to compute all the tasks at the fog node side is given by

    L_{fog,i} = [ Σ_{k=1}^{M_i} λ^{OL}_k L_{k,a} + Σ_{j=1}^{J_i} Σ_{k′=1, k′∈K\M_i}^{|M_j|} β_{j,i} λ^{OL}_{k′} L_{k′,a} ] / λ_{fog,i}.    (4)

The service time at the ith fog node is expressed as T^{CPU}_{fog,i} = L_{fog,i} D / f_i. Thus, the average response time for the tasks computed at the ith fog node is given by

    T^{Local}_{fog,i} = 1 / (µ_{fog,i} − λ^{CPU}_{fog,i}),    (5)

where µ_{fog,i} = 1/T^{CPU}_{fog,i}.

B. Transmission Delay

In our work, we take the assumption that the output data size of a task is quite small in comparison with the task input [23]. Thus, the feedback time can be ignored compared to the task processing time, which includes the queueing time and computation time. If the downloading of task outputs is considered, we can simply incorporate the downloading time while calculating the end-to-end delay.

Thus, the transmission delay to upload the tasks from the kth end-user to the ith fog node is given by

    T^{OL}_{k,i} = λ^{OL}_k D / r_{k,i}.    (6)

In the same manner, the transmission time to offload the tasks from the ith fog node to the remote cloud is expressed as

    T^{OL}_{fog,i,c} = γ_{i,c} λ^{OL}_{fog,i} D / r_{i,c}.    (7)

In a large-scale industry, wired connections, such as IEEE 802.3/Ethernet, are widely used. For this reason, we assume that the fog nodes are connected with each other via wired links [24]. Hence, we ignore the transmission time to offload tasks between fog nodes. Note that we can easily extend the network model to a scenario that considers wireless connectivity, however, at the expense of transmission delay.

C. Total Delay

As the end-user computes and uploads the task data at the same time, the overall task latency is the larger value between the local task runtime and the offloading time, which includes the uploading time plus the task data processing time. Furthermore, a fog node simultaneously performs: a) local execution of the task, b) offloading of tasks to the neighboring fog nodes, and c) offloading of the computation to the remote cloud, if necessary. So, the offloading time for the tasks at the fog node is mainly dominated by the maximum of T^{Local}_{fog,i} and T^{OL}_{fog,i,c}. Finally, we calculate the total delay as

    T^{Total}_k = max( T^{Local}_k, T^{OL}_{k,i} + max( T^{Local}_{fog,i}, T^{OL}_{fog,i,c} ) ).    (8)

IV. PROBLEM FORMULATION

We aim to find the optimal place to offload and compute the task data to meet the user-specific deadline. However, a fog node receives task data from multiple end-users and even from its neighbouring fog nodes. Thus, the amount of task data to be processed locally becomes a significant factor due to the waiting delay at the computational-capacity-limited fog node. At the same time, the allocation of the transmission rate between a fog node and the cloud is another factor to be considered. Thus, we formulate the computation offloading and uploading rate allocation from i) the end-users to the fog node and ii) the fog node to the remote cloud, aiming to minimize the task completion time for each end-user considering the waiting delay at the fog node.

As we ignore the energy consumption issue in the computation offloading, the end-user can compute tasks until the delay deadline. Thus, we relax T^{Local}_k in (8). Note that when the input data size is smaller than the amount of task data that can be processed within the tolerable delay, computation offloading is not necessary. Again, this assumption is not valid when we consider energy consumption. In this work, our objective is to minimize T^{0}_k = T^{OL}_{k,i} + max( T^{Local}_{fog,i}, T^{OL}_{fog,i,c} ); accordingly, our optimization problem is expressed as:

    min  T^{0}_k  ∀ k    (9a)
    s.t. α_{k,i}, β_{i,j}, γ_{i,c} ∈ [0, 1],    (9b)
         Σ_{j=1}^{N} β_{i,j} + γ_{i,c} = 1,    (9c)
         λ^{CPU}_{fog,i} < µ_{fog,i},    (9d)
         r_{k,i} > 0,    (9e)
         Σ_{k=1}^{M_i} r_{k,i} ≤ r_i,    (9f)
         r_{i,c} > 0,    (9g)
         Σ_{i=1}^{N} r_{i,c} ≤ r_c,    (9h)
         Σ_{i=1}^{N} M_i = K,    (9i)

where the constraint (9b) represents the computation offloading decision parameters; the constraint (9c) denotes that the total task data must be processed either locally or offloaded to other computing devices; and the constraint (9d) ensures that the task arrival rate for local task execution in the fog node must be lower than the service rate of the fog node. Further, (9e) is the non-negativity constraint on the uploading rate between the kth end-user and the ith fog node. The constraint (9f) indicates that the uploading rate between the kth end-user and
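To make the delay model concrete, the sketch below evaluates Eqs. (2), (3), (6), and (8) for a single end-user/fog pair using the CPU parameters quoted later in Section V. The arrival rates, the uploading rate, and the fog-side delays are assumed example values, and the 256-bit task size is converted to bytes because the processing density is given in cycles per byte.

```python
# Parameters from Section V; D is converted from 256 bits to bytes because the
# processing density L is given in cycles/byte.
D = 256 / 8            # task data size [bytes]
L = 1900.0             # processing density [cycles/byte]
f_k = 4e6              # end-user CPU speed [cycles/s]

t_cpu = L * D / f_k                     # Eq. (2): service time at the end-user
mu_k = 1.0 / t_cpu                      # service rate µ_k [tasks/s]
lam_cpu_k = 0.5 * mu_k                  # λ^CPU_k, assumed 50% utilization
t_local_k = 1.0 / (mu_k - lam_cpu_k)    # Eq. (3): M/M/1 average response time

lam_ol_k, r_ki = 10.0, 1e5              # offloaded arrival rate and uploading
                                        # rate (assumed example values)
t_ol_ki = lam_ol_k * D / r_ki           # Eq. (6): end-user -> fog upload delay

t_local_fog = 0.004                     # fog response time (assumed) [s]
t_ol_fog_c = 0.006                      # fog -> cloud offload time (assumed) [s]

# Eq. (8): the end-user computes and uploads in parallel, so the total delay
# is the larger of the local path and the offloading path.
t_total = max(t_local_k, t_ol_ki + max(t_local_fog, t_ol_fog_c))
```

With these example numbers the local path dominates, which illustrates why offloading decisions hinge on the fog node's queueing state rather than on the upload time alone.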


the ith fog node cannot exceed the total uploading rate between the M_i end-users and the ith fog node, i.e., r_i. Besides, (9g) is the non-negativity constraint on the transmission rate between the ith fog node and the cloud. It is reasonable to impose r_c as the maximum total uploading rate between the fog nodes and the cloud in (9h). Finally, the constraint (9i) comes from the total number of end-users.

Let ζ = max( T^{Local}_{fog,i}, T^{OL}_{fog,i,c} ), such that T^{Local}_{fog,i} ≤ ζ and T^{OL}_{fog,i,c} ≤ ζ. From (7), we write

    γ_{i,c} λ^{OL}_{fog,i} D ≤ ζ r_{i,c}.    (10)

Afterward, using (5), the objective function in (9a) becomes

    min ( α_{k,i} λ_k D / r_{k,i} + ζ ) = min ( (α_{k,i} λ_k D + ζ r_{k,i}) / r_{k,i} ).    (11)

We further replace the denominator by its supremum, r_i, which will give a fast minimum. Then, the above objective function in (11) can be written as

    min ( α_{k,i} λ_k D + ζ r_{k,i} ).    (12)

We assume that the fog nodes are arranged in descending order based on the tasks offloaded from their respective end-users. Thus, for i = 1, i.e., the first fog node, no offloaded data is received from any other fog node. Therefore, (9c) becomes Σ_{j=i+1}^{N} β_{i,j} + γ_{i,c} = 1 and we have Σ_{j=1}^{i−1} β_{j,i} λ^{OL}_{fog,j} ≤ µ_{fog,i} for i > 1. From (1), we obtain λ_{fog,i} ≤ Σ_{k=1}^{M_i} λ^{OL}_k + µ_{fog,i}. Let C = Σ_{k=1}^{M_i} λ^{OL}_k + µ_{fog,i}, so that λ_{fog,i} ≤ C. Accordingly, (10) can be rewritten as

    Σ_{j=i+1}^{N} α_{k,i} β_{i,j} C D + α_{k,i} γ_{i,c} C D − ζ r_{i,c} ≤ 0.    (13)

Let w_{k,i,j} = [α_{k,i}, β_{i,j}, γ_{i,c}, r_{k,i}, r_{i,c}, ζ, λ_{fog,i}, M_i]^T be the 8 × 1 variable vector. Then, the matrix form of (13) becomes

    Σ_{j=i+1}^{N} w^T_{k,i,j} A_1 w_{k,i,j} + w^T_{k,i,j} (A_2 + A_3) w_{k,i,j} ≤ 0,    (14)

where A_1, A_2, and A_3 are symmetric 8 × 8 matrices: the only nonzero entries of A_1 are [A_1]_{1,2} = [A_1]_{2,1} = CD/2 (the α_{k,i} β_{i,j} coupling), those of A_2 are [A_2]_{1,3} = [A_2]_{3,1} = CD/2 (the α_{k,i} γ_{i,c} coupling), and those of A_3 are [A_3]_{5,6} = [A_3]_{6,5} = −1/2 (the −ζ r_{i,c} term). Similarly, the objective function in (12) becomes min b^T_k w_{k,i,j} + w^T_{k,i,j} A_4 w_{k,i,j}, where b_k = [λ_k D, 0_{1×7}]^T and the only nonzero entries of A_4 are [A_4]_{4,6} = [A_4]_{6,4} = 1/2 (the ζ r_{k,i} product).

Let e_q = [0_{1×(q−1)}, 1, 0_{1×(8−q)}]^T for 1 ≤ q ≤ 8. Therefore, the optimization problem is written as

    min_{w_{k,i,j}}  b^T_k w_{k,i,j} + w^T_{k,i,j} A_4 w_{k,i,j}    (15a)
    s.t.  0 ≤ e^T_u w_{k,i,j} ≤ 1  ∀ u ∈ {1, 2, 3},    (15b)
          Σ_{j=i+1}^{N} e^T_2 w_{k,i,j} + e^T_3 w_{k,i,j} = 1,    (15c)
          e^T_7 w_{k,i,j} ≤ C,    (15d)
          e^T_4 w_{k,i,j} > 0,    (15e)
          Σ_{k=1}^{M_i} e^T_4 w_{k,i,j} ≤ r_i,    (15f)
          e^T_5 w_{k,i,j} > 0,    (15g)
          Σ_{i=1}^{N} e^T_5 w_{k,i,j} ≤ r_c,    (15h)
          Σ_{i=1}^{N} e^T_8 w_{k,i,j} = K,    (15i)
          and (14).    (15j)

Now, to transform the above optimization problem into a homogeneous separable QCQP form, we let Λ_{k,i,j} = [w^T_{k,i,j}, 1]^T. Thus, the above optimization problem becomes

    min_{Λ_{k,i,j}}  Λ^T_{k,i,j} Q_k Λ_{k,i,j}    (16a)
    s.t.  0 ≤ Λ^T_{k,i,j} Q_u Λ_{k,i,j} ≤ 1  ∀ u ∈ {1, 2, 3},    (16b)
          Σ_{j=i+1}^{N} Λ^T_{k,i,j} Q_2 Λ_{k,i,j} + Λ^T_{k,i,j} Q_3 Λ_{k,i,j} = 1,    (16c)
          Λ^T_{k,i,j} Q_λ Λ_{k,i,j} ≤ C,    (16d)
          Λ^T_{k,i,j} Q_r Λ_{k,i,j} > 0,    (16e)
          Σ_{k=1}^{M_i} Λ^T_{k,i,j} Q_r Λ_{k,i,j} ≤ r_i,    (16f)
          Λ^T_{k,i,j} Q_c Λ_{k,i,j} > 0,    (16g)
          Σ_{i=1}^{N} Λ^T_{k,i,j} Q_c Λ_{k,i,j} ≤ r_c,    (16h)
          Σ_{i=1}^{N} Λ^T_{k,i,j} Q_m Λ_{k,i,j} = K,    (16i)
          Σ_{j=i+1}^{N} Λ^T_{k,i,j} Q_1 Λ_{k,i,j} + Λ^T_{k,i,j} Q_5 Λ_{k,i,j} ≤ 0,    (16j)

where

    Q_u = [ 0_{8×8}, e_u/2 ; e^T_u/2, 0 ]  ∀ u ∈ {1, 2, 3},    Q_k = [ A_4, b_k/2 ; b^T_k/2, 0 ],
    Q_r = [ 0_{8×8}, e_4/2 ; e^T_4/2, 0 ],    Q_c = [ 0_{8×8}, e_5/2 ; e^T_5/2, 0 ],
    Q_m = [ 0_{8×8}, e_8/2 ; e^T_8/2, 0 ],    Q_λ = [ 0_{8×8}, e_7/2 ; e^T_7/2, 0 ],
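The homogenization step Λ_{k,i,j} = [w^T_{k,i,j}, 1]^T can be sanity-checked numerically: with Q_k built from A_4 and b_k as above, the quadratic form Λ^T Q_k Λ must reproduce the inhomogeneous objective w^T A_4 w + b^T_k w of (15a). The numpy sketch below does this with an assumed example value for λ_k D and a random w; the nonzero pattern of A_4 follows the ζ r_{k,i} term of (12).

```python
import numpy as np

rng = np.random.default_rng(seed=3)

n = 8                    # w_{k,i,j} is 8 x 1; Λ_{k,i,j} = [w; 1] is 9 x 1
lam_k_D = 1.5            # λ_k D, assumed example value

# A4 carries the ζ·r_{k,i} product of (12): entries (4,6)/(6,4), 1-based.
A4 = np.zeros((n, n))
A4[3, 5] = A4[5, 3] = 0.5

b = np.zeros(n)
b[0] = lam_k_D           # b_k = [λ_k D, 0_{1x7}]^T

# Homogenized matrix Q_k = [[A4, b/2], [b^T/2, 0]]
Qk = np.zeros((n + 1, n + 1))
Qk[:n, :n] = A4
Qk[:n, n] = Qk[n, :n] = b / 2.0

w = rng.uniform(size=n)
Lam = np.append(w, 1.0)

lhs = Lam @ Qk @ Lam      # homogeneous form Λ^T Q_k Λ
rhs = w @ A4 @ w + b @ w  # inhomogeneous objective of (15a)
```

The same pattern (embedding a linear term in the border of the lifted matrix) applies to every Q defined above, which is what makes the subsequent rank relaxation well defined.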


    Q_1 = [ A_1, 0_{8×1} ; 0_{1×8}, 0 ],    Q_2 = [ 0_{8×8}, e_2/2 ; e^T_2/2, 0 ],
    Q_3 = [ 0_{8×8}, e_3/2 ; e^T_3/2, 0 ],    and    Q_5 = [ A_2 + A_3, 0_{8×1} ; 0_{1×8}, 0 ].

However, the transformed optimization problem is an NP-hard non-convex QCQP problem. Thus, we apply SDR to solve it by a convex programming method. Define Y_{k,i,j} ≡ Λ_{k,i,j} Λ^T_{k,i,j}. We then drop the constraint rank(Y_{k,i,j}) = 1. Therefore, we obtain the following SDP problem:

    min_{Y_{k,i,j}}  Tr(Q_k Y_{k,i,j})    (17a)
    s.t.  0 ≤ Tr(Q_u Y_{k,i,j}) ≤ 1  ∀ u ∈ {1, 2, 3},    (17b)
          Σ_{j=i+1}^{N} Tr(Q_2 Y_{k,i,j}) + Tr(Q_3 Y_{k,i,j}) = 1,    (17c)
          Tr(Q_λ Y_{k,i,j}) ≤ C,    (17d)
          Tr(Q_r Y_{k,i,j}) > 0,    (17e)
          Σ_{k=1}^{M_i} Tr(Q_r Y_{k,i,j}) ≤ r_i,    (17f)
          Tr(Q_c Y_{k,i,j}) > 0,    (17g)
          Σ_{i=1}^{N} Tr(Q_c Y_{k,i,j}) ≤ r_c,    (17h)
          Σ_{i=1}^{N} Tr(Q_m Y_{k,i,j}) = K,    (17i)
          Σ_{j=i+1}^{N} Tr(Q_1 Y_{k,i,j}) + Tr(Q_5 Y_{k,i,j}) ≤ 0.    (17j)

We solve the above SDP problem in polynomial time using the standard SDP solver SeDuMi [25].

V. SIMULATION RESULTS

In this section, we evaluate the performance of the proposed optimal solution for computation offloading with Monte Carlo simulations. The results are averaged over 10,000 different runs. The extensive simulations are conducted in MATLAB. We set f_k = 4 × 10^6 cycles/s, f_i = 10 × 10^6 cycles/s, f_c = 100 × 10^6 cycles/s, task data size D = 256 bits, and L_a = 1900 cycles/byte.

Fig. 3(a) illustrates the average delay performance when the end-users offload their tasks to a standalone fog node. The standalone fog refers to a fog node that is not connected to the cloud or to neighboring fog nodes for further task offloading. From the figure, it can be seen that when the tasks are offloaded to the fog node, the delay is minimized compared to local processing at the end-user side. However, the delay

higher task arrival rate, the advantage of the computational resources in the cloud is observed, thereby reducing the delay compared to local processing at the end-user side and a standalone fog node. Interestingly, our proposed optimal policy significantly minimizes the delay by balancing the amount of data to be processed at the end-user side, fog node, and cloud.

In Fig. 3(c), we show the delay performance with the number of end-users. For simplicity, in the case of multiple fog nodes, we assume an equal number of end-users per fog node. It is observed from the figure that the average delay increases more sharply when the task arrival rate per user is higher. The main reason is that, due to the higher task arrival rate per user, the total number of tasks becomes larger; thus, owing to the computational resource limitations of the end-users and fog nodes, most of the time the task data are offloaded to the cloud for computation. As a result, the delay increases due to the transmission delay to upload these tasks to the cloud.

The impact of the uploading rate from the end-users to the primary fog node, i.e., r_i, is shown in Fig. 4(a). When the standalone fog node is considered, a low uploading rate has an adverse impact on the delay compared to the fog-cloud scenario. With a low uploading rate between the end-user and the fog node, although the offloading delay to the fog node contributes significantly to the total delay, our proposed offloading policy optimally decides the place where to process the tasks. Basically, when we consider fog collaboration as well as cloud connectivity, the proposed policy benefits from more computational resources compared to the case of the standalone fog node. In addition, we see that with a higher uploading rate, although the offloading time between the end-user and the fog node is negligible, the service rate of the fog nodes and the transmission delay between fog and cloud dominate the total delay. Therefore, a further increase of the uploading rate between the end-users and the fog node does not significantly influence the delay performance.

Fig. 4(b) depicts the impact of the uploading rate from the fog node to the cloud. Obviously, the fog-to-cloud uploading rate does not have any impact on the standalone fog node. It is observed from Fig. 4(b) that as the uploading rate from fog to cloud is increased, more task data can be offloaded to use the high computational resources of the cloud. As a result, the total delay is reduced. Note that when the number of fog nodes is increased, more end-users are connected. At the same time, the uploading rate is shared among the fog nodes; consequently, the offloading time from the fog nodes to the cloud increases, thereby increasing the total delay.

Finally, Fig. 4(c) illustrates the delay performance with different ratios of task processing density. In brief, we take two types of tasks that are uniformly distributed. As observed in Fig. 4(c), we see that the average delay increases when the ratio of task processing density becomes higher. However,
increases significantly with the increase of end-users with a our proposed offloading policy outperforms the case using
standalone fog node. The reason is that the computational standalone fog node even at higher task processing density.
resource of a standalone fog node is fully utilized at higher
number of end-users as well as task arrival rate.
VI. C ONCLUSION
From Fig. 3(b), it is observed that the direct offloading of
task data to the cloud server exhibits linear increasing of delay In this paper, we investigated the computation offloading in
with the increase of task arrival rate per user. However, at a fog network while minimizing the task completion time. We

1551-3203 (c) 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TII.2019.2957129, IEEE
Transactions on Industrial Informatics
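As a quick numerical illustration of the relaxation step behind (17): the lifting Y_{k,i,j} = Λ_{k,i,j} Λ_{k,i,j}^T turns every quadratic form Λ^T Q Λ into the linear expression Tr(Q Y), so once the rank-one constraint on Y is dropped, the objective and all constraints are linear in Y and the problem becomes an SDP. The pure-Python sketch below checks this identity on toy 2x2 data; Q and v are illustrative stand-ins, not the paper's actual Q matrices or offloading variables.

```python
# Toy check of the SDR lifting: for Y = v v^T, trace(Q Y) equals v^T Q v.
# Q and v below are arbitrary illustrative data, not the paper's matrices.

def outer(v):
    """Rank-one lift Y = v v^T."""
    return [[vi * vj for vj in v] for vi in v]

def trace_prod(Q, Y):
    """Tr(Q Y) for square matrices given as lists of rows."""
    n = len(Q)
    return sum(Q[i][j] * Y[j][i] for i in range(n) for j in range(n))

def quad_form(v, Q):
    """v^T Q v."""
    n = len(v)
    return sum(v[i] * Q[i][j] * v[j] for i in range(n) for j in range(n))

Q = [[2.0, 1.0], [1.0, 3.0]]   # symmetric cost matrix (illustrative)
v = [0.6, 0.4]                  # a candidate offloading-fraction vector

Y = outer(v)
print(trace_prod(Q, Y))   # matches quad_form(v, Q)
print(quad_form(v, Q))
```

Any rank-one solution of the relaxed SDP maps back to an offloading vector; when the relaxed optimum is not rank one, a randomization or projection step is typically needed to recover a feasible solution.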

[Figure 3: three panels plotting delay (in millisecond) vs. (a) task arrival rate λ_k (tasks per second), (b) task arrival rate (tasks per second), and (c) number of end-users per fog node; curves compare local processing at the end-user, offloading to fog (K = 1, 2, 3, 4), offloading to the cloud, the standalone fog node, and the fog-cloud optimal policy, with λ_k = 20 and 30 tasks/second in panel (c).]

Fig. 3. Average delay performance, r_i = 1 Mbps: (a) with different task arrival rates; (b) with different task arrival rates and r_c = 0.6 Mbps; and (c) with the number of fog nodes N = 3 and r_c = 0.6 Mbps.
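The relative ordering of the curves in Fig. 3(a) can be sanity-checked with a back-of-envelope model that counts only transmission and processing time, using the parameter values from the simulation setup. This sketch deliberately ignores queueing and waiting time, which is what drives the growth and crossovers at high task arrival rates in the figures; the formulas here are illustrative, not the paper's exact delay expressions.

```python
# Back-of-envelope per-task delay (transmission + processing only) using the
# simulation parameters from Section V. Queueing/waiting time, which drives
# the behavior at high arrival rates in Fig. 3, is deliberately omitted.

D_bits = 256                  # task data size [bit]
D_bytes = D_bits / 8
La = 1900                     # processing density [cycles/byte]
cycles = D_bytes * La         # total CPU cycles per task

f_local, f_fog, f_cloud = 4e6, 10e6, 100e6   # service rates [cycles/s]
r_i = 1e6                     # end-user -> fog uploading rate [bps]
r_c = 0.6e6                   # fog -> cloud uploading rate [bps]

delay_local = cycles / f_local
delay_fog = D_bits / r_i + cycles / f_fog
delay_cloud = D_bits / r_i + D_bits / r_c + cycles / f_cloud

for name, d in [("local", delay_local), ("fog", delay_fog), ("cloud", delay_cloud)]:
    print(f"{name:6s} {d * 1e3:6.2f} ms")
```

Per task, fog processing already beats local execution with these numbers (about 6.3 ms vs. 15.2 ms); the cloud looks cheapest for a single task in isolation, and it is the congestion of the shared uplinks at higher arrival rates, absent from this sketch, that makes direct cloud offloading scale poorly in Fig. 3(b).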

[Figure 4: three panels plotting delay (in millisecond) vs. (a) end-user to fog uploading rate r_i (in bps), (b) fog to cloud uploading rate r_c (in bps), and (c) task processing density ratio (high/low processing density); curves compare the standalone fog node with the fog-cloud optimal policy for N = 3, 5, 10, and 20 fog nodes.]

Fig. 4. Delay performance, number of users per fog node = 5: (a) task arrival rate λ_k = 50 tasks/second and r_c = 0.6 Mbps; (b) task arrival rate λ_k = 50 tasks/second and r_i = 4 Mbps; and (c) task arrival rate λ_k = 45 tasks/second, N = 2, r_i = 4 Mbps, r_c = 0.6 Mbps, and low task processing density = 1900 cycles/byte.
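The remark accompanying Fig. 4(b), that adding fog nodes dilutes the shared fog-to-cloud uploading rate and thus raises the upload delay, can be sketched as follows. Equal sharing of r_c across fog nodes is an assumption made here purely for illustration; in the paper the aggregate rate is governed by constraint (17h).

```python
# Illustrative effect of sharing the fog-to-cloud rate r_c among N fog nodes:
# under equal sharing (an assumption), each node's per-task upload time
# grows linearly with N.

D_bits = 256        # task data size [bit], as in the simulation setup
r_c = 0.6e6         # aggregate fog -> cloud uploading rate [bps]

def cloud_upload_time(n_fog_nodes):
    """Per-task fog->cloud upload time when r_c is split equally (assumption)."""
    rate = r_c / n_fog_nodes
    return D_bits / rate

for n in (1, 5, 10, 20):
    print(f"N={n:2d}: {cloud_upload_time(n) * 1e3:.2f} ms")
```

With r_c = 0.6 Mbps and D = 256 bit, the per-task upload time grows from roughly 0.43 ms for a single fog node to about 8.5 ms for N = 20, matching the trend described in the text.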

We have considered horizontal collaboration consisting of multiple fog nodes and vertical collaboration with the remote cloud for parallel task data offloading. Moreover, our scheme considered the transmission delay and the service time that consists of local processing time and waiting time. After a reformulation, we applied SDR to the optimization problem and solved the resulting SDP problem in polynomial time. The simulation results show the effectiveness of the proposed solution compared to standalone task processing under different parameter settings. Our future work includes the interaction among the end-users, fog, and cloud to maximize reliability under delay constraints.

REFERENCES

[1] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, Jan. 2012, pp. 13–16.
[2] M. Mukherjee, L. Shu, and D. Wang, "Survey of fog computing: Fundamental, network applications, and research challenges," IEEE Commun. Surv. Tut., vol. 20, no. 3, pp. 1826–1857, 3rd quarter 2018.
[3] M. Aazam and E. N. Huh, "Fog computing: The Cloud-IoT/IoE middleware paradigm," IEEE Potentials, vol. 35, no. 3, pp. 40–44, May 2016.
[4] Y. Wang, X. Tao, X. Zhang, P. Zhang, and Y. T. Hou, "Cooperative task offloading in three-tier mobile computing networks: An ADMM framework," IEEE Trans. on Vehi. Technol., vol. 68, no. 3, pp. 2763–2776, Mar. 2019.
[5] M.-H. Chen, B. Liang, and M. Dong, "Multi-user multi-task offloading and resource allocation in mobile cloud systems," IEEE Trans. on Wireless Commun., vol. 17, no. 10, pp. 6790–6805, Oct. 2018.
[6] M. Mukherjee, S. Kumar, M. Shojafar, Q. Zhang, and C. X. Mavromoustakis, "Joint task offloading and resource allocation for delay-sensitive fog networks," in Proc. IEEE ICC, May 2019, pp. 1–7.
[7] S. Kosta, A. Aucinas, P. Hui, R. Mortier, and X. Zhang, "ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading," in Proc. IEEE INFOCOM, Mar. 2012, pp. 945–953.
[8] S. W. Ko, K. Huang, S. L. Kim, and H. Chae, "Live prefetching for mobile computation offloading," IEEE Trans. Wireless Commun., vol. 16, no. 5, pp. 3057–3071, May 2017.
[9] Y. Wu, Y. He, L. P. Qian, J. Huang, and X. Shen, "Optimal resource allocations for mobile data offloading via dual-connectivity," IEEE Trans. on Mobile Comput., vol. 17, no. 10, pp. 2349–2365, Oct. 2018.
[10] L. Jiao, H. Yin, H. Huang, D. Guo, and Y. Lyu, "Computation offloading for multi-user mobile edge computing," in Proc. IEEE HPCC/SmartCity/DSS, June 2018, pp. 422–429.
[11] S. Yu, R. Langar, X. Fu, L. Wang, and Z. Han, "Computation offloading with data caching enhancement for mobile edge computing," IEEE Trans. on Vehi. Technol., vol. 67, no. 11, pp. 11098–11112, Nov. 2018.
[12] J. Du, L. Zhao, J. Feng, and X. Chu, "Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee," IEEE Trans. Commun., vol. 66, no. 4, pp. 1594–1608, Apr. 2018.
[13] M. Chen, M. Dong, and B. Liang, "Joint offloading decision and resource allocation for mobile cloud with computing access point," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Process. (ICASSP), Mar. 2016.
[14] M.-H. Chen, B. Liang, and M. Dong, "A semidefinite relaxation approach to mobile cloud offloading with computing access point," in


Proc. IEEE 16th Int. Workshop on Signal Process. Advances in Wireless Commun. (SPAWC), June 2015, pp. 1–5.
[15] Z. Liu, X. Yang, Y. Yang, K. Wang, and G. Mao, "DATS: Dispersive stable task scheduling in heterogeneous fog networks," IEEE Internet Things J., vol. 6, no. 2, pp. 3423–3436, Apr. 2019.
[16] J. Liu and Q. Zhang, "Offloading schemes in mobile edge computing for ultra-reliable low latency communications," IEEE Access, vol. 6, pp. 12825–12837, Feb. 2018.
[17] J. Liu and Q. Zhang, "Code-partitioning offloading schemes in mobile edge computing for augmented reality," IEEE Access, vol. 7, pp. 11222–11236, 2019.
[18] Y.-Y. Shih, W.-H. Chung, A.-C. Pang, T.-C. Chiu, and H.-Y. Wei, "Enabling low-latency applications in fog-radio access networks," IEEE Network, vol. 31, no. 1, pp. 52–58, Jan. 2017.
[19] M. Mukherjee, Y. Liu, J. Lloret, L. Guo, R. Matam, and M. Aazam, "Transmission and latency-aware load balancing for fog radio access networks," in Proc. IEEE GLOBECOM, Dec. 2018, pp. 1–6.
[20] Y. Xiao and M. Krunz, "QoE and power efficiency tradeoff for fog computing networks with fog node cooperation," in Proc. IEEE INFOCOM, May 2017, pp. 1–9.
[21] M. Mukherjee, S. Kumar, Q. Zhang, R. Matam, C. X. Mavromoustakis, Y. Lv, and G. Mastorakis, "Task data offloading and resource allocation in fog computing with multi-task delay guarantee," IEEE Access, vol. 7, pp. 152911–152918, Sept. 2019.
[22] C.-F. Liu, M. Bennis, M. Debbah, and H. V. Poor, "Dynamic task offloading and resource allocation for ultra-reliable low-latency edge computing," IEEE Trans. Commun., vol. 67, no. 6, pp. 4132–4150, June 2019.
[23] S.-W. Ko, K. Huang, S.-L. Kim, and H. Chae, "Live prefetching for mobile computation offloading," IEEE Trans. Wireless Commun., vol. 16, no. 5, pp. 3057–3071, May 2017.
[24] T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, "Collaborative mobile edge computing in 5G networks: New paradigms, scenarios, and challenges," IEEE Commun. Mag., vol. 55, no. 4, pp. 54–61, Apr. 2017.
[25] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.

Suman Kumar received the M.Sc. degree in mathematics from the University of Hyderabad and the Ph.D. degree in mathematics from IIT Patna, India. He has done research in mathematical control theory. He is currently an Assistant Professor of mathematics with IGNTU Amarkantak, India. He has also served as a member of the organizing committee of numerous international conferences. His current research interests include control theory, delay differential systems, abstract linear and nonlinear systems, and modeling and mathematical analysis of wireless communication systems.

Constandinos X. Mavromoustakis is currently a Professor at the Department of Computer Science at the University of Nicosia, Cyprus. He received a five-year dipl.Eng. (BSc, BEng, MEng/KISATS approved/accredited) in Electronic and Computer Engineering from the Technical University of Crete, Greece, an MSc in Telecommunications from University College London, UK, and his PhD from the Department of Informatics at Aristotle University of Thessaloniki, Greece. Professor Mavromoustakis leads the Mobile Systems Lab (MOSys Lab., http://www.mosys.unic.ac.cy/) at the Department of Computer Science at the University of Nicosia. He has been an active member (vice-chair) of the IEEE/R8 regional Cyprus section since Jan. 2016, and since May 2009 he has served as Chair of the C16 Computer Society Chapter of the Cyprus IEEE section. Prof. Mavromoustakis has a dense research output in mobile and wearable computing systems and the Internet-of-Things (IoT), consisting of numerous refereed publications (>230), including several books (IDEA/IGI, Springer, and Elsevier). He has served as a consultant to many industrial bodies (including Intel Corporation LLC (www.intel.com)), and he is a management member of the IEEE Communications Society (ComSoc) Radio Communications Committee (RCC) and a board member of the IEEE-SA Standards IEEE SCC42 WG2040. He has participated in several FP7/H2020/Eureka and national projects. He is a co-founder of the IEEE SIG on Big Data Intelligent Networking (IEEE TC BDIN SIG) and currently serves as a Vice-chair.

Mithun Mukherjee [S'10–M'16] received the B.E. degree in electronics and communication engineering from the University Institute of Technology, Burdwan University, Bardhaman, India, in 2007, the M.E. degree in information and communication engineering from the Indian Institute of Science and Technology, Shibpur, India, in 2009, and the Ph.D. degree in electrical engineering from the Indian Institute of Technology Patna, Patna, India, in 2015. He is currently an Assistant Professor with the Guangdong Provincial Key Laboratory of Petrochemical Equipment Fault Diagnosis, Guangdong University of Petrochemical Technology, Maoming, China. He has (co)authored more than 80 publications in peer-reviewed international transactions/journals and conferences. Dr. Mukherjee was a recipient of the 2016 EAI International Wireless Internet Conference, the 2017 International Conference on Recent Advances on Signal Processing, Telecommunications and Computing, the 2018 IEEE SYSTEMS JOURNAL, and the 2018 IEEE International Conference on Advanced Networks and Telecommunications Systems (ANTS) Best Paper Awards. He has been an Associate Editor of IEEE ACCESS and a Guest Editor of the IEEE INTERNET OF THINGS JOURNAL, the IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, ACM/Springer Mobile Networks and Applications, and Sensors. His current research interests include wireless communications, fog computing, and ultra-reliable low-latency communications.

George Mastorakis graduated from the Department of Electrical & Electronic Engineering of the University of Manchester Institute of Science and Technology (UMIST), UK, in July 2000. He obtained his M.Sc. in Telecommunications from the Department of Electrical & Electronic Engineering of University College London (UCL), UK, in November 2001 and his Ph.D. in the field of interactive digital television in September 2008 from the Department of Information & Communication Systems Engineering of the University of the Aegean, Greece. He serves as an Associate Professor and a research associate at Hellenic Mediterranean University, where he is the Director of the e-Business Intelligence Laboratory. His current research interests include cognitive radio networks, the Internet of Things, energy-efficient networks, big data analytics, and mobile computing. He has participated in a large number of European research projects and has served as a Technical Manager of several research projects funded by the General Secretariat for Research and Technology (GSRT) of the Greek Ministry of Development. He is the author of more than 250 research articles in refereed journals, international conferences, and edited volumes (book chapters) of scientific books, published by IEEE, Elsevier, Springer, Wiley, and Taylor & Francis. He is a member of the Technical Chamber of Greece, IEEE, and the National Accreditation Centre of Vocational Training Structures and Accompanying Support Services. He serves as an evaluator of research project proposals for the European Research Programme "Eurostars", the "Innovative Entrepreneurship" Programme, and the "Digital Convergence" Operational Programme of Greece.


Rakesh Matam [M'14] received his bachelor's degree in computer science and engineering from Jawaharlal Nehru Technological University Hyderabad, Hyderabad, the master's degree from Kakatiya University Warangal, India, and the Ph.D. degree in computer science from IIT Patna in 2014. In 2014, he joined the Department of Computer Science, IIIT, as an Assistant Professor. He is currently a member of the Design and Innovation Center, IIIT Guwahati, and a Principal Investigator of a funded research project sponsored by the Government of India. His research interests are in wireless networks, network security, and cloud and fog computing.

Vikas Kumar received the B.E. degree in electronics and communication engineering from the Institution of Engineers, India, in 2003, the M.Tech. degree in VLSI and CAD from Thapar University, Patiala, India, in 2008, and the Ph.D. degree in electrical engineering from the Indian Institute of Technology Patna, Patna, India, in 2017. He is currently working as SDE-in-Charge with Bharat Sanchar Nigam Limited, Patna, India. He has worked in the field of electronic switching (EWSD, OCB, C-DOT exchanges) and networking. Dr. Vikas Kumar was a recipient of the 2016 CSPA Best Paper Award. His major research interests are in the development of real-time communication systems, VLSI implementation for digital signal processing, VLSI architectural designs, FPGA-based system design, and CORDIC.

Qi Zhang received the M.Sc. and Ph.D. degrees in telecommunications from the Technical University of Denmark (DTU), Denmark, in 2005 and 2008, respectively. She is an Associate Professor with the Department of Engineering, Aarhus University, Denmark. Besides her academic experience, she has various industrial experience. Her research interests include the Tactile Internet, IoT, URLLC, mobile edge computing, massive machine-type communication, non-orthogonal multiple access (NOMA), and compressed sensing. She is serving as an Editor for the EURASIP Journal on Wireless Communications and Networking. She was a Co-Chair of the Co-operative and Cognitive Mobile Networks (CoCoNet) Workshop at the ICC conference 2010-2015 and was a TPC Co-Chair of BodyNets 2015.

