Latency-Driven Parallel Task Data Offloading in Fog Computing Networks for Industrial Applications

This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TII.2019.2957129, IEEE Transactions on Industrial Informatics

Abstract—Fog computing leverages the computational resources at the network edge to meet the increasing demand for latency-sensitive applications in large-scale industries. In this paper, we study computation offloading in a fog computing network where the end-users, most of the time, offload part of their tasks to a fog node. Nevertheless, limited by its computational and storage resources, the fog node further simultaneously offloads the task data to the neighboring fog nodes and/or the remote cloud server to obtain additional computing resources. However, the tasks offloaded from a neighboring node in turn burden the fog node, and task offloading to the remote cloud server can suffer from limited communication resources. Thus, to jointly optimize the amount of task data offloaded to the neighboring fog nodes and the communication resource allocation for the tasks offloaded to the remote cloud, we formulate a latency-driven task data offloading problem that accounts for the transmission delay from fog to cloud and a service rate that includes the local processing time and waiting time at each fog node. The optimization problem is formulated as a Quadratically Constrained Quadratic Program (QCQP), which we solve by semidefinite relaxation (SDR). The simulation results demonstrate that the proposed strategy is effective and scalable under various simulation settings.

I. INTRODUCTION

With an ever-increasing number of Internet of Things (IoT) devices, managing the data generated by them is a real challenge. For example, massive IoT devices in smart factories continuously generate sensor data that needs to be transmitted, stored, and processed for effective monitoring and control. Data centers offered by the cloud address this problem to a significant extent. Nonetheless, several drawbacks, such as latency, network congestion, and communication costs, arise with the physical distance between the data sources and the remote cloud. To address these issues, the fog computing paradigm [1] extends the facilities offered by the cloud to the edge of the network. That is, fog computing brings part of the cloud functionality to the network edge and thereby supports geographically distributed, latency-sensitive, Quality-of-Service (QoS)-demanding IoT applications [1]–[3]. Fog computing acts as an intermediate layer of storage and computing facility between the IoT devices and the cloud and, in turn, reduces the need to access the cloud frequently. By doing so, fog computing significantly lowers the end-to-end delay, communication cost, and congestion, and thereby improves the overall performance of the IoT system. The fog computing layer is usually comprised of network devices such as edge routers, gateways, and access points that run on different software, so it is very challenging to design protocols that enable different fog nodes to collaborate with each other [4]–[6].

A. Motivation

In a typical fog computing system, the resource-constrained end-users can offload the data of computation tasks to fog nodes in the vicinity. However, due to the computational and storage resource constraints within a fog node, the fog node often seeks resources from the cloud data center (vertical collaboration) and/or the other fog nodes (horizontal collaboration). Extra latency and energy consumption could be introduced in both offloading cases. To elaborate, for vertical collaboration, where the fog node tries to offload the task data to the remote cloud, the limited capacity of the uplink results in a further delay for completing the task. Similarly, the lack of sufficient computation and storage resources in a fog node becomes an issue in horizontal collaboration with neighboring fog nodes, although the transmission latency is lower than in the fog-to-cloud scenario. Therefore, the critical yet unsolved challenge is to select the offloading location, i.e., the neighboring fog node or the remote cloud, and split the task while guaranteeing the end-user's delay deadline under varying network traffic.

B. Related Work

Recently, several works focused on computation offloading in fog-edge-cloud computing scenarios [7]–[9] and mobile edge computing [10], [11]. In the literature, the authors mainly

This work was supported by Guangdong science and technology innovation strategy Grant No. 2018KJ011. (Corresponding author: Mithun Mukherjee.)
M. Mukherjee is with the Guangdong Provincial Key Laboratory of Petrochemical Equipment Fault Diagnosis, Guangdong University of Petrochemical Technology, Maoming 525000, China, e-mail: [email protected]
S. Kumar is with the Department of Mathematics, IGNTU, Amarkantak 484886, India, e-mail: [email protected]
C. X. Mavromoustakis is with the Mobile Systems Laboratory (MoSys Lab), Department of Computer Science, University of Nicosia, 1700 Nicosia, Cyprus, e-mail: [email protected]
G. Mastorakis is with the Department of Management Science and Technology, Hellenic Mediterranean University, 72100 Crete, Greece, e-mail: [email protected]
R. Matam is with the Department of Computer Science and Engineering, Indian Institute of Information Technology Guwahati, Guwahati 781015, India, e-mail: [email protected]
V. Kumar was with the Electrical Engineering Department, Indian Institute of Technology Patna, India 801103; he is now with Bharat Sanchar Nigam Limited, Patna, Bihar, India 800001, e-mail: [email protected]
Q. Zhang is with the DIGIT, Department of Engineering, Aarhus University, 8000 Aarhus, Denmark, e-mail: [email protected]
1551-3203 (c) 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
average response time for the tasks computed at the $k$th end-user is given by

$$T_k^{\mathrm{Local}} = \frac{1}{\mu_k - \lambda_k^{\mathrm{CPU}}}, \tag{3}$$

where $\mu_k = 1/T_k^{\mathrm{CPU}}$.

As discussed earlier, the $i$th fog node receives tasks from i) its own end-users and ii) the neighboring fog nodes. Thus, the average number of CPU cycles to compute all the tasks at the fog node side is given by

$$L_{\mathrm{fog},i} = \frac{\sum_{k=1}^{M_i} \lambda_k^{\mathrm{OL}} L_{k,a} + \sum_{j=1}^{J_i} \sum_{k'=1,\, k' \in \mathcal{K} \setminus \mathcal{M}_i}^{|\mathcal{P}_j|} \beta_{j,i}\, \lambda_{k'}^{\mathrm{OL}} L_{k',a}}{\lambda_{\mathrm{fog},i}}. \tag{4}$$

The service time at the $i$th fog node is expressed as $T_{\mathrm{fog},i}^{\mathrm{CPU}} = L_{\mathrm{fog},i} D / f_i$. Thus, the average response time for the tasks computed at the $i$th fog node is given by

$$T_{\mathrm{fog},i}^{\mathrm{Local}} = \frac{1}{\mu_{\mathrm{fog},i} - \lambda_{\mathrm{fog},i}^{\mathrm{CPU}}}, \tag{5}$$

where $\mu_{\mathrm{fog},i} = 1/T_{\mathrm{fog},i}^{\mathrm{CPU}}$.

B. Transmission Delay

In our work, we assume that the output data size of a task is quite small in comparison with the task input [23]. Thus, the feedback time can be ignored compared to the task processing time, which includes the queueing time and computation time. If the downloading of task outputs is considered, we can simply incorporate the downloading time while calculating the end-to-end delay.

Thus, the transmission delay to upload the tasks from the $k$th end-user to the $i$th fog node is given by

$$T_{k,i}^{\mathrm{OL}} = \lambda_k^{\mathrm{OL}} D / r_{k,i}. \tag{6}$$

In the same manner, the transmission time to offload the task from the $i$th fog node to the remote cloud is expressed as

$$T_{\mathrm{fog},i,c}^{\mathrm{OL}} = \frac{\gamma_{i,c}\, \lambda_{\mathrm{fog},i}^{\mathrm{OL}} D}{r_{i,c}}. \tag{7}$$

In a large-scale industry, wired connections such as IEEE 802.3/Ethernet are widely used. For this reason, we assume that the fog nodes are connected with each other via wired links [24]. Hence, we ignore the transmission time to offload tasks between fog nodes. Note that we can easily extend the network model to a scenario with wireless connectivity, however, at the expense of additional transmission delay.

The fog node further offloads part of the task data to the cloud, if necessary. So, the offloading time for the tasks at the fog node is mainly dominated by the maximum of $T_{\mathrm{fog},i}^{\mathrm{Local}}$ and $T_{\mathrm{fog},i,c}^{\mathrm{OL}}$. Finally, we calculate the total delay as

$$T_k^{\mathrm{Total}} = \max\left\{ T_k^{\mathrm{Local}},\; T_{k,i}^{\mathrm{OL}} + \max\left\{ T_{\mathrm{fog},i}^{\mathrm{Local}},\, T_{\mathrm{fog},i,c}^{\mathrm{OL}} \right\} \right\}. \tag{8}$$

IV. PROBLEM FORMULATION

We aim to find the optimal place to offload and compute the task data to meet the user-specific deadline. However, a fog node receives task data from multiple end-users and even from its neighboring fog nodes. Thus, the amount of task data to be processed locally becomes a significant factor due to the waiting delay at the computational-capacity-limited fog node. At the same time, the allocation of the transmission rate between a fog node and the cloud is another factor to be considered. Thus, we formulate the computation offloading and uploading-rate allocation from i) the end-users to the fog node and ii) the fog node to the remote cloud, aiming to minimize the task completion time for each end-user while considering the waiting delay at the fog node.

As we ignore the energy consumption issue in the computation offloading, the end-user can compute tasks until the delay deadline. Thus, we relax $T_k^{\mathrm{Local}}$ in (8). Note that when the input data size is smaller than the amount of task data that can be processed within the tolerable delay, computation offloading is not necessary. Again, this assumption is not valid when we consider the energy consumption. In this work, our objective is to minimize $T'_k = T_{k,i}^{\mathrm{OL}} + \max\{ T_{\mathrm{fog},i}^{\mathrm{Local}}, T_{\mathrm{fog},i,c}^{\mathrm{OL}} \}$; accordingly, our optimization problem is expressed as:

$$\min\; T'_k \quad \forall\, k \tag{9a}$$
$$\text{s.t.}\quad \alpha_{k,i},\, \beta_{i,j},\, \gamma_{i,c} \in [0,1], \tag{9b}$$
$$\sum_{j=1}^{N} \beta_{i,j} + \gamma_{i,c} = 1, \tag{9c}$$
$$\lambda_{\mathrm{fog},i}^{\mathrm{CPU}} < \mu_{\mathrm{fog},i}, \tag{9d}$$
$$r_{k,i} > 0, \tag{9e}$$
$$\sum_{k=1}^{M_i} r_{k,i} \le r_i, \tag{9f}$$
$$r_{i,c} > 0, \tag{9g}$$
$$\sum_{i=1}^{N} r_{i,c} \le r_c, \tag{9h}$$
$$\sum_{i=1}^{N} M_i = K. \tag{9i}$$
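As a quick illustration of the delay model in (3)–(8), the sketch below computes the M/M/1-style response time, the uplink transmission delays, and the total completion time. It is a minimal sketch under our own naming and hypothetical parameter values, not the paper's simulation code:

```python
def response_time(mu, lam):
    """Average response time 1/(mu - lam) as in (3) and (5);
    requires lam < mu, i.e., the stability condition in (9d)."""
    assert lam < mu, "arrival rate must stay below the service rate"
    return 1.0 / (mu - lam)

def upload_delay(offload_rate, data_size_bits, link_rate_bps):
    """Uplink transmission delay lambda * D / r as in (6) and (7)."""
    return offload_rate * data_size_bits / link_rate_bps

def total_delay(t_local_user, t_up_user_fog, t_local_fog, t_up_fog_cloud):
    """Total completion time of (8): the slower of local execution and
    the offloading path through the fog node."""
    return max(t_local_user, t_up_user_fog + max(t_local_fog, t_up_fog_cloud))
```

For instance, with a service rate of 10 tasks/s and an arrival rate of 5 tasks/s, `response_time(10.0, 5.0)` gives a local response time of 0.2 s.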
Constraint (9f) ensures that the aggregate uploading rate of the end-users associated with the $i$th fog node cannot exceed the total uploading rate between the $M_i$ end-users and the $i$th fog node, i.e., $r_i$. Besides, (9g) is the non-negativity constraint on the transmission rate between the $i$th fog node and the cloud. It is reasonable to impose $r_c$ as the maximum total uploading rate between the fog nodes and the cloud in (9h). Finally, the constraint (9i) comes from the total number of end-users.

Let $\zeta = \max\{ T_{\mathrm{fog},i}^{\mathrm{Local}}, T_{\mathrm{fog},i,c}^{\mathrm{OL}} \}$, such that $T_{\mathrm{fog},i}^{\mathrm{Local}} \le \zeta$ and $T_{\mathrm{fog},i,c}^{\mathrm{OL}} \le \zeta$. From (7), we write

$$\gamma_{i,c}\, \lambda_{\mathrm{fog},i}^{\mathrm{OL}} D \le \zeta\, r_{i,c}. \tag{10}$$

Afterward, using (6), the objective function in (9a) becomes

$$\min \frac{\alpha_{k,i} \lambda_k D}{r_{k,i}} + \zeta \;=\; \min \frac{\alpha_{k,i} \lambda_k D + \zeta\, r_{k,i}}{r_{k,i}}. \tag{11}$$

We further replace the denominator by its supremum, $r_i$, which yields a fast minimum. Then, the objective function in (11) can be written as

$$\min\; \left( \alpha_{k,i} \lambda_k D + \zeta\, r_{k,i} \right). \tag{12}$$

We assume that the fog nodes are arranged in descending order based on the tasks offloaded from their respective end-users. Thus, for $i = 1$, i.e., the first fog node, no offloaded data is received from any other fog node. Therefore, (9c) becomes $\sum_{j=i+1}^{N} \beta_{i,j} + \gamma_{i,c} = 1$, and we have $\sum_{j=1}^{i-1} \beta_{j,i}\, \lambda_{\mathrm{fog},j}^{\mathrm{OL}} \le \mu_{\mathrm{fog},i}$, $i > 1$. From (1), we obtain $\lambda_{\mathrm{fog},i} \le \sum_{k=1}^{M_i} \lambda_k^{\mathrm{OL}} + \mu_{\mathrm{fog},i}$. Let $C = \sum_{k=1}^{M_i} \lambda_k^{\mathrm{OL}} + \mu_{\mathrm{fog},i}$, so that $\lambda_{\mathrm{fog},i} \le C$. Accordingly, (10) can be rewritten as

$$\sum_{j=i+1}^{N} \alpha_{k,i}\, \beta_{i,j}\, C D + \alpha_{k,i}\, \gamma_{i,c}\, C D - \zeta\, r_{i,c} \le 0. \tag{13}$$

Let $\mathbf{w}_{k,i,j} = [\alpha_{k,i}, \beta_{i,j}, \gamma_{i,c}, r_{k,i}, r_{i,c}, \zeta, \lambda_{\mathrm{fog},i}, M_i]^{\top}_{8\times 1}$ be the variable vector. Then, the matrix form of (13) becomes

$$\sum_{j=i+1}^{N} \mathbf{w}_{k,i,j}^{\top} \mathbf{A}_1 \mathbf{w}_{k,i,j} + \mathbf{w}_{k,i,j}^{\top} \left( \mathbf{A}_2 + \mathbf{A}_3 \right) \mathbf{w}_{k,i,j} \le 0, \tag{14}$$

where

$$\mathbf{A}_1 = \frac{1}{2}\begin{bmatrix} \begin{bmatrix} 0 & CD \\ CD & 0 \end{bmatrix} & \mathbf{0}_{2\times 6} \\ \mathbf{0}_{6\times 2} & \mathbf{0}_{6\times 6} \end{bmatrix}_{8\times 8}, \qquad \mathbf{A}_2 = \frac{1}{2}\begin{bmatrix} \begin{bmatrix} 0 & 0 & CD \\ 0 & 0 & 0 \\ CD & 0 & 0 \end{bmatrix} & \mathbf{0}_{3\times 5} \\ \mathbf{0}_{5\times 3} & \mathbf{0}_{5\times 5} \end{bmatrix}_{8\times 8},$$

and

$$\mathbf{A}_3 = -\frac{1}{2}\begin{bmatrix} \mathbf{0}_{4\times 4} & \mathbf{0}_{4\times 4} \\ \mathbf{0}_{4\times 4} & \begin{bmatrix} 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix} \end{bmatrix}_{8\times 8},$$

so that $\mathbf{w}_{k,i,j}^{\top} \mathbf{A}_3 \mathbf{w}_{k,i,j} = -\zeta\, r_{i,c}$.

Let $\mathbf{e}_q = [\mathbf{0}_{1\times(q-1)}\; 1\; \mathbf{0}_{1\times(8-q)}]^{\top}$ for $1 \le q \le 8$; therefore, the optimization problem is written as

$$\min_{\mathbf{w}_{k,i,j}} \; \mathbf{b}_k^{\top} \mathbf{w}_{k,i,j} + \mathbf{w}_{k,i,j}^{\top} \mathbf{A}_4 \mathbf{w}_{k,i,j} \tag{15a}$$
$$\text{s.t.}\quad 0 \le \mathbf{e}_u^{\top} \mathbf{w}_{k,i,j} \le 1 \quad \forall\, u \in \{1,2,3\}, \tag{15b}$$
$$\sum_{j=i+1}^{N} \mathbf{e}_2^{\top} \mathbf{w}_{k,i,j} + \mathbf{e}_3^{\top} \mathbf{w}_{k,i,j} = 1, \tag{15c}$$
$$\mathbf{e}_7^{\top} \mathbf{w}_{k,i,j} \le C, \tag{15d}$$
$$\mathbf{e}_4^{\top} \mathbf{w}_{k,i,j} > 0, \tag{15e}$$
$$\sum_{k=1}^{M_i} \mathbf{e}_4^{\top} \mathbf{w}_{k,i,j} \le r_i, \tag{15f}$$
$$\mathbf{e}_5^{\top} \mathbf{w}_{k,i,j} > 0, \tag{15g}$$
$$\sum_{i=1}^{N} \mathbf{e}_5^{\top} \mathbf{w}_{k,i,j} \le r_c, \tag{15h}$$
$$\sum_{i=1}^{N} \mathbf{e}_8^{\top} \mathbf{w}_{k,i,j} = K, \tag{15i}$$
$$\text{and } (14). \tag{15j}$$

Now, to transform the above optimization problem into a homogeneous separable QCQP form, we let $\boldsymbol{\Lambda}_{k,i,j} = [\mathbf{w}_{k,i,j}^{\top}\; 1]^{\top}$. Thus, the above optimization problem becomes

$$\min_{\boldsymbol{\Lambda}_{k,i,j}} \; \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_k \boldsymbol{\Lambda}_{k,i,j} \tag{16a}$$
$$\text{s.t.}\quad 0 \le \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_u \boldsymbol{\Lambda}_{k,i,j} \le 1 \quad \forall\, u \in \{1,2,3\}, \tag{16b}$$
$$\sum_{j=i+1}^{N} \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_2 \boldsymbol{\Lambda}_{k,i,j} + \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_3 \boldsymbol{\Lambda}_{k,i,j} = 1, \tag{16c}$$
$$\boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_{\lambda} \boldsymbol{\Lambda}_{k,i,j} \le C, \tag{16d}$$
$$\boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_r \boldsymbol{\Lambda}_{k,i,j} > 0, \tag{16e}$$
$$\sum_{k=1}^{M_i} \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_r \boldsymbol{\Lambda}_{k,i,j} \le r_i, \tag{16f}$$
$$\boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_c \boldsymbol{\Lambda}_{k,i,j} > 0, \tag{16g}$$
$$\sum_{i=1}^{N} \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_c \boldsymbol{\Lambda}_{k,i,j} \le r_c, \tag{16h}$$
$$\sum_{i=1}^{N} \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_m \boldsymbol{\Lambda}_{k,i,j} = K, \tag{16i}$$
$$\sum_{j=i+1}^{N} \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_1 \boldsymbol{\Lambda}_{k,i,j} + \boldsymbol{\Lambda}_{k,i,j}^{\top} \mathbf{Q}_5 \boldsymbol{\Lambda}_{k,i,j} \le 0, \tag{16j}$$
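To make the matrix form concrete, the sketch below (our own construction, with hypothetical numbers) builds the variable vector w and the matrices A1, A2, and A3 of (14), taking the nonzero entries of A3 negative so that w'A3w = -ζ r_{i,c} matches the sign in (13), and checks that the quadratic forms reproduce the scalar constraint for a single j term:

```python
import numpy as np

# Hypothetical values for one (k, i, j) triple; the ordering follows
# w = [alpha, beta, gamma, r_ki, r_ic, zeta, lambda_fog, M_i]^T.
alpha, beta, gamma = 0.5, 0.3, 0.7
r_ki, r_ic, zeta, lam_fog, M_i = 2.0, 1.5, 0.4, 3.0, 5.0
C, D = 10.0, 256.0

w = np.array([alpha, beta, gamma, r_ki, r_ic, zeta, lam_fog, M_i])

def sym_pair(p, q, val, n=8):
    """Symmetric n x n matrix with val/2 at (p, q) and (q, p) (0-based),
    so that w^T A w = val * w[p] * w[q]."""
    A = np.zeros((n, n))
    A[p, q] = A[q, p] = val / 2.0
    return A

A1 = sym_pair(0, 1, C * D)   # encodes C D alpha beta
A2 = sym_pair(0, 2, C * D)   # encodes C D alpha gamma
A3 = sym_pair(4, 5, -1.0)    # encodes -zeta * r_ic

lhs = w @ A1 @ w + w @ (A2 + A3) @ w                              # one j term of (14)
rhs = alpha * beta * C * D + alpha * gamma * C * D - zeta * r_ic  # scalar form (13)
print(np.isclose(lhs, rhs))  # prints True
```

The helper makes explicit why each off-diagonal pair carries a factor 1/2: the symmetric quadratic form counts every cross term twice.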
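The lifting behind the homogenized QCQP and its relaxation can be verified numerically: for Y = ΛΛ', the quadratic form Λ'QΛ equals Tr(QY), Y is positive semidefinite with rank one, and Λ is recoverable from the leading eigenpair of Y; the relaxation simply drops the rank-one requirement. A small self-contained sketch with arbitrary data (variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary homogenized variable Lambda = [w^T, 1]^T and a symmetric Q.
w = rng.random(8)
Lam = np.append(w, 1.0)
Q = rng.random((9, 9))
Q = (Q + Q.T) / 2.0                    # symmetrize

Y = np.outer(Lam, Lam)                 # lifted variable Y = Lambda Lambda^T

# Tr(Q Y) equals the original quadratic form Lambda^T Q Lambda.
assert np.isclose(np.trace(Q @ Y), Lam @ Q @ Lam)

# Y is PSD of rank one; the SDR keeps Y >= 0 but drops rank(Y) = 1.
eigvals, eigvecs = np.linalg.eigh(Y)   # eigenvalues in ascending order
assert np.sum(eigvals > 1e-9) == 1

# Rank-one recovery: Lambda = sqrt(lambda_max) * v_max, up to sign.
rec = np.sqrt(eigvals[-1]) * eigvecs[:, -1]
if rec[-1] < 0:                        # fix the sign via the trailing 1
    rec = -rec
rec = rec / rec[-1]                    # rescale so the last entry is exactly 1
assert np.allclose(rec, Lam)
```

When the relaxed solution is not rank one, the leading-eigenvector step above yields only an approximate, possibly infeasible point, which is why SDR solutions are typically followed by a rounding or projection step.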
where

$$\mathbf{Q}_1 = \begin{bmatrix} \mathbf{A}_1 & \mathbf{0}_{8\times 1} \\ \mathbf{0}_{1\times 8} & 0 \end{bmatrix}, \quad \mathbf{Q}_2 = \begin{bmatrix} \mathbf{0}_{8\times 8} & \tfrac{1}{2}\mathbf{e}_2 \\ \tfrac{1}{2}\mathbf{e}_2^{\top} & 0 \end{bmatrix}, \quad \mathbf{Q}_3 = \begin{bmatrix} \mathbf{0}_{8\times 8} & \tfrac{1}{2}\mathbf{e}_3 \\ \tfrac{1}{2}\mathbf{e}_3^{\top} & 0 \end{bmatrix}, \quad \text{and} \quad \mathbf{Q}_5 = \begin{bmatrix} \mathbf{A}_2 + \mathbf{A}_3 & \mathbf{0}_{8\times 1} \\ \mathbf{0}_{1\times 8} & 0 \end{bmatrix}.$$

However, the transformed optimization problem is an NP-hard non-convex QCQP problem. Thus, we apply SDR to solve it by a convex programming method. Define $\mathbf{Y}_{k,i,j} \equiv \boldsymbol{\Lambda}_{k,i,j} \boldsymbol{\Lambda}_{k,i,j}^{\top}$. We then drop the constraint $\operatorname{rank}(\mathbf{Y}_{k,i,j}) = 1$. Therefore, we obtain the following SDP problem:

$$\min_{\mathbf{Y}_{k,i,j}} \; \operatorname{Tr}(\mathbf{Q}_k \mathbf{Y}_{k,i,j}) \tag{17a}$$
$$\text{s.t.}\quad 0 \le \operatorname{Tr}(\mathbf{Q}_u \mathbf{Y}_{k,i,j}) \le 1 \quad \forall\, u \in \{1,2,3\}, \tag{17b}$$
$$\sum_{j=i+1}^{N} \operatorname{Tr}(\mathbf{Q}_2 \mathbf{Y}_{k,i,j}) + \operatorname{Tr}(\mathbf{Q}_3 \mathbf{Y}_{k,i,j}) = 1, \tag{17c}$$
$$\operatorname{Tr}(\mathbf{Q}_{\lambda} \mathbf{Y}_{k,i,j}) \le C, \tag{17d}$$
$$\operatorname{Tr}(\mathbf{Q}_r \mathbf{Y}_{k,i,j}) > 0, \tag{17e}$$
$$\sum_{k=1}^{M_i} \operatorname{Tr}(\mathbf{Q}_r \mathbf{Y}_{k,i,j}) \le r_i, \tag{17f}$$
$$\operatorname{Tr}(\mathbf{Q}_c \mathbf{Y}_{k,i,j}) > 0, \tag{17g}$$
$$\sum_{i=1}^{N} \operatorname{Tr}(\mathbf{Q}_c \mathbf{Y}_{k,i,j}) \le r_c, \tag{17h}$$
$$\sum_{i=1}^{N} \operatorname{Tr}(\mathbf{Q}_m \mathbf{Y}_{k,i,j}) = K, \tag{17i}$$
$$\sum_{j=i+1}^{N} \operatorname{Tr}(\mathbf{Q}_1 \mathbf{Y}_{k,i,j}) + \operatorname{Tr}(\mathbf{Q}_5 \mathbf{Y}_{k,i,j}) \le 0. \tag{17j}$$

We solve the above SDP problem in polynomial time using the standard SDP solver SeDuMi [25].

V. SIMULATION RESULTS

In this section, we evaluate the performance of the proposed optimal solution for computation offloading with Monte Carlo simulations. The results are averaged over 10,000 different runs. The extensive simulations are conducted in MATLAB. We set $f_k = 4 \times 10^6$ cycles/s, $f_i = 10 \times 10^6$ cycles/s, $f_c = 100 \times 10^6$ cycles/s, task data size $D = 256$ bits, and $L_a = 1900$ cycles/byte.

Fig. 3(a) illustrates the average delay when the end-users offload their tasks to a standalone fog node. A standalone fog node is one that is not connected to the cloud or to neighboring fog nodes for further task offloading. From the figure, it can be seen that when the tasks are offloaded to the fog node, the delay is lower than with local processing at the end-user side. However, the delay increases significantly with the number of end-users served by a standalone fog node. The reason is that the computational resources of a standalone fog node become fully utilized at higher numbers of end-users and higher task arrival rates.

From Fig. 3(b), it is observed that direct offloading of task data to the cloud server exhibits a linear increase in delay with the task arrival rate per user. However, at higher task arrival rates, the advantage of the computational resources in the cloud becomes apparent, reducing the delay compared to local processing at the end-user side and to a standalone fog node. Interestingly, our proposed optimal policy significantly reduces the delay by balancing the amount of data to be processed at the end-user side, the fog node, and the cloud.

In Fig. 3(c), we show the delay performance versus the number of end-users. For simplicity, in the case of multiple fog nodes, we assume an equal number of end-users per fog node. It is observed from the figure that the average delay increases more sharply when the task arrival rate per user is higher. The main reason is that, with a higher task arrival rate per user, the total number of tasks grows; due to the computational resource limitations of the end-users and fog nodes, most of the time the task data are offloaded to the cloud for computation. As a result, the delay increases because of the transmission delay of uploading these tasks to the cloud.

The impact of the uploading rate from the end-user to the primary fog node, i.e., $r_i$, is shown in Fig. 4(a). When a standalone fog node is considered, a low uploading rate has a more adverse impact on the delay than in the fog-cloud scenario. With a low uploading rate between the end-user and the fog node, although the offloading delay to the fog node contributes significantly to the total delay, our proposed offloading policy optimally decides where to process the tasks. Basically, when we consider fog collaboration as well as cloud connectivity, the proposed policy benefits from more computational resources compared to the standalone fog node. In addition, we see that at a higher uploading rate, although the offloading time between the end-user and the fog node is negligible, the service rate of the fog nodes and the transmission delay between fog and cloud dominate the total delay. Therefore, a further increase in the uploading rate between the end-users and the fog node does not significantly influence the delay performance.

Fig. 4(b) depicts the impact of the uploading rate from the fog node to the cloud. Obviously, the fog-to-cloud uploading rate does not have any impact on the standalone fog node. It is observed from Fig. 4(b) that as the uploading rate from fog to cloud increases, more task data can be offloaded to exploit the high computational resources of the cloud. As a result, the total delay is reduced. Note that when the number of fog nodes increases, more end-users are connected. At the same time, the uploading rate is shared among the fog nodes; consequently, the offloading time from the fog nodes to the cloud increases, thereby increasing the total delay.

Finally, Fig. 4(c) illustrates the delay performance for different ratios of task processing density. In brief, we take two types of tasks that are uniformly distributed. As observed in Fig. 4(c), the average delay increases when the ratio of task processing densities becomes higher. However, our proposed offloading policy outperforms the standalone fog node even at higher task processing density.

VI. CONCLUSION

In this paper, we investigated computation offloading in a fog network while minimizing the task completion time. We
[Fig. 3 shows three delay plots; the curves compare locally processed tasks against tasks offloaded to the fog for K = 1 and K = 2, for task arrival rates λk = 20 and 30 tasks/second, with delay in milliseconds on the vertical axis.]

Fig. 3. Average delay performance, ri = 1 Mbps: (a) with different task arrival rates, (b) with different task arrival rates and rc = 0.6 Mbps, and (c) number of fog nodes N = 3 and rc = 0.6 Mbps.

[Fig. 4 shows three delay plots; the curves compare a standalone fog node against the fog-cloud optimal policy for N = 3, 5, and 10, with delay in milliseconds versus (a) end-user to fog uploading rate ri in bps, (b) fog to cloud uploading rate rc in bps, and (c) the task processing density ratio (high/low).]

Fig. 4. Delay performance, number of end-users per fog node = 5: (a) task arrival rate λk = 50 tasks/second and rc = 0.6 Mbps, (b) task arrival rate λk = 50 tasks/second and ri = 4 Mbps, and (c) task arrival rate λk = 45 tasks/second, N = 2, ri = 4 Mbps, rc = 0.6 Mbps, and low task processing density = 1900 cycles/byte.
have considered horizontal collaboration among multiple fog nodes and vertical collaboration with the remote cloud for parallel task data offloading. Moreover, our scheme considered the transmission delay and a service time that consists of the local processing time and waiting time. After a reformulation, we applied SDR to the optimization problem and solved the resulting SDP problem in polynomial time. The simulation results show the effectiveness of the proposed solution compared to standalone task processing under different parameter settings. Our future work includes the interaction among the end-users, fog, and cloud to maximize reliability under delay constraints.

REFERENCES

[1] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, "Fog computing and its role in the internet of things," in Proceedings of the First Edition of the MCC Workshop on Mobile Cloud Computing, Jan. 2012, pp. 13–16.
[2] M. Mukherjee, L. Shu, and D. Wang, "Survey of fog computing: Fundamental, network applications, and research challenges," IEEE Commun. Surv. Tut., vol. 20, no. 3, pp. 1826–1857, 3rd quarter 2018.
[3] M. Aazam and E. N. Huh, "Fog computing: The Cloud-IoT/IoE middleware paradigm," IEEE Potentials, vol. 35, no. 3, pp. 40–44, May 2016.
[4] Y. Wang, X. Tao, X. Zhang, P. Zhang, and Y. T. Hou, "Cooperative task offloading in three-tier mobile computing networks: An ADMM framework," IEEE Trans. Veh. Technol., vol. 68, no. 3, pp. 2763–2776, Mar. 2019.
[5] M.-H. Chen, B. Liang, and M. Dong, "Multi-user multi-task offloading and resource allocation in mobile cloud systems," IEEE Trans. Wireless Commun., vol. 17, no. 10, pp. 6790–6805, Oct. 2018.
[6] M. Mukherjee, S. Kumar, M. Shojafar, Q. Zhang, and C. X. Mavromoustakis, "Joint task offloading and resource allocation for delay-sensitive fog networks," in Proc. IEEE ICC, May 2019, pp. 1–7.
[7] S. Kosta, A. Aucinas, P. Hui, R. Mortier, and X. Zhang, "ThinkAir: Dynamic resource allocation and parallel execution in the cloud for mobile code offloading," in Proc. IEEE INFOCOM, Mar. 2012, pp. 945–953.
[8] S.-W. Ko, K. Huang, S.-L. Kim, and H. Chae, "Live prefetching for mobile computation offloading," IEEE Trans. Wireless Commun., vol. 16, no. 5, pp. 3057–3071, May 2017.
[9] Y. Wu, Y. He, L. P. Qian, J. Huang, and X. Shen, "Optimal resource allocations for mobile data offloading via dual-connectivity," IEEE Trans. Mobile Comput., vol. 17, no. 10, pp. 2349–2365, Oct. 2018.
[10] L. Jiao, H. Yin, H. Huang, D. Guo, and Y. Lyu, "Computation offloading for multi-user mobile edge computing," in Proc. IEEE HPCC/SmartCity/DSS, June 2018, pp. 422–429.
[11] S. Yu, R. Langar, X. Fu, L. Wang, and Z. Han, "Computation offloading with data caching enhancement for mobile edge computing," IEEE Trans. Veh. Technol., vol. 67, no. 11, pp. 11098–11112, Nov. 2018.
[12] J. Du, L. Zhao, J. Feng, and X. Chu, "Computation offloading and resource allocation in mixed fog/cloud computing systems with min-max fairness guarantee," IEEE Trans. Commun., vol. 66, no. 4, pp. 1594–1608, Apr. 2018.
[13] M. Chen, M. Dong, and B. Liang, "Joint offloading decision and resource allocation for mobile cloud with computing access point," in Proc. IEEE Int. Conf. on Acoustics, Speech and Signal Process. (ICASSP), Mar. 2016.
[14] M.-H. Chen, B. Liang, and M. Dong, "A semidefinite relaxation approach to mobile cloud offloading with computing access point," in
Proc. IEEE 16th Int. Workshop on Signal Process. Advances in Wireless Commun. (SPAWC), June 2015, pp. 1–5.
[15] Z. Liu, X. Yang, Y. Yang, K. Wang, and G. Mao, "DATS: Dispersive stable task scheduling in heterogeneous fog networks," IEEE Internet Things J., vol. 6, no. 2, pp. 3423–3436, Apr. 2019.
[16] J. Liu and Q. Zhang, "Offloading schemes in mobile edge computing for ultra-reliable low latency communications," IEEE Access, vol. 6, pp. 12825–12837, Feb. 2018.
[17] J. Liu and Q. Zhang, "Code-partitioning offloading schemes in mobile edge computing for augmented reality," IEEE Access, vol. 7, pp. 11222–11236, 2019.
[18] Y.-Y. Shih, W.-H. Chung, A.-C. Pang, T.-C. Chiu, and H.-Y. Wei, "Enabling low-latency applications in fog-radio access networks," IEEE Network, vol. 31, no. 1, pp. 52–58, Jan. 2017.
[19] M. Mukherjee, Y. Liu, J. Lloret, L. Guo, R. Matam, and M. Aazam, "Transmission and latency-aware load balancing for fog radio access networks," in Proc. IEEE GLOBECOM, Dec. 2018, pp. 1–6.
[20] Y. Xiao and M. Krunz, "QoE and power efficiency tradeoff for fog computing networks with fog node cooperation," in Proc. IEEE INFOCOM, May 2017, pp. 1–9.
[21] M. Mukherjee, S. Kumar, Q. Zhang, R. Matam, C. X. Mavromoustakis, Y. Lv, and G. Mastorakis, "Task data offloading and resource allocation in fog computing with multi-task delay guarantee," IEEE Access, vol. 7, pp. 152911–152918, Sept. 2019.
[22] C.-F. Liu, M. Bennis, M. Debbah, and H. V. Poor, "Dynamic task offloading and resource allocation for ultra-reliable low-latency edge computing," IEEE Trans. Commun., vol. 67, no. 6, pp. 4132–4150, June 2019.
[23] S.-W. Ko, K. Huang, S.-L. Kim, and H. Chae, "Live prefetching for mobile computation offloading," IEEE Trans. Wireless Commun., vol. 16, no. 5, pp. 3057–3071, May 2017.
[24] T. X. Tran, A. Hajisami, P. Pandey, and D. Pompili, "Collaborative mobile edge computing in 5G networks: New paradigms, scenarios, and challenges," IEEE Commun. Mag., vol. 55, no. 4, pp. 54–61, Apr. 2017.
[25] S. Boyd and L. Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge Univ. Press, 2004.

Suman Kumar received the M.Sc. degree in mathematics from the University of Hyderabad and the Ph.D. degree in mathematics from IIT Patna, India. He has done research in mathematical control theory. He is currently an Assistant Professor of mathematics with IGNTU Amarkantak, India. He has also served as a member of the organizing committee of numerous international conferences. His current research interests include control theory, delay differential systems, abstract linear and nonlinear systems, and modeling and mathematical analysis of wireless communication systems.

Constandinos X. Mavromoustakis is currently a Professor at the Department of Computer Science at the University of Nicosia, Cyprus. He received a five-year dipl.Eng (BSc, BEng, MEng/KISATS approved/accredited) in Electronic and Computer Engineering from the Technical University of Crete, Greece, an MSc in Telecommunications from University College London, UK, and his PhD from the Department of Informatics at Aristotle University of Thessaloniki, Greece. Professor Mavromoustakis leads the Mobile Systems Lab. (MoSys Lab., http://www.mosys.unic.ac.cy/) at the Department of Computer Science at the University of Nicosia; he has been an active member (vice-chair) of the IEEE/R8 regional Cyprus section since Jan. 2016, and since May 2009 he has served as Chair of the C16 Computer Society Chapter of the Cyprus IEEE section. Prof. Mavromoustakis has a dense research output in mobile and wearable computing systems and the Internet-of-Things (IoT), consisting of numerous refereed publications (>230), including several books (IDEA/IGI, Springer, and Elsevier). He has served as a consultant to many industrial bodies (including Intel Corporation LLC (www.intel.com)), and he is a management member of the IEEE Communications Society (ComSoc) Radio Communications Committee (RCC) and a board member of the IEEE-SA Standards IEEE SCC42 WG2040. He has participated in several FP7/H2020/Eureka and national projects. He is a co-founder of the IEEE SIG on Big Data Intelligent Networking (IEEE TC BDIN SIG) and currently serves as a Vice-chair.
Rakesh Matam [M'14] received his bachelor's degree in computer science and engineering from Jawaharlal Nehru Technological University Hyderabad, Hyderabad, the master's degree from Kakatiya University Warangal, India, and the Ph.D. degree in computer science from IIT Patna in 2014. In 2014, he joined the Department of Computer Science, IIIT Guwahati, as an Assistant Professor. He is currently a member of the Design and Innovation Center, IIIT Guwahati, and a Principal Investigator of a funded research project sponsored by the Government of India. His research interests are in wireless networks, network security, and cloud and fog computing.

Qi Zhang received the M.Sc. and Ph.D. degrees in telecommunications from the Technical University of Denmark (DTU), Denmark, in 2005 and 2008, respectively. She is an Associate Professor with the Department of Engineering, Aarhus University, Denmark. Besides her academic experience, she has various industrial experiences. Her research interests include the Tactile Internet, IoT, URLLC, mobile edge computing, massive machine-type communication, non-orthogonal multiple access (NOMA), and compressed sensing. She is serving as an Editor for the EURASIP Journal on Wireless Communications and Networking. She was a Co-Chair of the Co-operative and Cognitive Mobile Networks (CoCoNet) Workshop at the ICC conference 2010–2015 and was a TPC Co-Chair of BodyNets 2015.