Article
Dynamic Traffic Scheduling and Congestion Control
across Data Centers Based on SDN
Dong Sun 1, Kaixin Zhao 2, Yaming Fang 3 and Jie Cui 3,*
Abstract: Software-defined Networking (SDN) and Data Center Network (DCN) are receiving
considerable attention and eliciting widespread interest from both academia and industry. When the
traditional shortest-path routing protocol is used among multiple data centers, congestion will
frequently occur in the shortest path link, which may severely reduce the quality of network services
due to long delay and low throughput. The flexibility and agility of SDN can effectively ameliorate
the aforementioned problem. However, the utilization of link resources across data centers is still
insufficient, and has not yet been well addressed. In this paper, we focused on this issue and proposed
an intelligent approach of real-time processing and dynamic scheduling that could make full use of
the network resources. The traffic among the data centers could be classified into different types,
and different strategies were proposed for these types of real-time traffic. Considering the prolonged
occupation of the bandwidth by malicious flows, we employed the multilevel feedback queue
mechanism and proposed an effective congestion control algorithm. Simulation experiments showed
that our scheme is feasible, achieves a better traffic scheduling effect, and greatly improves
bandwidth utilization across data centers.
Keywords: data center; dynamic scheduling; congestion control; OpenFlow; software-defined networking
1. Introduction
Big data has become one of the hottest topics among academia and industry. With the development
of big data, the amount of data from different sources such as the Internet of Things, social networking
websites, and scientific research is increasing at an exponential rate [1]. The scale of data centers has
gradually extended. Meanwhile, an increasing amount of data is being transmitted in data center
networks, and traffic exchange among the servers in data centers has also been growing fast. The most
direct results may be a low utilization ratio, congestion, service latency, and even DDoS
attacks [2]. Data centers are always interconnected through wide area networks [3]. When traditional
routing protocols are used in data center networks, flows are forced to preempt the shortest path to
be routed and forwarded, which might lead the shortest path link to be under full load while some
new flows are still competing for it, and other links are under low load. Without shunting for flows,
the link would easily become congested and unable to provide normal network services. The most direct,
but most expensive, solution is to rebuild and upgrade the network. However, on account of the scale,
complexity, and heterogeneity of current computer networks, traditional approaches to configuring
network devices, monitoring and optimizing network performance, identifying and solving network
problems, and planning network growth have become inefficient and nearly impractical [4].
Flows can select different paths according to their types and the real-time link information
(Figure 1b), and the congestion can also be well controlled.
Figure 1. Flows that preempt the shortest path (a) can be shunted into two different paths (b) with DSCSD.
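The shunting in Figure 1 can be sketched in code. The following is a minimal Python sketch under our own assumptions (the bottleneck-residual heuristic, link names, and numbers are illustrative, not the paper's exact algorithm):

```python
# Illustrative sketch: shunt a new flow onto the candidate path with the
# largest bottleneck residual bandwidth (links and loads are made up).

def residual_bandwidth(path, load, capacity):
    """Bottleneck residual of a path: min residual over its links."""
    return min(capacity[link] - load[link] for link in path)

def select_path(paths, load, capacity):
    """Pick the candidate path with the largest bottleneck residual."""
    return max(paths, key=lambda p: residual_bandwidth(p, load, capacity))

# Links inspired by Figure 1: shortest path S1-A-C vs. detour S1-A-B-C (Mbps).
capacity = {("S1", "A"): 4, ("A", "C"): 4, ("A", "B"): 4, ("B", "C"): 4}
load     = {("S1", "A"): 1, ("A", "C"): 3, ("A", "B"): 0, ("B", "C"): 0}
paths = [
    [("S1", "A"), ("A", "C")],              # shortest, nearly full
    [("S1", "A"), ("A", "B"), ("B", "C")],  # detour, lightly loaded
]
best = select_path(paths, load, capacity)
print(best)  # the lightly loaded detour is chosen
```

A newly arrived flow is thus shunted away from the preempted shortest path instead of competing for it.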
The main contributions of this paper are as follows. First, we proposed a new approach for traffic
scheduling that could route a newly arrived flow based on the real-time link information, and also
dynamically schedule flows on the link. Compared with traditional approaches and threshold-based
SDN schemes, our approach improves the utilization efficiency of the shortest-path links. Second,
we innovatively adopted a multilevel feedback queue mechanism of congestion control suitable for
different types of flows that could realize anomaly detection by preventing malicious flows from
occupying the bandwidth for a long time.
The remainder of the paper is organized as follows: In Section 2, we review the traditional network
and SDN used in data centers. Then, we elaborate on the design and implementation of our proposed
scheme, DSCSD (Dynamic Scheduling and Congestion control across data centers based on SDN), in
Section 3. The performance evaluation of DSCSD is presented in Section 4. Finally, Section 5 concludes
the paper.
Future Internet 2018, 10, 64
2. Related Work
This section reviews recent studies on traffic engineering in data centers, which served as our
research background and theoretical foundation. We discuss this from two aspects: data centers based
on traditional networks, and data centers based on SDN together with their challenges. The first
aspect explains the development of distributed data centers and the limitations of the traditional
network protocols used in current data centers; these limitations motivate our proposed research.
The second aspect surveys recent research progress on SDN-based data centers.
distributed controllers can realize the expansion of the network and conveniently manage network
devices [18–20]. Koponen et al. [21] presented Onix, on which the SDN control plane could be
implemented as a distributed system. Benson et al. [22] proposed MicroTE, a fine-grained routing
approach for data centers, after finding that data center networks based on traditional networks lack
multi-path routing as well as an overall view of workloads and routing strategies. Hindman
et al. [23] presented Mesos, a platform for fine-grained resource sharing in data centers. Curtis
et al. [24] proposed Mahout to handle elephant flows (i.e., large flows), which are detected in
end-host socket buffers. Due to traffic bursts and equipment failures, congestion and
failure frequently occur.
failure frequently occur. Large flows and small flows are always mingled in data center networks.
Kanagavelu et al. [25] proposed a flow-based edge-to-edge rerouting scheme. When congestion
appeared, it focused on rerouting the large flows to alternative links; the rationale is
that shifting short-lived small flows among links would additionally increase overhead and
latency. Khurshid et al. [26] provided VeriFlow, a layer between a software-defined networking
controller and network devices; it is a network debugging tool that finds faulty rules issued by SDN
applications and anomalous network behavior. Tso et al. [27] presented the Baatdaat flow scheduling
algorithm using spare data center network capacity to mitigate the performance degradation of heavily
utilized links. This could ensure real-time dynamic scheduling, which could avoid congestion caused
by instantaneous flows. Li et al. [28] proposed a traffic scheduling solution based on the fuzzy synthetic
evaluation mechanism, and the path could be dynamically adjusted according to the overall view of
the network.
In summary, most current approaches to traffic engineering in data center networks based on SDN
fall into two categories. The first common approach is based on a single centralized controller,
which suffers from problems of scalability and reliability. The other is based on multiple
controllers, but the cooperation of multiple controllers and a consistent update mechanism
have not been well addressed thus far [29]. Our proposed scheme is the first attempt to apply the
multilevel feedback queue mechanism to traffic scheduling and congestion control across data centers
based on SDN, and can also provide a new solution to congestion caused by the prolonged occupation
of malicious flows.
Figure 2. System model.
According to the aforementioned analysis of the link states, we can realize dynamic traffic
scheduling to improve the utilization of bandwidth resources.
Figure 3. Flow waiting queue 1 (time slice t) and flow waiting queue 2 (time slice 2t), each feeding link transmission.
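The waiting queues above can be expressed as a multilevel feedback queue. The following Python sketch is our illustration of the mechanism under assumed parameters (two levels with time slices t and 2t; the demotion rule and names are assumptions, not the paper's exact implementation):

```python
from collections import deque

class MultilevelFeedbackQueue:
    """Flows that exhaust a time slice without finishing are demoted,
    so a malicious flow cannot occupy the link indefinitely."""

    def __init__(self, base_slice=1.0, levels=2):
        self.queues = [deque() for _ in range(levels)]
        self.slices = [base_slice * 2 ** i for i in range(levels)]  # t, 2t, ...

    def enqueue(self, flow, level=0):
        self.queues[level].append(flow)

    def schedule(self):
        """Return (flow, time_slice, level) from the highest non-empty queue."""
        for level, queue in enumerate(self.queues):
            if queue:
                return queue.popleft(), self.slices[level], level
        return None

    def expire(self, flow, level):
        """The flow used up its slice: demote it to the next (slower) level."""
        self.enqueue(flow, min(level + 1, len(self.queues) - 1))

mlfq = MultilevelFeedbackQueue()
mlfq.enqueue("c5-flow")                # suspected long-lived flow
flow, ts, level = mlfq.schedule()      # granted time slice t at level 0
mlfq.expire(flow, level)               # still transmitting -> demoted
flow, ts, level = mlfq.schedule()      # now granted slice 2t at level 1
print(flow, ts, level)
```

A well-behaved flow finishes within its slice and leaves; a flow that keeps expiring sinks to the lowest queue, which bounds how much bandwidth it can monopolize.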
4. Experiment Results and Performance Analysis
The SDN controller selected in our scheme is the free, open source Floodlight, which runs in the
Eclipse environment on the Ubuntu system. The virtual switch uses the open source Open vSwitch 2.3.0.
The virtual network was created by Mininet in Ubuntu 14.04.
We implemented DSCSD on top of the Floodlight v1.2 controller in our simulation experiments.
Floodlight is open source, and modules can be freely added and deleted, which made it convenient
for our tests. In our experimental setup, the virtual switch was created by Open vSwitch, and
Floodlight was chosen as the SDN controller. We added two modules, named initflowtable and
trafficmanage, to Floodlight. The module initflowtable was used to initialize variables and install
some preprocessing flow tables. The main function of our scheme lay in the module trafficmanage,
which could monitor the Packet-in and Flow-removed messages of the OpenFlow protocol and record the
installed flow tables as well as link information, including congestion and bandwidth. When a new
flow arrives or a flow needs to be scheduled, the module trafficmanage can achieve accurate and
real-time detection and control.
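The trafficmanage logic can be outlined as follows. Floodlight modules are written in Java; this is only a language-agnostic sketch in Python, and the handler names and the bottleneck-residual routing rule are our assumptions rather than the module's actual code:

```python
class TrafficManage:
    """Keeps per-link load up to date from OpenFlow events: a Packet-in
    places a new flow, and a Flow-removed releases its bandwidth."""

    def __init__(self, link_capacity):
        self.capacity = dict(link_capacity)         # link -> capacity
        self.load = {l: 0 for l in link_capacity}   # link -> current load
        self.flows = {}                             # flow id -> (path, demand)

    def on_packet_in(self, flow_id, demand, candidate_paths):
        # Route the new flow on the path with the largest bottleneck residual.
        path = max(candidate_paths,
                   key=lambda p: min(self.capacity[l] - self.load[l] for l in p))
        for link in path:
            self.load[link] += demand
        self.flows[flow_id] = (path, demand)
        return path

    def on_flow_removed(self, flow_id):
        # Release bandwidth so later decisions see real-time link information.
        path, demand = self.flows.pop(flow_id)
        for link in path:
            self.load[link] -= demand

tm = TrafficManage({"a": 4, "b": 4, "c": 4})        # capacities in Mbps
p1 = tm.on_packet_in("f1", 3, [["a"], ["b", "c"]])  # link "a" still empty
p2 = tm.on_packet_in("f2", 3, [["a"], ["b", "c"]])  # "a" nearly full now
print(p1, p2)  # f1 takes "a"; f2 is shunted to the "b"-"c" path
tm.on_flow_removed("f1")                            # load on "a" drops to 0
```

Because Flow-removed events return bandwidth to the pool, the controller's view of each link stays current without polling the switches.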
As shown in Figure 4, we used several virtual hosts to simulate three data centers; h1 and h2 were
the servers of two individual data centers, and c1, c2, c3, c4, and c5 were collectively used as
another data center. Certainly, we should assign virtual IP addresses and MAC addresses for them.
We also used the flows from any one of h1 and h2 to any one of c1, c2, c3, c4, and c5 to simulate
the interconnected flows among the data centers. In accordance with the system model in Figure 2,
the path (S1, S4, S5) was the shortest path, leaving (S1, S2, S3, S4) the second shortest. Here,
we conducted two tests to verify the effectiveness and performance of DSCSD.
Test 1: Verify the effectiveness of DSCSD. We used Mininet to set a 4 Mbps bandwidth for both the
shortest path and the second shortest path, and we also utilized the tool iperf to simulate four
flows: from h1 to c1 and c2, and from h2 to c3 and c4. Each flow demanded a bandwidth of 2 Mbps.
We discussed and analyzed the performance of the traditional network and DSCSD. Then, we chose a
span under the same conditions to test the loss tolerance and real-time bandwidth of these flows.
Some of these data are presented in Table 1. Here, we adopted four symbolic notations: "h1–c1"
means the traffic from h1 to c1; by analogy, "h1–c2" means the traffic from h1 to c2; "h2–c3" means
the traffic from h2 to c3; and "h2–c4" means the traffic from h2 to c4. We also calculated the
overall link utilization of both cases, as shown in Figure 5.
In Test 1, the loss tolerance of the four flows in the traditional network was high, with two of
them close to 100%. As for the overall link utilization, the average link utilization of our scheme
was kept at 97%, while that of the traditional network was just 48%. This comparison verified the
large improvement in link utilization obtained by using DSCSD.
Figure 4. Experimental topology.

Table 1. Real-time bandwidth comparison.
bps               Traditional Network                       DSCSD
Time     h1–c1    h1–c2    h2–c3    h2–c4     h1–c1    h1–c2    h2–c3    h2–c4
0–1 s    106 k    1.87 M   1.91 M   25.3 k    1.95 M   1.94 M   1.95 M   1.94 M
1–2 s    11.8 k   1.86 M   1.81 M   70.6 k    1.94 M   1.95 M   1.94 M   1.94 M
2–3 s    11.8 k   1.89 M   1.89 M   35.3 k    1.94 M   1.94 M   1.94 M   1.95 M
3–4 s    153 k    1.92 M   1.80 M   11.8 k    1.95 M   1.94 M   1.94 M   1.94 M
4–5 s    35.3 k   1.75 M   1.89 M   176 k     1.94 M   1.94 M   1.95 M   1.95 M
5–6 s    82.3 k   1.94 M   1.89 M   82.3 k    1.82 M   1.95 M   1.94 M   1.94 M
6–7 s    35.3 k   1.94 M   1.88 M   23.5 k    1.94 M   1.94 M   1.87 M   1.95 M
7–8 s    23.5 k   1.95 M   1.88 M   58.8 k    1.95 M   1.94 M   1.93 M   1.94 M
8–9 s    11.8 k   1.89 M   1.92 M   23.5 k    1.95 M   1.95 M   1.94 M   1.94 M
9–10 s   23.5 k   1.91 M   1.90 M   11.8 k    1.94 M   1.95 M   1.94 M   1.93 M
Figure 5. Overall link utilization comparison.
Here, we divided Test 2 into two cases: real-time bandwidth without congestion control queues and
real-time bandwidth with congestion control queues; then, we discussed and analyzed the
effectiveness of congestion control in these two cases. The real-time bandwidths in the two cases
are respectively shown in Figures 6 and 7.
Figure 6. Real-time bandwidth without congestion control queues.
Figure 7. Real-time bandwidth with congestion control queues.
At 10 s, the path (S1, S4, S5) had been saturated with the flows from c5, c1, and c2. At 20 s, we
received the request of the normal flows from c3 and c4. At this time, if we do not use congestion
control queues (case 1), the flow from c5 with high priority directly affects the transmission
quality of the other flows, which can be observed in Figure 6. In contrast, if we used congestion
control queues (case 2), when we received the request of the normal flows from c3 and c4, the flows
from c5 were scheduled into the congestion control queues, as shown in Figure 7. Depending on the
bandwidth remaining on the path, the flow from c5 will be rescheduled onto the path to be
transmitted. With this mechanism, we can provide a new solution to control the congestion caused by
the prolonged occupation of malicious flows.
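Case 2 can be summarized in code. This is a minimal Python sketch: the event sequence follows the description above, while the admission check, numbers, and names are our illustrative assumptions:

```python
def admit_normal_flows(capacity, active, suspects, new_flows):
    """Park suspected long-lived flows in the congestion control queue so
    that arriving normal flows are served; re-admit a parked flow only
    while residual bandwidth remains on the path."""
    queued = {}
    for flow in [f for f in active if f in suspects]:
        queued[flow] = active.pop(flow)      # step 1: park the suspect
    active.update(new_flows)                 # step 2: admit normal flows
    for flow, demand in list(queued.items()):
        if sum(active.values()) + demand <= capacity:
            del queued[flow]
            active[flow] = demand            # step 3: reschedule if room
    return active, queued

# At 20 s: a path of capacity 4 carries the c5 flow (2) plus the c1 and c2
# flows (1 each); normal flows toward c3 and c4 (1 each) then arrive.
active, queued = admit_normal_flows(
    capacity=4,
    active={"c5": 2, "c1": 1, "c2": 1},
    suspects={"c5"},
    new_flows={"c3": 1, "c4": 1},
)
print(active, queued)  # c5 waits in the queue until bandwidth remains
```

Once any of the normal flows finishes and frees bandwidth, the same check re-admits the queued c5 flow, matching the rescheduling behavior described above.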
As shown in Figure 8, we compared the link delay in the two aforementioned cases. We can easily draw
the conclusion that, by using the congestion control queue, the congestion caused by malicious flows
can be well addressed, with a relative reduction in link delay.
Author Contributions: D.S. conceived and designed the system model and algorithm; K.Z. was responsible for
literature retrieval and chart making; Y.F. performed the experiments, analyzed the data, and wrote the paper;
J.C. designed the research plan, conceived the algorithm model, and polished the paper.
Funding: This research was funded by the National Natural Science Foundation of China (grant number
61502008), the Key Scientific Research Project of Henan Higher Education (grant number 16A520084),
the Natural Science Foundation of Anhui Province (grant number 1508085QF132), and the Doctoral
Research Start-Up Funds Project of Anhui University.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Cui, L.; Yu, F.R.; Yan, Q. When big data meets software-defined networking: SDN for big data and big data
for SDN. IEEE Netw. 2016, 30, 58–65. [CrossRef]
2. Lan, Y.L.; Wang, K.; Hsu, Y.H. Dynamic load-balanced path optimization in SDN-based data center networks.
In Proceedings of the 10th International Symposium on Communication Systems, Networks and Digital
Signal Processing, Prague, Czech Republic, 20–22 July 2016; pp. 1–6.
3. Ghaffarinejad, A.; Syrotiuk, V.R. Load Balancing in a Campus Network Using Software Defined Networking.
In Proceedings of the Third GENI Research and Educational Experiment Workshop, Atlanta, GA, USA,
19–20 March 2014; pp. 75–76.
4. Xia, W.; Wen, Y.; Foh, C.; Niyato, D.; Xie, H. A Survey on Software-Defined Networking. IEEE Commun.
Surv. Tutor. 2015, 17, 27–51. [CrossRef]
5. Nunes, A.; Mendonca, M.; Nguyen, X.; Obraczka, K.; Turletti, T. A Survey of Software-Defined Networking:
Past, Present, and Future of Programmable Networks. IEEE Commun. Surv. Tutor. 2014, 16, 1617–1634.
[CrossRef]
6. Lin, P.; Bi, J.; Wang, Y. WEBridge: West–east bridge for distributed heterogeneous SDN NOSes peering.
Secur. Commun. Netw. 2015, 8, 1926–1942. [CrossRef]
7. Sezer, S.; Scott-Hayward, S.; Chouhan, P.K.; Fraser, B.; Lake, D.; Finnegan, J.; Viljoen, N.; Miller, M.; Rao, N.
Are we ready for SDN? Implementation challenges for software-defined networks. IEEE Commun. Mag.
2013, 51, 36–43. [CrossRef]
8. Mckeown, N.; Anderson, T.; Balakrishnan, H.; Parulkar, G.; Peterson, L.; Rexford, J.; Shenker, S.; Turner, J.
OpenFlow: Enabling innovation in campus networks. ACM SIGCOMM Comput. Commun. Rev. 2008,
38, 69–74. [CrossRef]
9. Kim, H.; Feamster, N. Improving network management with software defined networking.
IEEE Commun. Mag. 2013, 51, 114–119. [CrossRef]
10. Greenberg, A.; Hamilton, J.; Maltz, D.A.; Patel, P. The cost of a cloud: Research problems in data center
networks. ACM SIGCOMM Comput. Commun. Rev. 2008, 39, 68–73. [CrossRef]
11. Cheng, J.; Cheng, J.; Zhou, M.; Liu, F.; Gao, S.; Liu, C. Routing in Internet of Vehicles: A Review. IEEE Trans.
Intell. Transp. Syst. 2015, 16, 2339–2352. [CrossRef]
12. Ghemawat, S.; Gobioff, H.; Leung, S.T. The Google file system. In Proceedings of the Nineteenth ACM
Symposium on Operating Systems Principles, Bolton Landing, NY, USA, 19–22 October 2003; pp. 29–43.
13. Shvachko, K.; Kuang, H.; Radia, S.; Chansler, R. The Hadoop Distributed File System. In Proceedings of the
IEEE 26th Symposium on MASS Storage Systems and Technologies, Incline Village, NV, USA, 3–7 May 2010;
pp. 1–10.
14. Dean, J.; Ghemawat, S. MapReduce: Simplified Data Processing on Large Clusters. In Proceedings of the
6th Conference on Symposium on Operating Systems Design & Implementation, San Francisco, CA, USA,
6–8 December 2004; pp. 137–150.
15. Ali, S.T.; Sivaraman, V.; Radford, A.; Jha, S. A Survey of Securing Networks Using Software Defined
Networking. IEEE Trans. Reliab. 2015, 64, 1–12. [CrossRef]
16. Tavakoli, A.; Casado, M.; Koponen, T.; Shenker, S. Applying NOX to the Datacenter. In Proceedings of the
Eighth ACM Workshop on Hot Topics in Networks (HotNets-VIII), New York, NY, USA, 22–23 October 2009.
17. Tootoonchian, A.; Ganjali, Y. HyperFlow: A distributed control plane for OpenFlow. In Proceedings of the
Internet Network Management Conference on Research on Enterprise Networking, San Jose, CA, USA,
27 April 2010; p. 3.
18. Yu, Y.; Lin, Y.; Zhang, J.; Zhao, Y.; Han, J.; Zheng, H.; Cui, Y.; Xiao, M.; Li, H.; Peng, Y.; et al. Field
Demonstration of Datacenter Resource Migration via Multi-Domain Software Defined Transport Networks
with Multi-Controller Collaboration. In Proceedings of the Optical Fiber Communication Conference,
San Francisco, CA, USA, 9–13 March 2014; pp. 1–3.
19. Zhang, C.; Hu, J.; Qiu, J.; Chen, Q. Reliable Output Feedback Control for T-S Fuzzy Systems with
Decentralized Event Triggering Communication and Actuator Failures. IEEE Trans. Cybern. 2017,
47, 2592–2602. [CrossRef] [PubMed]
20. Zhang, C.; Feng, G.; Qiu, J.; Zhang, W. T-S Fuzzy-model-based Piecewise H_infinity Output Feedback
Controller Design for Networked Nonlinear Systems with Medium Access Constraint. Fuzzy Sets Syst. 2014,
248, 86–105. [CrossRef]
21. Koponen, T.; Casado, M.; Gude, N.S.; Stribling, J.; Poutievski, L.; Zhu, M.; Ramanathan, R.; Iwata, Y.;
Inoue, H.; Hama, T.; et al. Onix: A distributed control platform for large-scale production networks.
In Proceedings of the Usenix Symposium on Operating Systems Design and Implementation, Vancouver,
BC, Canada, 4–6 October 2010; pp. 351–364.
22. Benson, T.; Anand, A.; Akella, A.; Zhang, M. MicroTE: Fine grained traffic engineering for data centers.
In Proceedings of the CONEXT, Tokyo, Japan, 6–9 December 2011.
23. Hindman, B.; Konwinski, A.; Zaharia, M.; Ghodsi, A.; Joseph, A.D.; Katz, R.; Shenker, S.; Stoica, I. Mesos:
A Platform for Fine-Grained Resource Sharing in the Data Center. In Proceedings of the 8th USENIX
Conference on Networked Systems Design and Implementation, San Jose, CA, USA, 25–27 April 2012;
pp. 429–483.
24. Curtis, A.R.; Kim, W.; Yalagandula, P. Mahout: Low-overhead datacenter traffic management using
end-host-based elephant detection. In Proceedings of the 2011 Proceedings IEEE INFOCOM, Shanghai,
China, 10–15 April 2011; pp. 1629–1637.
25. Kanagavelu, R.; Mingjie, L.N.; Mi, K.M.; Lee, B.; Francis; Heryandi. OpenFlow based control for re-routing
with differentiated flows in Data Center Networks. In Proceedings of the 18th IEEE International Conference
on Networks, Singapore, 12–14 December 2012; pp. 228–233.
26. Khurshid, A.; Zou, X.; Zhou, W.; Caesar, M.; Godfrey, P.B. Veriflow: Verifying network-wide invariants in
real time. ACM SIGCOMM Comput. Commun. Rev. 2012, 42, 467–472. [CrossRef]
27. Tso, F.P.; Pezaros, D.P. Baatdaat: Measurement-based flow scheduling for cloud data centers. In Proceedings
of the 2013 IEEE Symposium on Computers and Communications (ISCC), Split, Croatia, 7–10 July 2013.
28. Li, J.; Chang, X.; Ren, Y.; Zhang, Z.; Wang, G. An Effective Path Load Balancing Mechanism Based on SDN.
In Proceedings of the IEEE 13th International Conference on Trust, Security and Privacy in Computing and
Communications, Beijing, China, 24–26 September 2014; pp. 527–533.
29. Li, D.; Wang, S.; Zhu, K.; Xia, S. A survey of network update in SDN. Front. Comput. Sci. 2017, 11, 4–12.
[CrossRef]
30. Jain, S.; Kumar, A.; Mandal, S.; Ong, J.; Poutievski, L.; Singh, A.; Venkata, S.; Wanderer, J.; Zhou, J.; Zhu, M.;
et al. B4: Experience with a globally-deployed software defined WAN. ACM SIGCOMM Comput. Commun. Rev.
2013, 43, 3–14. [CrossRef]
31. Alizadeh, M.; Atikoglu, B.; Kabbani, A.; Lakshmikantha, A.; Pan, R.; Prabhakar, B.; Seaman, M. Data center
transport mechanisms: Congestion control theory and IEEE standardization. In Proceedings of the 46th
Annual Allerton Conference on Communication, Control, and Computing, Urbana-Champaign, IL, USA,
23–26 September 2008; pp. 1270–1277.
32. Duan, Q.; Ansari, N.; Toy, M. Software-defined network virtualization: An architectural framework for
integrating SDN and NFV for service provisioning in future networks. IEEE Netw. 2016, 30, 10–16. [CrossRef]
33. Zhong, H.; Fang, Y.; Cui, J. Reprint of “LBBSRT: An efficient SDN load balancing scheme based on server
response time”. Futur. Gener. Comput. Syst. 2018, 80, 409–416. [CrossRef]
34. Shu, R.; Ren, F.; Zhang, J.; Zhang, T.; Lin, C. Analysing and improving convergence of quantized congestion
notification in Data Center Ethernet. Comput. Netw. 2018, 130, 51–64. [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).