
Future Internet

Article
Dynamic Traffic Scheduling and Congestion Control across Data Centers Based on SDN
Dong Sun 1, Kaixin Zhao 2, Yaming Fang 3 and Jie Cui 3,*

1 Experimental Management Center, Henan Institute of Technology, Xinxiang 453000, China; [email protected]
2 Department of Computer Science and Technology, Henan Institute of Technology, Xinxiang 453003, China; [email protected]
3 School of Computer Science and Technology, Anhui University, Hefei 230039, China; [email protected]
* Correspondence: [email protected]

Received: 20 June 2018; Accepted: 7 July 2018; Published: 9 July 2018

Abstract: Software-defined networking (SDN) and data center networks (DCN) are receiving considerable attention and eliciting widespread interest from both academia and industry. When the traditional shortest-path routing protocol is used among multiple data centers, congestion frequently occurs on the shortest-path link, which may severely reduce the quality of network services due to long delay and low throughput. The flexibility and agility of SDN can effectively ameliorate this problem. However, the utilization of link resources across data centers is still insufficient and has not yet been well addressed. In this paper, we focused on this issue and proposed an intelligent approach of real-time processing and dynamic scheduling that could make full use of the network resources. The traffic among the data centers could be classified into different types, and different strategies were proposed for these types of real-time traffic. Considering the prolonged occupation of the bandwidth by malicious flows, we employed the multilevel feedback queue mechanism and proposed an effective congestion control algorithm. Simulation experiments showed that our scheme is feasible and demonstrated a better traffic scheduling effect and a great improvement in bandwidth utilization across data centers.

Keywords: data center; dynamic scheduling; congestion control; OpenFlow; software-defined networking

1. Introduction
Big data has become one of the hottest topics in academia and industry. With the development of big data, the amount of data from different sources such as the Internet of Things, social networking websites, and scientific research is increasing at an exponential rate [1]. The scale of data centers has gradually expanded. Meanwhile, an increasing amount of data is being transmitted in data center networks, and traffic exchange among the servers in data centers has also been growing fast. The most direct results may be a low utilization ratio, congestion problems, service latency, and even DDoS attacks [2]. Data centers are always interconnected through wide area networks [3]. When traditional routing protocols are used in data center networks, flows are forced to preempt the shortest path to be routed and forwarded, which might leave the shortest-path link under full load while some new flows are still competing for it and other links are under low load. Without shunting flows, the link would easily be congested and unable to provide normal network services. The most direct but expensive solution is to remold and upgrade the network. However, on account of the scale, complexity, and heterogeneity of current computer networks, traditional approaches to configuring network devices, monitoring and optimizing network performance, identifying and solving network problems, and planning network growth have become nearly impossible and inefficient [4].

Future Internet 2018, 10, 64; doi:10.3390/fi10070064 www.mdpi.com/journal/futureinternet



Software-defined networking (SDN) is one of the most notable forms of computer networking. It is receiving considerable attention from academic researchers, industry researchers, network operators, and some large and medium-sized networking enterprises [5]. SDN is considered a promising modality to rearchitect our traditional network [6]. The core idea of SDN is to decouple the control plane from the data plane to achieve flexible network management, efficient network operation, and low-cost maintenance through software programming [7]. Specifically, infrastructure devices merely execute packet forwarding depending on the rules installed by the SDN controller [8]. In the control plane, the SDN controller can oversee the topology of the underlying network and provide an agile and efficient platform for the application plane to implement various network services. In this new network paradigm, innovative solutions for realizing specific and flexible services can be implemented quickly and efficiently in the form of software and deployed in real networks with real-time traffic. In addition, this paradigm allows for the logically centralized control and management of network devices in the data plane in accordance with a global and simultaneous network view and real-time network information. Compared with traditional networks, it is much easier to develop and deploy applications in SDN [9].
This paper concentrated on the problem of data center traffic management and attempted to avoid congestion to make full use of the bandwidth resources. We proposed a new solution based on SDN for traffic engineering in data center networks by developing two modules on top of an open source SDN controller called Floodlight. The traffic among the data centers can be classified into different types. Different strategies are adopted upon the real-time changes of the link states. The dynamic traffic scheduling can take full advantage of the network resources. Given the prolonged occupation of the bandwidth by malicious flows, we proposed an effective congestion control algorithm by adopting a multilevel feedback queue mechanism. Considering a large-scale IP network with multiple data centers, the basic components are shown in Figure 1a. With a traditional routing protocol, all flows from S1 and S2 to D would traverse or congest the path (A, B, C) without using (A, D, E, C). In contrast, the scheme DSCSD in this paper can route flows from S1 and S2 to D over different paths according to their types and the real-time link information (Figure 1b), and the congestion can also be well controlled.

[Figure 1 depicts sources S1 and S2, routers A–E, and destination D: in panel (a) all flows follow (A, B, C), while in panel (b) they are split across (A, B, C) and (A, D, E, C).]

Figure 1. Flows that preempt the shortest path (a) can be shunted into two different paths (b) with DSCSD.

The main contributions of this paper are as follows. First, we proposed a new approach for traffic scheduling that could route a newly arrived flow based on the real-time link information, and also dynamically schedule flows on the link. Better than traditional approaches and the SDN-based scheme with a threshold value, we could improve the utilization efficiency of the shortest-path links. Second, we innovatively adopted a multilevel feedback queue mechanism of congestion control suitable for different types of flows and could realize anomaly detection by preventing malicious flows from occupying the bandwidth for a long time.
The remainder of the paper is organized as follows: In Section 2, we review the traditional network and SDN used in data centers. Then, we elaborate on the design and implementation of our proposed scheme, DSCSD (Dynamic Scheduling and Congestion control across data centers based on SDN), in Section 3. The performance evaluation of DSCSD is presented in Section 4. Finally, Section 5 concludes the paper.

2. Related Work
This section reviews recent studies on traffic engineering in data centers, which serve as our research background and theoretical foundation. We discuss two aspects: data centers based on traditional networks, and data centers based on SDN together with its challenges. The first aspect explains the development of distributed data centers and the limitations of the traditional network protocols used in current data centers; these limitations motivate our proposed research. The other aspect surveys recent research progress on SDN-based data centers.

2.1. Data Center Based on Traditional Network


In the traditional data center model, the traffic is mainly generated between the server and the client, with a low proportion of east–west traffic among the data centers [10]. With the extensive use of the Internet, new generations of network technology such as the mobile network, the Internet of Vehicles [11], cloud computing, and big data have come into being to deal with massive-scale data, and large-scale distributed data centers have developed accordingly. This has brought about the accelerated growth of east–west traffic exchanged among servers, for example, in the Google File System (GFS) [12], the Hadoop Distributed File System (HDFS) [13], and the Google framework MapReduce [14]. However, when the traditional shortest-path routing protocols in current data centers are used, congestion frequently occurs on the shortest-path link, which may further reduce the quality of network services due to long delay and low throughput.
Traffic scheduling and congestion control are important technologies for maintaining network capacity and improving network efficiency. Traditional networks have some inherent defects, the main ones being as follows. First, there is no global coordinated optimization: each node independently implements its traffic control strategy, which can only achieve a local optimum. Moreover, there is no dynamic, self-adaptive adjustment: the predefined strategies in routers cannot meet the frequently changing demands of business flows. In addition, traditional networks find it difficult to achieve effective and accurate control of every network device: the configurations of network devices are numerous and diverse, and the commands are so complicated that it is very difficult to find network errors caused by configurations. Consequently, it is of great urgency to figure out how to effectively manage and dominate network traffic, which has pushed network architects to take advantage of SDN to address these problems in data centers.

2.2. Data Center Based on SDN


A higher level of visibility and fine-grained control of the entire network can be achieved in the SDN paradigm. The SDN controller is able to program infrastructural forwarding devices in the data plane to monitor and manage network packets passing through these devices in a fine-grained way. Therefore, we can use the SDN controller to implement the periodic collection of these statistics. Furthermore, we can also obtain a centralized view of the network status for the SDN applications via open APIs and notify the upper-level applications of real-time network changes [15].
Data centers are typically composed of thousands of high-speed links such as 10 Gb Ethernet. Conventional packet capture mechanisms such as the switch port analyzer and port mirroring are infeasible from a cost and scale perspective because a stupendous number of physical ports would be required. Adopting SDN-based approaches to data center networks has become the
focus of current research [1]. Tavakoli et al. [16] first applied SDN to data center networks by using
NOX (the first SDN controller) to efficiently actualize addressing and routing of the typical data center
networks VL2 and PortLand. Tootoonchian [17] proposed HyperFlow, a new distributed control plan
for OpenFlow. Multiple controllers cooperate with each other to make up for the low scalability of
a single controller while the advantage of centralized management is retained. The cooperation of distributed controllers can realize the expansion of the network and conveniently manage network devices [18–20]. Koponen et al. [21] presented Onix, on which the SDN control plane can be implemented as a distributed system. Benson et al. [22] proposed MicroTE, a fine-grained routing approach for data centers, addressing the lack of multi-path routing and of an overall view of workloads and routing strategies in data center networks based on a traditional network. Hindman
et al. [23] presented Mesos, a platform for fine-grained resource sharing in data centers. Curtis [24] proposed Mahout, which handles elephant flows (i.e., large flows) by detecting them in socket buffers. Due to traffic bursts and equipment failures, congestion and failures frequently occur, and large flows and small flows are always mingled in data center networks.
Kanagavelu et al. [25] proposed a flow-based edge-to-edge rerouting scheme that, when congestion appeared, focused on rerouting the large flows to alternative links. The rationale for this mechanism is that shifting short-lived small flows among links would additionally increase the overhead and
latency. Khurshid et al. [26] provided a layer between a software-defined networking controller and
network devices called VeriFlow. It is a network debugging tool to find faulty rules issued by SDN
applications and anomalous network behavior. Tso [27] presented the Baatdaat flow scheduling
algorithm using spare data center network capacity to mitigate the performance degradation of heavily
utilized links. This could ensure real-time dynamic scheduling, which could avoid congestion caused
by instantaneous flows. Li et al. [28] proposed a traffic scheduling solution based on the fuzzy synthetic
evaluation mechanism, and the path could be dynamically adjusted according to the overall view of
the network.
In summary, most current approaches to traffic engineering in data center networks based on SDN follow two alternatives. The first common approach is based on a single centralized controller, which has problems of scalability and reliability. The other method is based on multiple controllers, but the cooperation of multiple controllers and the consistent update mechanism have not been well addressed thus far [29]. Our proposed scheme is an attempt that first applies the
multilevel feedback queue mechanism to traffic scheduling and congestion control across data centers
based on SDN, and can also provide a new solution to congestion caused by the prolonged occupation
of malicious flows.

3. The Design and Implementation of DSCSD


Usually, the links among data centers are highly likely to be reused, and bursts of instantaneous traffic lead to the instability and dynamicity of the network. Traffic scheduling and congestion control among multiple data centers can be well addressed with DSCSD by taking advantage of the flexibility and agility of SDN. Data centers are always interconnected through wide area networks. The network traffic is not constant at all times and has an unbalanced distribution: the traffic during peak hours can reach twice the average link load. However, the traditional routing protocols route and forward flows according to the shortest-path algorithm, without shunting flows away to balance the link load. To satisfy the requirement of bandwidth during peak hours, we have no choice but to purchase 2–3× bandwidth as well as upgrade to large-scale routing equipment. As a result, the cost of network operation, administration, and maintenance increases sharply, and bandwidth is wasted in ordinary times. Even when the requirements of network services are met, the average link utilization rate of wide area networks only reaches 30–40% [30]. Moreover, if some malicious flows prolong their occupation of the links, the resulting congestion might interrupt the transmission of other services [31–34]. In this paper, we propose DSCSD, a scheme of dynamic traffic scheduling and congestion control with a multilevel feedback queue mechanism that can address the above-mentioned problems to some extent. The following sections explain the design and its implementation process in detail.

3.1. System Model

Our scheme was designed and innovated on the basis of data center networking. In the following presentation, we briefly describe the components of the system model in Figure 2. (i) Data center servers. PC1, PC2, and PC3 are three individual data centers, and the flows from PC1 to PC3 take priority over the flows from PC2 to PC3. (ii) OpenFlow switches. With the five switches, two distinct paths are formed; the path (S1, S4, S5) is the shortest path, leaving (S1, S2, S3, S5) the second shortest. (iii) SDN controller. We selected the open source OpenFlow controller Floodlight as the centralized controller; it can be used to control the behavior of the OpenFlow switches by adding, updating, and deleting flow table entries in the switches.

[Figure 2 depicts the SDN controller managing five OpenFlow switches S1–S5: high priority traffic from PC1 and low priority traffic from PC2 reach PC3 either via (S1, S4, S5) or via (S1, S2, S3, S5).]

Figure 2. System model.
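As a sanity check on this model, the two paths between S1 and S5 can be enumerated from the Figure 2 topology with a plain breadth-first search. This is a sketch under the stated adjacency only; hosts, link capacities, and all OpenFlow details are omitted.

```python
from collections import deque

# Figure 2 topology as an adjacency list (switches only; hosts omitted).
TOPO = {
    "S1": ["S2", "S4"],
    "S2": ["S1", "S3"],
    "S3": ["S2", "S5"],
    "S4": ["S1", "S5"],
    "S5": ["S3", "S4"],
}

def all_simple_paths(topo, src, dst):
    """Enumerate loop-free paths from src to dst via BFS, shortest first."""
    paths, queue = [], deque([[src]])
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            paths.append(path)
            continue
        for nxt in topo[path[-1]]:
            if nxt not in path:
                queue.append(path + [nxt])
    return sorted(paths, key=len)

paths = all_simple_paths(TOPO, "S1", "S5")
print(paths[0])  # shortest path: ['S1', 'S4', 'S5']
print(paths[1])  # second shortest: ['S1', 'S2', 'S3', 'S5']
```

Running this confirms that exactly two loop-free paths exist, matching the shortest and second shortest paths used throughout Sections 3.2 and 4.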

3.2. Dynamic Traffic Scheduling

We classified the flows among multiple data centers into different types. For data duplication flows, we set a lower priority, while the other flows with high quality requirements had a higher priority. The higher priority flows traverse the shortest path, and the lower priority flows can select a path depending on the real-time link states. The concrete link states on the shortest path are categorized into the following four cases.

State 1: A low priority flow arrives. In this state, there are several possibilities. First, if bandwidth remains on the shortest path (S1, S4, S5), then the flow directly selects this path. If the shortest path (S1, S4, S5) is fully occupied by high priority flows, then it has no choice but to be transmitted on the second shortest path (S1, S2, S3, S5). The last possibility is that this flow goes into the congestion control with multilevel feedback queues, given that no path has available bandwidth.

State 2: A high priority flow arrives. The possibilities are the same as in State 1: the flow selects the shortest path (S1, S4, S5) if bandwidth remains, is transmitted on the second shortest path (S1, S2, S3, S5) if the shortest path is fully occupied by high priority flows, and goes into the congestion control with multilevel feedback queues if no path has available bandwidth.

State 3: A high priority flow is transmitting on the link. Due to its high priority, we do nothing with it.

State 4: A low priority flow is transmitting on the link. From State 1, we can learn that a low priority flow can select either of the two paths, adapting to data fluctuation. Therefore, if a new flow with high priority arrives when there is no available bandwidth on the shortest-path link, the low priority flow will vacate the shortest-path link and be scheduled to the second shortest-path link. Otherwise, if the new flow has an equally low priority and no subsequent low priority flow needs transmitting, the shortest path is allotted to the newly arrived flow.

According to the aforementioned analysis of the link states, we can realize dynamic traffic scheduling to improve the utilization of bandwidth resources.
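The four states amount to a simple decision procedure for a newly arrived flow. The sketch below illustrates that procedure under an assumed per-path capacity; the helper names (`has_bandwidth`, `route_new_flow`) and the per-path load bookkeeping are our own simplifications for illustration, not the paper's Floodlight implementation.

```python
# Sketch of the path-selection logic from States 1-4. The capacity value
# and helper names are illustrative assumptions, not DSCSD's actual code.

SHORTEST = ("S1", "S4", "S5")
SECOND = ("S1", "S2", "S3", "S5")

def has_bandwidth(link_load, path, demand, capacity=4):
    # True if the path can still carry `demand` units of traffic.
    return link_load.get(path, 0) + demand <= capacity

def route_new_flow(flow, link_load):
    """States 1 and 2: a new flow prefers the shortest path, then the
    second shortest; otherwise it waits in the feedback queues."""
    for path in (SHORTEST, SECOND):
        if has_bandwidth(link_load, path, flow["demand"]):
            link_load[path] = link_load.get(path, 0) + flow["demand"]
            return path
    if flow["priority"] == "high":
        # State 4: a high priority arrival evicts low priority flows
        # from the shortest path onto the second shortest path.
        return "evict-low-priority-then-use-shortest"
    return "enqueue-in-multilevel-feedback-queue"

load = {}
print(route_new_flow({"priority": "low", "demand": 2}, load))   # shortest
print(route_new_flow({"priority": "high", "demand": 3}, load))  # second
```

With both paths saturated, a further high priority arrival triggers the State 4 eviction branch, while a low priority arrival is suspended into the multilevel feedback queues of Section 3.3.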

3.3. Congestion Control with Multilevel Feedback Queues

As part of DSCSD, dynamic traffic scheduling has an obvious effect on the classification and diversion of flows. However, when malicious flows exist, congestion may still occur on the shortest-path link due to the long-term occupation of limited link resources and bursts of instantaneous network traffic. The algorithm of congestion control with multilevel feedback queues not only provides a queuing service for congested flows, but also settles the problem of vicious prolonged occupation. The specific description is as follows.
In the initial phase of the scheme, we define two multilevel feedback queues to store the flows waiting to be scheduled: one is the multilevel feedback queue of low priority and the other is the multilevel feedback queue of high priority. Then, in each feedback queue, we define three sub-queues, giving the highest priority to flow waiting queue 1, the second highest priority to flow waiting queue 2, and the lowest priority to flow waiting queue 3. As shown in Figure 3, the transmission times of these flow waiting queues after being scheduled are different: sub-queue 1 has time t, sub-queue 2 has time 2t, twice that of sub-queue 1, and sub-queue 3 has time 3t, triple that of sub-queue 1.
For the pseudocode of the algorithm, see Algorithm 1.

Algorithm 1: The Algorithm of Multilevel Feedback Queue

Input: G: topology of the data center network;
       F_active: set of active flows;
       F_suspend: set of suspended flows;
       F_new: a new flow;
       F_first: the flow at the head of F_suspend.
Output: {<e.state, e.path>}: scheduling state and path selection of each flow in G.

When a new flow arrives:
1   if (e.path = IDLEPATH) then
2       F_active ← F_active + F_new;
3   end
4   else F_suspend ← F_suspend + F_new;
5   while (F_suspend ≠ ∅) do
6       if (e.path = IDLEPATH) then
7           if (F_suspend1 ≠ ∅) then
8               Select the flow at the head of F_suspend1;
9               F_suspend1 ← F_suspend1 − F_first;
10              F_active ← F_active + F_first;
11              Transmission time of F_first is t;
12          end
13          else if (F_suspend2 ≠ ∅) then
14              Select the flow at the head of F_suspend2;
15              F_suspend2 ← F_suspend2 − F_first;
16              F_active ← F_active + F_first;
17              Transmission time of F_first is 2t;
18          end
19          else
20              Select the flow at the head of F_suspend3;
21              F_suspend3 ← F_suspend3 − F_first;
22              F_active ← F_active + F_first;
23              Transmission time of F_first is 3t;
24          end
25  return {<e.state, e.path>};

[Figure 3 depicts the three flow waiting queues feeding link transmission with time slices t, 2t, and 3t, respectively.]

Figure 3. Multilevel feedback queue.
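The queues of Algorithm 1 and Figure 3 can be mirrored with ordinary deques, as sketched below. Note that Algorithm 1 specifies only the dequeue order and the time slices t, 2t, 3t; the `demote` step (moving an unfinished flow down one level) is our assumption about how flows reach the lower sub-queues, not something the listing states.

```python
from collections import deque

class MultilevelFeedbackQueue:
    """Sketch of Algorithm 1's waiting queues: three sub-queues with
    time slices t, 2t, and 3t. Demotion of a flow whose slice expires
    is an assumed behavior; Algorithm 1 only fixes the dequeue order."""

    def __init__(self, t=1.0):
        self.queues = [deque(), deque(), deque()]  # sub-queues 1..3
        self.slices = [t, 2 * t, 3 * t]

    def suspend(self, flow, level=0):
        # Park a flow that found no idle path (F_suspend in Algorithm 1).
        self.queues[level].append(flow)

    def schedule(self):
        """When an idle path appears, pick the head of the highest
        non-empty sub-queue; returns (flow, allowed time, level)."""
        for level, q in enumerate(self.queues):
            if q:
                return q.popleft(), self.slices[level], level
        return None

    def demote(self, flow, level):
        # Assumed: an unfinished flow drops one priority level, so a
        # malicious flow cannot monopolize the link indefinitely.
        self.suspend(flow, min(level + 1, 2))

mfq = MultilevelFeedbackQueue(t=1.0)
mfq.suspend("f1")
mfq.suspend("f2", level=1)
flow, slice_, level = mfq.schedule()
print(flow, slice_)        # f1 is served first, for time t
mfq.demote(flow, level)
print(mfq.schedule()[0])   # f2: it was already waiting at level 1
```

In DSCSD two such structures would exist, one per priority class, with the high priority instance always drained first.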

4.
4. Experiment
Experiment Results
Results and
and Performance
Performance Analysis
Analysis
The
The SDNSDN controller
controller selected
selected in in our
our scheme
scheme is is aa free
free open
open source
source Floodlight
Floodlight thatthat runsruns in in the
the
eclipse environment on the Ubuntu system. The virtual switch uses
eclipse environment on the Ubuntu system. The virtual switch uses open source Open vSwitch 2.3.0. open source Open vSwitch 2.3.0.
The
The virtual network was
virtual network was created
created byby Mininet
Mininet in in Ubuntu
Ubuntu 14.04.
14.04.
We implemented DSCSD on top of the Floodlight v1.2 controller in our simulation experiments. Floodlight is free and open source, and its modules can be added and deleted at will, which made it convenient for our tests. In our experimental setup, the virtual switches were created by Open vSwitch, and Floodlight was chosen as the SDN controller. We added two modules, named initflowtable and trafficmanage, to Floodlight. The module initflowtable was used to initialize the variables and install some preprocessing flow tables. The main function of our scheme was implemented in the module trafficmanage, which monitors the Packet-in and Flow-removed messages of the OpenFlow protocol and records the installed flow tables as well as link information, including congestion and bandwidth. When a new flow arrives or a flow needs to be scheduled, the module trafficmanage can achieve accurate, real-time detection and control.
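As a rough illustration of the bookkeeping described above, the sketch below models in plain Python the state a module like trafficmanage would maintain. All class and method names here are hypothetical; this is not the actual Floodlight (Java) module code, only a minimal model of its record-keeping:

```python
class LinkState:
    """Per-link record kept by the hypothetical traffic manager."""
    def __init__(self, capacity_bps):
        self.capacity_bps = capacity_bps
        self.reserved_bps = 0  # bandwidth claimed by installed flows

    @property
    def free_bps(self):
        return self.capacity_bps - self.reserved_bps


class TrafficManager:
    """Mirrors what trafficmanage tracks: installed flows plus per-link
    bandwidth, updated on Packet-in and Flow-removed events."""
    def __init__(self):
        self.links = {}   # (src_switch, dst_switch) -> LinkState
        self.flows = {}   # flow_id -> (path, demand_bps)

    def add_link(self, a, b, capacity_bps):
        self.links[(a, b)] = LinkState(capacity_bps)

    def on_packet_in(self, flow_id, path, demand_bps):
        # A new flow was observed: record it and reserve bandwidth
        # on every hop of its path.
        self.flows[flow_id] = (path, demand_bps)
        for hop in zip(path, path[1:]):
            self.links[hop].reserved_bps += demand_bps

    def on_flow_removed(self, flow_id):
        # The flow expired: release its reservations.
        path, demand = self.flows.pop(flow_id)
        for hop in zip(path, path[1:]):
            self.links[hop].reserved_bps -= demand
```

With this state in hand, a path-selection or rescheduling decision reduces to comparing a flow's demand against `free_bps` along each candidate path.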
As shown in Figure 4, we used several virtual hosts to simulate three data centers: h1 and h2 were the servers of two individual data centers, while c1, c2, c3, c4, and c5 were collectively used as another data center. Accordingly, we assigned virtual IP addresses and MAC addresses to them. We also used the flows from either of h1 and h2 to any one of c1, c2, c3, c4, and c5 to simulate the interconnected flows among the data centers. In accordance with the system model in Figure 2, the path (S1, S4, S5) was the shortest path, with (S1, S2, S3, S4) the second shortest. Here, we conducted two tests to verify the effectiveness and performance of DSCSD.
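To make the two paths concrete, the following sketch computes hop-count shortest paths over the switch graph. The link set is our reading of Figure 4 (S1-S2, S2-S3, S3-S4, S1-S4, S4-S5) and is an assumption for illustration, not code from the paper:

```python
from collections import deque

# Switch-level links assumed from Figure 4: the direct route S1-S4-S5
# and the detour S1-S2-S3-S4; h1/h2 hang off S1 and c1..c5 off S5.
LINKS = [("S1", "S2"), ("S2", "S3"), ("S3", "S4"),
         ("S1", "S4"), ("S4", "S5")]


def adjacency(edges):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj


def shortest_path(edges, src, dst):
    """Plain BFS: every link costs one hop, as in shortest-path routing."""
    adj = adjacency(edges)
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None
```

Running `shortest_path(LINKS, "S1", "S5")` yields the direct route through S4; removing the S1-S4 link forces the second shortest route via S2 and S3, which is exactly the alternative DSCSD can fall back to when the direct route is loaded.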
Test 1: Verify the effectiveness of DSCSD. We used Mininet to set a 4 M-bandwidth for both the shortest path and the second shortest path, and we utilized the tool iperf to simulate four flows: from h1 to c1 and c2, and from h2 to c3 and c4. Each flow was generated with a 2 M-bandwidth requirement. We discussed and analyzed the performance of the traditional network and of DSCSD. Then, we chose a time span under the selfsame conditions to test the packet loss and real-time bandwidth of these flows. Some of these data are presented in Table 1. Here, we adopted four symbolic notations: "h1–c1" means the traffic from h1 to c1; by analogy, "h1–c2" means the traffic from h1 to c2, "h2–c3" the traffic from h2 to c3, and "h2–c4" the traffic from h2 to c4. We also calculated the overall link utilization of both cases, as shown in Figure 5.
In Test 1, the packet loss of the four flows in the traditional network was high, with the loss of two of them close to 100%. As for the overall link utilization, the average link utilization of our scheme was kept at 97%, while that of the traditional network was just 48%. This comparison verified the large improvement in link utilization achieved by using DSCSD.

Future Internet 2018, 10, x FOR PEER REVIEW 8 of 12
Figure 4. Experimental topology.
Table 1. Real-time bandwidth comparison.

bps     |       Traditional Network         |              DSCSD
Time    | h1–c1   h1–c2   h2–c3   h2–c4     | h1–c1   h1–c2   h2–c3   h2–c4
0–1 s   | 106 k   1.87 M  1.91 M  25.3 k    | 1.95 M  1.94 M  1.95 M  1.94 M
1–2 s   | 11.8 k  1.86 M  1.81 M  70.6 k    | 1.94 M  1.95 M  1.94 M  1.94 M
2–3 s   | 11.8 k  1.89 M  1.89 M  35.3 k    | 1.94 M  1.94 M  1.94 M  1.95 M
3–4 s   | 153 k   1.92 M  1.80 M  11.8 k    | 1.95 M  1.94 M  1.94 M  1.94 M
4–5 s   | 35.3 k  1.75 M  1.89 M  176 k     | 1.94 M  1.94 M  1.95 M  1.94 M
5–6 s   | 82.3 k  1.94 M  1.89 M  82.3 k    | 1.82 M  1.94 M  1.94 M  1.94 M
6–7 s   | 35.3 k  1.95 M  1.88 M  23.5 k    | 1.94 M  1.94 M  1.87 M  1.95 M
7–8 s   | 23.5 k  1.94 M  1.88 M  58.8 k    | 1.95 M  1.94 M  1.93 M  1.94 M
8–9 s   | 11.8 k  1.89 M  1.92 M  23.5 k    | 1.95 M  1.95 M  1.94 M  1.94 M
9–10 s  | 23.5 k  1.91 M  1.90 M  11.8 k    | 1.94 M  1.95 M  1.94 M  1.93 M

Figure 5. Overall link utilization comparison.
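The roughly 48% versus 97% averages in Figure 5 can be sanity-checked from any steady-state row of Table 1. The snippet below uses the 8–9 s row; the two 4 Mbps paths together give 8 Mbps of capacity:

```python
# Sanity check of the Figure 5 averages from the 8-9 s row of Table 1
# (values in bits per second). Two 4 Mbps paths give 8 Mbps of capacity.
CAPACITY = 2 * 4_000_000

# Traditional shortest-path routing: all four 2 Mbps flows contend for
# the single 4 Mbps shortest path, so only about two of them get through.
traditional = [11_800, 1_890_000, 1_920_000, 23_500]

# DSCSD spreads the flows over both paths, so all four run near 2 Mbps.
dscsd = [1_950_000, 1_950_000, 1_940_000, 1_940_000]

util_traditional = sum(traditional) / CAPACITY   # roughly 0.48
util_dscsd = sum(dscsd) / CAPACITY               # roughly 0.97
print(f"traditional: {util_traditional:.0%}, DSCSD: {util_dscsd:.0%}")
```

The same arithmetic on the other rows lands within a percentage point of the averages reported for Figure 5.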

Test 2: Congestion control with multilevel feedback queues. In this test, we still used the simulation setup of Test 1. The difference was that we were only interested in the shortest path (S1, S4, S5), on which we set a 5 M-bandwidth. The time t was 20 s. Then, we let c5 send UDP flows to h1 with a 3 M-bandwidth consistently, while c1, c2, c3, and c4 sent UDP flows to h2 with a 1 M-bandwidth each. Here, we divided Test 2 into two cases: real-time bandwidth without congestion control queues, and real-time bandwidth with congestion control queues. We then discussed and analyzed the effectiveness of congestion control in these two cases. The real-time bandwidths in the two cases are shown in Figures 6 and 7, respectively.
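The arithmetic behind the congestion event can be checked directly; the capacities and demands below are taken from the Test 2 setup:

```python
# Bandwidth bookkeeping for Test 2 on the 5 Mbps shortest path (S1, S4, S5).
CAPACITY = 5_000_000

# Demands in bps: c5 sends a constant 3 Mbps UDP flow; c1..c4 send 1 Mbps each.
demands = {"c5": 3_000_000, "c1": 1_000_000, "c2": 1_000_000,
           "c3": 1_000_000, "c4": 1_000_000}

# By 10 s the path already carries the flows from c5, c1, and c2 ...
at_10s = demands["c5"] + demands["c1"] + demands["c2"]
assert at_10s == CAPACITY  # saturated, as observed in the test

# ... so when c3 and c4 arrive at 20 s, the path is oversubscribed by 2 Mbps
# unless some flow (here, the long-running c5 flow) is pushed into the
# congestion control queues.
shortfall = at_10s + demands["c3"] + demands["c4"] - CAPACITY
print(f"oversubscription at 20 s: {shortfall / 1e6:.0f} Mbps")
```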
in Figures 6 and 7.

Figure 6. Real-time bandwidth without congestion control queues.

Figure 7. Real-time bandwidth with congestion control queues.

At 10 s, the path (S1, S4, S5) had been saturated with the flows from c5, c1, and c2. At 20 s, we received the requests of the normal flows from c3 and c4. At this time, if we do not use congestion control queues (case 1), the high-priority flow from c5 directly affects the transmission quality of the other flows, which can be observed in Figure 6. In contrast, if we used congestion control queues (case 2), then upon receiving the requests of the normal flows from c3 and c4, the flow from c5 was scheduled into the congestion control queues. Depending on the bandwidth remaining on the path, the flow from c5 would then be rescheduled onto the path to be transmitted; this comparison can be observed in Figure 7. With this mechanism, we can provide a new solution to control the congestion caused by the prolonged occupation of bandwidth by malicious flows.

As shown in Figure 8, we compared the link delay in the two aforementioned cases. We can easily draw the conclusion that, by using the congestion control queues, the congestion caused by malicious flows can be well addressed, with a relative reduction in link delay.

Figure 8. Comparison of the link delay.
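The scheduling decision described above can be illustrated with a toy multilevel-feedback-queue model. The class, field names, and level allowances below are invented for illustration; this is a simplified sketch of the idea, not the DSCSD implementation:

```python
from collections import deque


class CongestionQueues:
    """Toy multilevel feedback queue for flows: a flow that keeps occupying
    the path longer than its level allows is demoted one level, and a queued
    flow is readmitted only while spare path bandwidth remains."""

    def __init__(self, allowances_s=(5, 10, 20)):
        # allowances_s: occupancy (seconds) a flow may hold before demotion;
        # these values are arbitrary for the sketch.
        self.allowances_s = allowances_s
        self.levels = [deque() for _ in allowances_s]

    def demote(self, flow, level):
        """Push a flow that exceeded its allowance one level down."""
        nxt = min(level + 1, len(self.levels) - 1)
        self.levels[nxt].append(flow)
        return nxt

    def readmit(self, free_bps):
        """Return flows that fit into the spare bandwidth, highest level first."""
        admitted = []
        for queue in self.levels:
            while queue and queue[0]["demand_bps"] <= free_bps:
                flow = queue.popleft()
                free_bps -= flow["demand_bps"]
                admitted.append(flow["name"])
        return admitted
```

In Test 2 terms: the 3 Mbps flow from c5 is demoted after hogging the saturated path, stays queued while only the 2 Mbps freed by admission control is spare, and is rescheduled once at least 3 Mbps of path bandwidth is available again.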

5. Conclusions and Future Work


In this paper, we focused on the problem of traffic scheduling and congestion control across data centers and aimed to provide an approach that could greatly improve link utilization. To realize this goal, we designed DSCSD, a dynamic traffic scheduling and congestion control scheme across data centers based on SDN. The moment a flow arrives, DSCSD combines the traffic parameters with the link information to select a path. Furthermore, it can perform real-time dynamic scheduling to avoid congestion caused by bursts of instantaneous traffic, and it can also balance the link loads. The experiments and analysis showed that, compared with traditional approaches, DSCSD has an obvious effect on the classification and diversion of flows, thereby improving link utilization across data centers. Compared with the SDN-based scheme using a threshold value, it also fully exploits real-time monitoring and dynamic scheduling of the shortest paths. Meanwhile, we innovatively adopted the mechanism of a multilevel feedback queue for congestion control, which is suitable for different types of flows and can implement anomaly detection by preventing malicious flows from chronically occupying the bandwidth.
Our proposed DSCSD scheme can be easily deployed in existing data center networks that suffer from low utilization of link resources, such as the data centers of live video streaming and online video platforms. Although our scheme solved the traffic scheduling problem efficiently,
there are still some limitations. First, our scheme is based on SDN, but DSCSD is not applicable in
traditional network environments or a hybrid network environment. Second, more fine-grained and
flexible hierarchical control might be helpful to further enhance the experimental result. In addition,
we did not take into account the issue of energy saving in the traffic scheduling and congestion control
of data center networks. Accordingly, we would like to enrich DSCSD with energy management in
the future.

Author Contributions: D.S. conceived and designed the system model and algorithm; K.Z. was responsible for
literature retrieval and chart making; Y.F. performed the experiments, analyzed the data, and wrote the paper;
J.C. designed the research plan, conceived the algorithm model, and polished the paper.
Funding: This research was funded by the [National Natural Science Foundation of China] grant number
[61502008], the [Key Scientific Research Project of Henan Higher Education] grant number [16A520084],
the [Natural Science Foundation of Anhui Province] grant number [1508085QF132] and [the Doctoral Research
Start-Up Funds Project of Anhui University].
Conflicts of Interest: The authors declare no conflict of interest.


© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).