


Impact of Virtualization on Cloud Computing Energy Consumption: Empirical Study

Saleh Atiewi, Department of Computer Science, Al Hussein Bin Talal University, Ma'an, Jordan ([email protected])
Abdullah Abuhussein, Department of Information Systems, St. Cloud State University, St. Cloud, MN ([email protected])
Mohammad Abu Saleh, Department of Computer Science, Al Hussein Bin Talal University, Ma'an, Jordan ([email protected])

Conference paper, CPSIOT'18, September 21-23, 2018, Stockholm, Sweden. DOI: 10.1145/3284557.3284738

ABSTRACT
Global warming, which is currently one of the greatest environmental challenges, is caused by carbon emissions. A report from the Energy Information Administration indicates that approximately 98% of CO2 emissions can be attributed to energy consumption. The trade-off between efficient and ecologically sound operation represents a major challenge faced by many organizations at present. In addition, numerous companies are currently compelled to pay a carbon tax for the resources they use and the environmental impact of their products and services. Therefore, a system that reduces energy consumption can generate actual financial payback. Green information technology involves various approaches, including power management, recycling, telecommunications, and virtualization. This paper focuses on comparing and evaluating techniques used for reducing energy consumption in virtualized environments. We first highlight the impact of virtualization techniques on minimizing energy consumption in cloud computing. Then we present an experimental comparative study between two common energy-efficient task scheduling algorithms in cloud computing (i.e., the green scheduler and the power saver scheduler). These algorithms are discussed briefly and analyzed. The three metrics used to evaluate the task scheduling algorithms are (1) total power consumption, (2) data center load, and (3) virtual machine load. This work aims to gauge and subsequently improve energy consumption efficiency in virtualized environments.

CCS Concepts
C.4 [Computer Systems Organization]: Performance of Systems

Keywords
Cloud Computing; Virtualization; Energy; Green Cloud; Green Computing; Simulation; Cloud Economics.

1. INTRODUCTION
Organizations are currently focused on attaining an enduring information and communications technology (ICT) strategy for their business processes. The major motivation for this is to decrease their carbon footprint and environmental impact, along with reducing their operational costs. In this context, cloud computing offers a useful means to achieve these goals. Cloud computing is a promising technology that is becoming increasingly prevalent because it facilitates access to computing resources, such as programs, storage, expert services, video games, films, and music, whenever necessary. These resources are provided such that cloud clients do not have to be aware of how or from where they are obtaining these materials. Instead, clients only need to be concerned with acquiring broadband connectivity to the cloud.

Data centers possess powerful computing and storage capabilities. Important domains, such as particle physics, scientific computing and simulation, Earth observation, and oil prospecting, are supported by data centers. Hundreds to thousands of densely packed blade servers are utilized by data centers to maximize management efficiency and space utilization. The energy consumed by data centers increases remarkably as the quantity and scale of servers grow; the amount of such energy is directly related to the number of hosted servers and their respective workloads [1].

Numerous scholars have devoted their efforts to improving energy efficiency in cloud environments. In such environments, simple techniques offer basic energy management for servers. These techniques include placing servers in sleep mode, turning servers on and off, and adopting dynamic voltage/frequency scaling (DVFS) to adjust the power states of servers. CPU power (and thus, performance level) is regulated by DVFS according to the workload. However, DVFS optimization is limited to CPUs. Using virtualization techniques that can improve resource isolation and decrease infrastructure energy consumption via resource consolidation and live migration is another approach to enhance energy efficiency [2]. A number of energy-aware scheduling algorithms and resource allocation policies have also been developed to optimize the total energy consumption in cloud environments through virtualization methods [3]. Nevertheless, the different system resource configurations, allocation strategies, workloads, and types of tasks running in the cloud cause the energy consumption and system performance of data centers to vary considerably [4].

In this paper we explore the effect of virtualization techniques on improving energy consumption and comparatively assess the efficiency (i.e., in terms of energy saving) of task scheduling algorithms (i.e., the green scheduler [5] and the power saver scheduler [6]) using the following metrics: (1) total power consumption, (2) data center load, and (3) virtual machine load. Section 2 briefly reviews virtualization with a focus on server virtualization. Sections 3 and 4 describe the experiment system, simulator components, and workflow. Section 5 defines the data center model used to conduct the experiment. Section 6 presents the criteria used for the evaluation. We demonstrate the experiment results and conclude in Sections 7 and 8, respectively.

2. VIRTUALIZATION
Most available computer hardware is designed and architected to host a single operating system (OS) and application. The primary solution to this problem is virtualization. The term "virtualization"
has been used since the 1960s to refer to mainframes. At that time, virtualization was a logical method for allocating mainframe resources to different applications. Since then, the meaning of the term has evolved. At present, virtualization refers to the act of creating a virtual version of something, including (but not limited to) a virtual computer hardware platform, OS, storage device, or network resources. That is, virtualization is the ability of a system to host multiple virtual computers while running on a single hardware platform.

As an essential aspect of cloud computing, virtualization provides stability in the aspects of cost and energy efficiency. By its definition, virtualization substantially minimizes the number of working computers by replicating them within a single physical computer through software execution, thereby reducing carbon emissions and energy costs [7].

Virtualization is one of the most efficient methods for achieving energy efficiency. It is applicable to conventional and cloud data centers. In the former, virtualization is applied depending on the extant policy and the need to utilize such a method. In the latter, virtualization plays an important role in energy efficiency and is thus highly recommended. Every component of information technology (IT), including servers, desktops, applications, management probes, input/output, local area networks, switches and routers, storage systems, wide area network optimization controllers, application delivery controllers, and firewalls, can be virtualized. The five main forms of virtualization involve the server, desktop, appliances, storage, and network. Considering the association among these forms, network virtualization has been selected as the focal topic because it is the most significant among these forms [8]. Virtualization is a technique for running multiple independent virtual OSs on a single physical computer [9], thereby maximizing the return on investment for a computer. The term was coined in the 1960s in reference to a virtual machine (VM; occasionally called a pseudo-machine). The creation and management of VMs is frequently referred to as platform virtualization. Platform virtualization is performed on a computer (hardware platform) using software called a control program. This program creates a simulated environment or a virtual computer. The virtual computer enables the device to use hosted software specific to the virtual environment (occasionally called guest software).

Virtualization manages workload by making traditional computing highly scalable, efficient, and economical. A wide range of system layers, such as OSs, hardware, and servers, can benefit from the application of virtualization [10].

Virtualization technology offers numerous advantages in cloud computing environments [10], including the following:

• Server consolidation: Through this concept, ten server applications that previously required as many physical computers can be run on a single machine, thereby providing unique OS and technical specification environments for operating various applications.

• Energy consumption: With server virtualization, the server can support multiple VMs and will probably have more memory, CPUs, and other hardware that will require minimal or no additional power and will occupy the same physical space, thereby reducing utility costs and power consumption.

• Redundancy: This concept essentially refers to the repetition of data that is mainly encountered when systems do not share a common storage and different memory storage units are created. Given the sizeable number of data centers, fault tolerance is extremely high, which decreases redundancy.

2.1 Server Virtualization
A virtual server enables merging such that numerous VMs run by sharing the same physical server rather than each machine having its own server, thereby reducing cost. The decrease in expenditure occurs in the aspects of hardware, administration for site infrastructure amenities, and space. The provision of VMs immediately addresses the needs of clients for additional resources, while VM migration ensures the accessibility of services [8].

Server virtualization is currently under constant scrutiny from the media and major organizations as a contributor to green IT; this concept was first introduced by the IBM Corporation in the 1960s as a method for the simultaneous timesharing of mainframe computers [11]. Server virtualization was further developed to incorporate a hardware abstraction layer known as a virtual machine monitor (VMM) that enables interaction between the hardware and software layers [12]. However, [13] indicated that the concept was only transformed from being strictly applied to mainframes to being used with industry-standard x86 hardware in 1999, when virtualization was adopted by VMware. Consequently, a standard x86 server obtained the capability of being partitioned into several VMs that use virtualized components. This characteristic allowed for independent and concurrent processing of different OSs and software applications. Although [14] claimed that the ability to run multiple VMs on a single server could reduce hardware costs and IT department overhead, [15] argued that this feature potentially creates a single point of failure because these VMs solely depend on the physical server to function properly. In [16], VMMs were classified into type I hypervisors, which run directly on the hardware, and type II hypervisors, which run on top of a host OS.

3. SYSTEM MODEL
The system model is developed based on [17, 18, 19]. Figure 1 depicts this model, which mainly consists of the user, task, VM manager, scheduling algorithm, servers, and energy meter.

1. User: represents the cloud user who sends tasks to the cloud computing data center (DC).

Figure 1. System Model

2. Task: refers to the task sent by cloud users to the cloud computing DC. Each task has the following elements: size, maximum completion time, and ID number.
3. VM manager: handles the received tasks after accepting the VM status and decision from the power saver scheduling algorithm (PSSA).
4. PSSA: schedules the tasks of cloud users according to the information sent from the VM manager.
5. VM and servers: execute the tasks of users and resend them.
6. Energy meter: computes the energy consumed in the DC. A minimal sketch of these components is given after this list.
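To make the interaction between these components concrete, the following is a minimal Python sketch. All class and attribute names are illustrative assumptions (the study's simulator is GreenCloud, not this code), and the placement rule is only a plausible stand-in for the PSSA of [6]:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Task:
    # Elements named in the system model: ID, size, maximum completion time.
    task_id: int
    size_mi: float        # task size in million instructions (illustrative unit)
    deadline_s: float     # maximum completion time, e.g., the 20 s of Table 1
    arrival_s: float = 0.0

@dataclass
class VM:
    vm_id: int
    mips: float                  # processing rate of the VM
    backlog_mi: float = 0.0      # work already queued on this VM

    def finish_time(self, task: "Task") -> float:
        # Time this task would need if queued on the VM right now.
        return (self.backlog_mi + task.size_mi) / self.mips

class PSSA:
    """Illustrative power-saver policy: pack each task onto the busiest VM
    that can still meet its deadline, leaving other hosts free to shut down."""
    def schedule(self, task: Task, vms: List[VM]) -> Optional[VM]:
        feasible = [vm for vm in vms if vm.finish_time(task) <= task.deadline_s]
        if not feasible:
            return None          # counted later as a task failure (Eq. 4)
        target = max(feasible, key=lambda vm: vm.backlog_mi)
        target.backlog_mi += task.size_mi
        return target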
4. SIMULATION TOOL
The architecture and main features of the cloud computing simulator used in this study are explained in this section. Social networking, content delivery, web hosting, and real-time instrumented data processing are examples of traditional and emerging cloud-based applications. These types of application possess different compositions, configurations, and deployment requirements. Quantifying the performance of scheduling and allocation policies in real cloud environments under different conditions and various applications and service models is extremely difficult because (1) users have heterogeneous and conflicting quality of service requirements and (2) clouds have varying demands, supply patterns, and system sizes. When real infrastructures, such as Amazon EC2, are adopted, experiments are limited to the infrastructure scale, and reproducing the results becomes challenging. This situation arises because the conditions prevailing in an Internet-based environment cannot be controlled by the developers of resource allocation and application scheduling algorithms [20]. Therefore, we used the GreenCloud simulator, which can be applied to develop novel solutions for monitoring, resource allocation, workload scheduling, and the optimization of communication protocols and network infrastructure. The GreenCloud simulator is an extension of the well-known NS2 network simulator and was released under the General Public License Agreement [5].

4.1 Simulator Architecture
Figure 2 shows the structure of the GreenCloud simulator using a three-tier data center architecture.

Figure 2. GreenCloud Simulator: A 3-Tier Architecture [5]

The main components of this simulator are listed as follows [5]:
1. Servers, which form the data center in the cloud and are used to run tasks.
2. Switches and links, which constitute the network topology and the resulting connections by providing different cabling solutions.
3. Workloads, which are objects that model various cloud user services, such as instant messaging and social networks.

4.2 Simulator Implementation
In this experiment, GreenCloud was used to test, evaluate, and compare the adopted and proposed algorithms. Implementation was realized by modifying the original source code of the simulator. The original source code was written in C++ and the Tool Command Language and was based on the NS2 network simulator. The Eclipse Standard 4.4 editor was used for the modification.

Figure 3 provides a general view of the simulation steps. The GreenCloud simulator is set up and installed during the pre-simulation phase, and the simulator configurations are read from files. In the next step, the data center is created, and the cloud network is developed. This step requires the simulation configuration settings that represent the network and server specifications. Notably, each server may have its own specifications, forming a heterogeneous paradigm. Subsequently, the simulator initiates an event for the arrival of each task to the system. After the events are triggered, the simulator begins to execute the scheduling algorithm to map the tasks onto appropriate VMs. Then, the simulator monitors the execution of tasks and records the ending time of execution and the consumed energy in special tracing files. When all the tasks have passed through the GreenCloud simulator, the simulation stops and the post-simulation phase begins. This phase involves reading the tracing files and sending the results to an Excel sheet for analysis. This workflow is sketched in code below.

Figure 3. Simulation Steps
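The phases above can be summarized as a small event loop. This is a rough Python illustration of the workflow, not GreenCloud's actual C++/Tcl implementation; it reuses the Task/VM/PSSA sketch from Section 3, and an in-memory list stands in for the tracing files:

import heapq

def run_simulation(tasks, scheduler, vms):
    """Sketch of the pre-simulation, simulation, and post-simulation phases."""
    # Pre-simulation: configuration is assumed to be already loaded into
    # `tasks` and `vms`; one arrival event is created per task.
    trace = []                                   # stands in for the tracing files
    events = [(t.arrival_s, i, t) for i, t in enumerate(tasks)]
    heapq.heapify(events)

    # Simulation: pop arrival events and map each task onto a VM.
    while events:
        arrival, _, task = heapq.heappop(events)
        vm = scheduler.schedule(task, vms)       # e.g., the PSSA sketch above
        # Ending time approximated from the VM's backlog after placement;
        # None marks a task that could not be scheduled in time.
        end = arrival + vm.backlog_mi / vm.mips if vm else None
        trace.append((task.task_id, arrival, end))

    # Post-simulation: the trace is returned for external analysis.
    return trace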
The main configuration for the GreenCloud simulator used in this work is provided in Table 1. The table lists the components of the proposed system and their specifications; the 20-run experiment sweep implied by the DC load row is sketched after the table.

Table 1. GreenCloud Configuration

Parameter                       Value
DC type                         Three-tier topology
No. of core switches            2
No. of aggregation switches     4
No. of access switches          8
No. of servers                  1,440
Access links                    1 Gb/s
Aggregation links               1 Gb/s
Core links                      10 Gb/s
DC load                         0.1, 0.2, 0.3, ..., 0.9, 1.0
Simulation time                 60 min
Power management in server      DVFS and DNS
Task size                       8,500 bit
Task deadline                   20 s
Task type                       High-performance computing
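The DC load row drives the experiment design used in Section 7: ten load levels, each run with and without server virtualization, giving 20 runs in total. A small sketch enumerating that sweep (the dictionary keys are our own labels, not GreenCloud option names):

# Enumerate the 20 simulation scenarios implied by Table 1:
# 10 DC load levels x {virtualized, non-virtualized}.
base = {
    "servers": 1440,
    "core_switches": 2,
    "aggregation_switches": 4,
    "access_switches": 8,
    "simulation_time_min": 60,
    "task_size_bits": 8500,
    "task_deadline_s": 20,
}

scenarios = [
    {**base, "dc_load": load / 10, "virtualized": virt}
    for load in range(1, 11)        # 0.1, 0.2, ..., 1.0
    for virt in (True, False)
]
assert len(scenarios) == 20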
5. DATA CENTER DESIGN MODEL
Commonly adopted network architectures in data centers include multi-tier architectures, i.e., two-tier (2T), three-tier (3T), and 3T high-speed (3Ths) architectures [21]. 3T is the most popular architecture in large-scale data centers. In this architecture, the core layer connects the data center to the Internet backbone, the aggregation layer provides diverse functions (such as content switching, Secure Sockets Layer, and firewalls), and the access layer connects the internal data servers that are arranged in a rack-blade assembly. Multiple links are present from one tier to another. These links, along with multiple internal servers, ensure availability and fault tolerance in the data center, but at the cost of generating redundancy.

Server farms in current DCs include over 100,000 hosts, in which 70% of all communication activities are performed internally [22]. The most frequently applied DC architecture is the 3T architecture. The three layers of the DC architecture, namely, the core, aggregation, and access networks, are presented in Figure 2 [23]. The 3T DC topology selected for the simulations includes 1,440 servers, which are set into 16 racks (i.e., 90 servers per rack). The racks are linked using 2 core, 4 aggregation, and 8 access switches. The network links that connect the aggregation switches to the core have a data rate of 10 Gb/s. The links that connect the aggregation and access switches, along with the access links that connect computing servers to the top-of-rack switches, have a data rate of 1 Gb/s. The propagation delay of all the links is fixed at 3.3 µs. Table 1 summarizes the simulation setup parameters [24].
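This topology can be restated as a checked configuration. The sketch below encodes the counts and link rates given above and verifies the rack arithmetic; the dictionary layout is illustrative rather than a GreenCloud data format:

# Three-tier (3T) topology from Section 5 / Table 1, with a basic sanity check.
topology = {
    "core_switches": 2,
    "aggregation_switches": 4,
    "access_switches": 8,          # top-of-rack switches serving the 16 racks
    "racks": 16,
    "servers": 1440,
    "core_link_gbps": 10.0,        # aggregation <-> core
    "edge_link_gbps": 1.0,         # access <-> aggregation, server <-> access
    "propagation_delay_us": 3.3,   # fixed for all links
}

servers_per_rack = topology["servers"] // topology["racks"]
assert servers_per_rack == 90      # 1,440 servers across 16 racks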


6. SYSTEM PARAMETERS
The criteria for evaluating the virtualized environment are introduced in this section. Two types of parameters are used: input and output. Input parameters configure the system, whereas output parameters measure system performance.

6.1 Input Parameters
The following input parameters are fed to the simulator before it starts.

• Number of DCs: Given that we focus on the VMs in a DC and the energy consumed in the DC, only one DC is assumed to be present.

• Number of VMs in the DC and their specifications: the number of VMs in the DC that are dedicated to finishing all the submitted tasks, and the specifications of these VMs.

• Number of tasks submitted and their specifications: A set of tasks is generated and submitted to the DC, and each task has a deadline and size. The scheduler should handle tasks according to their specifications.

• Scheduling algorithm: The manner in which tasks are mapped onto VMs affects the simulation results. In each experiment, the algorithm that maps the tasks onto VMs is given as an input parameter. In this research, we adopt two task scheduling algorithms: the green scheduler algorithm and the PSSA.

6.2 Output Parameters
Several performance metrics are used to test and evaluate the proposed models. These parameters determine system efficiency according to the input parameters. The output parameters are described as follows, beginning with the task-level metrics:

• Makespan: The maximum completion time among all the received tasks. This parameter indicates the quality of job assignment to resources in terms of execution time. It can be written formally as in Equation 1.

Makespan = max{ FT_j | ∀j ∈ l }    (1)

where FT_j denotes the completion time of task j that belongs to task list l.

• Throughput: The number of executed tasks is calculated to study the efficiency of meeting task deadlines. This parameter is calculated using Equation 2,

Throughput(l) = Σ_{j ∈ l} X_j    (2)

where X_j is defined as

X_j = 1 if task j has finished execution, and 0 otherwise.    (3)

• Task failure: The number of task failures indicates the number of tasks that fail to meet their deadlines, as shown in Equation 4,

F = Σ_{j ∈ l} (1 − X_j)    (4)

where F is the number of failed tasks and X_j is the decision variable indicating whether task j completed, as defined in Equation 3. These three task-level metrics are computed in the sketch below.
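The task-level metrics translate directly into code. The following minimal sketch assumes each task is summarized as a (finish time, finished flag) record, where the flag corresponds to X_j in Equation 3; the record layout is our own convention:

def task_metrics(records):
    """Compute Equations 1, 2, and 4 from per-task records.

    `records` is a list of (finish_time_s, finished) pairs, where `finished`
    is True only for tasks that completed execution (X_j = 1 in Eq. 3).
    """
    x = [1 if finished else 0 for _, finished in records]
    throughput = sum(x)                                  # Eq. 2
    failures = sum(1 - xj for xj in x)                   # Eq. 4
    # Eq. 1: maximum completion time; as noted in Section 7.1, only tasks
    # that meet their deadlines contribute an ending time.
    finished_times = [t for t, finished in records if finished]
    makespan = max(finished_times) if finished_times else 0.0
    return makespan, throughput, failures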
• DC and server loads: The DC load represents the percentage of computing resources that are allocated for incoming tasks with respect to the data center capacity. This load should be between 0 and 100%. A load close to 0 indicates an idle data center, whereas a load equal to 100% denotes a saturated data center [5]. To calculate the DC and server loads, let S be the set of M servers in the DC, where S = {s_1, s_2, ..., s_M}. Each server s_i has a nominal MIPS (million instructions per second) rating N, which denotes the maximum computing capability of the server at the maximum frequency. The server load C is the current load of the server in MIPS. Equation 5 gives the load of each server s_i, which is equal to the ratio of the current server load to the maximum computing capability.

load(s_i) = C(s_i) / N(s_i)    (5)

The DC load, computed using Equation 6, is equal to the average load of all its hosts.

load(DC) = (1 / M) Σ_{s_i ∈ S} C(s_i) / N(s_i)    (6)

• DC energy consumption: The total energy consumption in the DC represents the sum of the energy consumed by the servers and switches [5]. In this research, we focus on the energy of servers and network switches. The power consumption of an average server can be expressed using Equation 7,

P = P_fixed + P_f · f³    (7)

where P_fixed accounts for the portion of the consumed power that does not scale with the operating frequency f, and P_f is the frequency-dependent CPU power consumption.

The power consumption of a switch can be expressed using Equation 8.

P_switch = P_chassis + n_linecards · P_linecard + Σ_{r=0}^{R} n_ports,r · P_r    (8)

Figure 4 shows the detailed components of switch energy consumption, where P_switch is the total power consumed by the switch, P_chassis is the power consumed by the switch's chassis (hardware), n_linecards is the number of active line cards in the switch, P_linecard is the power consumed by an active line card, and P_r represents the power consumed by a port (transceiver) running at bit rate r.

Figure 4. Detailed Components of Switch Energy Consumption

The load and power models in Equations 5 through 8 are sketched below.
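Equations 5 through 8 can likewise be written as small helper functions. The parameter names mirror the symbols above; the functions are a sketch of the models, not GreenCloud's energy code:

def server_load(c_mips, n_mips):
    # Eq. 5: ratio of current load to maximum computing capability.
    return c_mips / n_mips

def dc_load(servers):
    # Eq. 6: average load over all M servers; `servers` holds (C, N) pairs.
    return sum(c / n for c, n in servers) / len(servers)

def server_power(p_fixed, p_f, f):
    # Eq. 7: fixed power plus a frequency-dependent part, cubic in f (DVFS).
    return p_fixed + p_f * f**3

def switch_power(p_chassis, n_linecards, p_linecard, ports_per_rate):
    # Eq. 8: chassis power, plus active line cards, plus per-port power,
    # where `ports_per_rate` maps each bit rate r to (n_ports_r, p_r).
    port_power = sum(n * p for n, p in ports_per_rate.values())
    return p_chassis + n_linecards * p_linecard + port_power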
7. RESULTS
To demonstrate the impact of server virtualization on server energy consumption, two experiments were conducted to measure the following parameters: server energy consumption, DC load, and makespan. That is, the PSSA under the GreenCloud simulator is used to compare two simulation scenarios (with and without server virtualization).

7.1 Makespan
Figure 5 depicts the makespan for the set of 20 experiments conducted on 10 groups of data center loads using the proposed algorithm under two scenarios. The first scenario involved running the DC with virtualized servers, such that all tasks were sent to the VMs. By contrast, the second scenario involved running the DC without server virtualization, such that all tasks were sent directly to the physical machines (PMs). The value of this parameter reflects the ending time only for tasks that meet their deadlines.

In this experiment, the non-virtualized environment performed better in terms of makespan. This outcome is attributed to the scheduler sending the tasks directly to the PM instead of the VM, which requires additional time for each task's VM to be created at each PM.

Figure 5. Makespan at Different Loads for Virtualized and Non-virtualized DCs

7.2 Energy Consumption of Servers
Figure 6 presents the energy consumed by the servers at different DC loads for the PSSA. The experiment was conducted 20 times. The first 10 experiments were for non-virtualized servers with different
DC loads ranging from 10% to 100%, whereas the last 10 experiments were for virtualized servers with different DC loads ranging from 10% to 100%. When the DC load was increased, the total energy consumed by the servers increased under both scenarios. However, the virtualized servers consumed less energy than their non-virtualized counterparts.

Figure 6. Total Server Energy Consumption for Virtualized and Non-virtualized DCs

In this experiment, the virtualized environment achieved higher energy saving than the non-virtualized environment. The total energy saving is approximately 6–45 Wh (approximately 52–389 kWh annually) because all the tasks were sent to the VMs instead of the PMs, thereby placing all the other PMs in Dynamic Shutdown (DNS) mode and making them consume less energy.
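As a quick sanity check on the annualized range (our own arithmetic, assuming the per-hour saving of a 60-minute run extends to continuous year-round operation):

hours_per_year = 24 * 365               # 8,760 h
for saving_wh_per_hour in (6, 45):      # measured range per 60 min run
    annual_kwh = saving_wh_per_hour * hours_per_year / 1000
    print(f"{saving_wh_per_hour} Wh/h -> {annual_kwh:.0f} kWh/year")
# Prints roughly 53 and 394 kWh/year, in line with the reported 52-389 kWh.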
7.3 DC Load
Figure 7 depicts the DC load for a set of 20 experiments conducted on 10 groups of data center loads using the proposed algorithm. The first 10 experiments were for non-virtualized servers with different DC loads ranging from 10% to 100%, whereas the last 10 experiments were for virtualized servers with different DC loads ranging from 10% to 100%. The value of this parameter indicates the current DC load for all submitted tasks and DC hosts.

Furthermore, Figure 7 illustrates a continuous load difference between the two sets of experiments. Notably, the power saver scheduler with virtualized servers demonstrated a noticeable improvement under loads ranging from 30% to 100% compared with the power saver scheduler with non-virtualized servers.

Following the definition of DC load in Section 6.2, running more servers yields a higher DC load; the PSSA creates VMs over the PMs, which decreases the number of running servers and thus reduces the DC load.

Figure 7. Total DC Workload for Virtualized and Non-virtualized DCs

7.4 Server Load
Figure 8 shows the server loads for all 1,440 servers after distributing 41,436 tasks, whereas Figure 9 presents the number of running servers for the virtualized and non-virtualized DCs. Figures 8 and 9 indicate that the PSSA with server virtualization finished the 41,436 tasks earlier by operating 666 servers at high loads, thereby minimizing energy consumption. By contrast, the DC with non-virtualized servers used 689 servers to run the same number of tasks at low loads. Consequently, the DC with non-virtualized servers was less efficient than the virtualized DC in reducing the amount of energy required to finish the tasks.

Figure 8. Server Loads for Virtualized and Non-virtualized DCs

Figure 9. Number of Running Servers for Virtualized and Non-virtualized DCs

8. CONCLUSION
The PSSA and the green algorithm were adopted to demonstrate the effect of server virtualization on DC energy consumption. The experiment was conducted using the GreenCloud simulator. Three parameters were used in the experiments: makespan, DC load, and server energy consumption. The results indicated that the virtualized DC environment exhibited better performance than the non-virtualized environment in terms of the DC load and server energy consumption parameters. Nevertheless, the main drawback of the virtualized environment is the makespan parameter.

The pool of VMs in a cloud computing data center must be managed using an efficient task scheduling algorithm to maintain quality of service and resource utilization. Evidently, VM failure decreases total system throughput. This issue can be resolved via a VM recovery process that allows the cloning of VMs to another host. VM migration should be considered when allocating resources for urgent jobs because transferring the huge amount of data belonging to a VM may decrease total performance.

Future work should focus on power consumption techniques relevant to network switches and storage area networks. Future research must also emphasize the development of different workload consolidation and traffic aggregation techniques.

9. REFERENCES
[1] Ye, K., Huang, D., Jiang, X., Chen, H., & Wu, S. (2010). Virtual machine based energy-efficient data center architecture for cloud computing: A performance perspective. In 2010 IEEE/ACM Int'l Conference on Green Computing and Communications & Int'l Conference on Cyber, Physical and Social Computing (pp. 171–178). doi:10.1109/GreenCom-CPSCom.2010.108
[2] Clark, C., Fraser, K., Hand, S., Hansen, J. G., Jul, E., Limpach, C., Pratt, I., & Warfield, A. (2005). Live migration of virtual machines. In 2nd Symposium on Networked Systems Design and Implementation (NSDI 2005), Boston, MA, USA (pp. 273–286).
[3] Raghavendra, R., Ranganathan, P., Talwar, V., Wang, Z., & Zhu, X. (2008). No "power" struggles: Coordinated multi-level power management for the data center. In 13th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS 2008), Seattle, WA, USA (pp. 48–59).
[4] Zhang, Z., & Fu, S. (2011). Characterizing power and energy usage in cloud computing systems. In 3rd IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2011), Athens, Greece (pp. 146–153).
[5] Kliazovich, D., Bouvry, P., & Khan, S. U. (2012). GreenCloud: A packet-level simulator of energy-aware cloud computing data centers. Journal of Supercomputing, 62(3), 1263–1283. doi:10.1007/s11227-010-0504-1
[6] Atiewi, S., Yussof, S., & Ezanee, M. (in press). A power saver scheduling algorithm using DVFS and DNS techniques in cloud computing datacentres. International Journal of Grid and Utility Computing.
[7] Pike Research. (2010). Cloud computing energy efficiency. Boulder, CO: Pike Research.
[8] Talbot, D. (2018, May 24). Greener computing in the cloud. Technology Review. Retrieved from http://www.technologyreview.com/business/23520/page1/
[9] Ou, G. (2006). Introduction to server virtualization. TechRepublic. Retrieved from http://articles.techrepublic.com.com/5100-10878_11-6074941.html
[10] Malhotra, L., Agarwal, D., & Jaiswal, A. (2014). Virtualization in cloud computing. Journal of Information Technology & Software Engineering, 4(136), 2.
[11] Creasy, R. J. (1981). The origin of the VM/370 time-sharing system. IBM Journal of Research and Development, 25, 483–490.
[12] Rao, K. T., Kiran, P. S., & Reddy, L. S. S. (2010). Energy efficiency in datacenters through virtualization: A case study. Computer Science and Technology, 10, 2–6.
[13] Szubert, D. (2007). Virtualisation gets trendy. The Register. Retrieved from http://www.theregister.co.uk/2007/06/06/virtualisation_gets_trendy
[14] Panek, W., & Wentworth, T. (2010). Mastering Microsoft Windows 7 Administration. Hoboken, NJ: John Wiley and Sons.
[15] Kappel, J. A., Velte, A. T., & Velte, T. J. (2009). Microsoft Virtualization with Hyper-V. New York, NY: McGraw Hill Professional.
[16] Goldberg, R. P. (1973). Architecture of virtual machines. In Proceedings of the Workshop on Virtual Computer Systems, New York, NY, USA (pp. 74–112).
[17] Kliazovich, D., Bouvry, P., & Khan, S. U. (2010). DENS: Data center energy-efficient network-aware scheduling. In 2010 IEEE/ACM Int'l Conference on Green Computing and Communications & Int'l Conference on Cyber, Physical and Social Computing (pp. 69–75). doi:10.1109/GreenCom-CPSCom.2010.31
[18] Buyya, R., Ranjan, R., & Calheiros, R. N. (2010). InterCloud: Utility-oriented federation of cloud computing environments for scaling of application services. In International Conference on Algorithms and Architectures for Parallel Processing (pp. 13–31). doi:10.1007/978-3-642-13119-6_2
[19] Wu, C. M., Chang, R. S., & Chan, H. Y. (2014). A green energy-efficient scheduling algorithm using the DVFS technique for cloud datacenters. Future Generation Computer Systems, 37, 141–147. doi:10.1016/j.future.2013.06.009
[20] Buyya, R., Ranjan, R., & Calheiros, R. N. (2009). Modeling and simulation of scalable cloud computing environments and the CloudSim toolkit: Challenges and opportunities. In International Conference on High Performance Computing & Simulation (pp. 1–11).
[21] Cisco. (2007). Cisco Data Center Infrastructure 2.5 Design Guide. Cisco Systems.
[22] Mahadevan, P., Sharma, P., Banerjee, S., & Ranganathan, P. (2009). A power benchmarking framework for network devices. In Lecture Notes in Computer Science (Vol. 5550, pp. 795–808). doi:10.1007/978-3-642-01399-7_62
[23] Baliga, J., Ayre, R. W. A., Hinton, K., & Tucker, R. S. (2011). Green cloud computing: Balancing energy in processing, storage, and transport. Proceedings of the IEEE, 99(1), 149–167.
[24] Cisco. (2009). Cisco Data Center Interconnect Design and Implementation Guide. Retrieved from http://www.cisco.com/en/US/solutions/collateral/ns340/ns517/ns224/ns949/ns304/ns975/data_center_interconnect_design_guide.pdf

