A Critical Survey of Live Virtual Machine Migration
Abstract
Virtualization techniques effectively handle the growing demand for computing, storage, and communication resources in large-scale Cloud Data Centers (CDCs). They help to achieve different resource management objectives like load balancing, online system maintenance, proactive fault tolerance, power management, and resource sharing through Virtual Machine (VM) migration. VM migration is a resource-intensive procedure, as VMs continuously demand appropriate CPU cycles, cache memory, memory capacity, and communication bandwidth. Therefore, this process degrades the performance of running applications and adversely affects the efficiency of the data centers, particularly when Service Level Agreements (SLAs) and critical business objectives are to be met. Live VM migration is frequently used because it keeps the application service available while the migration is performed. In this paper, we make an exhaustive survey of the literature on live VM migration and analyze the various proposed mechanisms. We first classify the types of live VM migration (single, multiple, and hybrid). Next, we categorize VM migration techniques based on duplication mechanisms (replication, de-duplication, redundancy, and compression) and awareness of context (dependency, soft page, dirty page, and page fault), and evaluate the various live VM migration techniques. We discuss various performance metrics like application service downtime, total migration time, and the amount of data transferred. CPU, memory, and storage data are transferred during the process of VM migration, and we identify the category of data that needs to be transferred in each case. We present a brief discussion on security threats in live VM migration and categorize them into three different classes (control plane, data plane, and migration module). We also explain the security requirements and existing solutions to mitigate possible attacks. Specific gaps are identified and the research challenges in improving the performance of live VM migration are highlighted. The significance of this work is that it presents the background of live VM migration techniques and an in-depth review, which will be helpful for cloud professionals and researchers to further explore the challenges and provide optimal solutions.
Keywords: Cloud computing, Virtual machine migration, Virtualization, Pre-copy technique, Post-copy technique, Security
amount of electricity. Further, to provide guaranteed services, on average 30% of servers remain in idle mode and approximately 10-15% of server capacity is used to fulfill resource demands [7]. The under-utilization or over-provisioning of resources results in a phenomenal increase in operational cost and power consumption [8, 9]. In 2013, it was estimated that Google data centers consume approximately 260 million Watts of electricity, which is enough power to supply continuous electricity to more than 200,000 houses [10, 11]. In 2014, it was estimated that IT would contribute only 25% to the overall cost of operating a CDC, whereas about 75% of the total cost would go to infrastructure and power consumption [12]. One of the basic solutions to this problem is to switch idle servers to either sleep mode or off mode based on resource demands, which leads to great energy saving because an idle server consumes 70% of its peak power [13].

Virtualization technology was developed by IBM in the 1960s to maximize the utilization of hardware resources, because powerful and expensive mainframe computers were underutilized. It is a thin software layer running between the Operating System (OS) and the system hardware, termed a Virtual Machine Monitor (VMM) or hypervisor, which controls, manages, and maps multifarious VMs (applications running on guest OSs) onto a single platform [14-16]. It is also a complete software and hardware stack to fulfill incoming requests or provide a service to users [17]. Examples of popular virtualization software are VMware ESX and ESXi [18], Kernel-based Virtual Machine (KVM) [19, 20]/Quick Emulator (QEMU), Citrix XenServer [21], Microsoft Virtual PC [22], Microsoft Hyper-V [23], Oracle VM VirtualBox [24], and Parallels Desktop for Mac [25]. The main advantage of virtualization is better resource utilization, achieved by running multiple VMs in parallel on a single server. The hypervisor supports legacy OSs, allowing the load of numerous under-utilized servers to be combined onto a single server, and also supports fault tolerance and performance isolation to achieve better cloud data center performance. Due to VM isolation, the failure of one VM does not affect the execution of other VMs or of the entire physical machine [26]. To improve CDC efficiency, different types of resource management strategies like server consolidation, load balancing, server up-gradation, and power management are applied through the migration of single or multiple VMs. To achieve an energy-efficient environment, the loads of numerous servers are combined onto a few physical servers and the idle servers are switched off. To improve application performance, the hypervisor also helps to migrate running VMs from a low-performing physical server to a better-performing one [27]. Consequently, co-hosting several different types of VMs onto a few servers is a challenging issue for researchers, because resource contention among co-hosted applications leads to server over-utilization, which results in application performance degradation [15, 28-31]. Also, a large number of cloud applications, like interactive applications, experience frequently changing workload requests that generate dynamic resource demand, which results in Service Level Agreement (SLA) violation and performance degradation if dynamic server consolidation is used.

To resolve the above-stated issues, the hypervisor selects appropriate VM(s) and migrates them from over-utilized servers to under-utilized servers to improve performance. During the process of VM migration, VMs continuously demand additional resources for migration, which adversely affects the performance of running applications until the VM migration completes. So, the migration process must be finished within minimal time (to release the acquired resources quickly), using the optimal target server and network bandwidth, to improve migrating application performance and migration transparency [14, 32, 33]. Hence the role of VM migration is twofold: facilitating improvement in resource utilization and increasing the provider's profit.

For VM migration, the hypervisor exploits live VM migration [34] to move VMs between the respective servers using shared or dedicated resources. Live VM migration continuously provides the service without interrupting the connectivity of the running application during the migration time, so as to obtain seamless connectivity, avoid SLA violation, and achieve optimal resource utilization. It is also used in adaptive application resource remapping [35]. It is a very useful technique in cluster and cloud environments. It has many benefits like load balancing, energy saving, and preserving service availability. It also avoids process-level problems such as residual dependencies [33], i.e., the dependency of a process on its original (source) node. A VM migration controller migrates a single VM [36] or multiple VMs (a cluster VM set) [34] over a Local-Area Network (LAN) or Wide-Area Network (WAN) for efficient management of resources. If VM migration is performed within a LAN [34], it is easy to handle, because storage migration is no longer required in a Network Attached Storage (NAS) integrated data center architecture. Also, network management within a LAN requires minimal effort, because the IP address of the corresponding server remains unchanged. VM migration over a WAN [37] takes a considerable amount of migration time, because the transfer of storage, limited availability of network bandwidth, IP address management, packet routing, network congestion, and the faulty behavior of WAN links impose considerable overheads.

Nowadays, most hypervisors support live migration, but live migration is implemented with little or no consideration of its security.
Hence, live migration might be susceptible to a range of attacks, from Denial-of-Service (DoS) attacks to Man-In-The-Middle (MITM) attacks. During migration, data can be tampered with or sniffed easily because it is not encrypted, thus compromising the confidentiality and integrity of the migrating data. These security threats in live VM migration discourage many sectors, such as financial, medical, and government, from taking advantage of live VM migration. Hence, security is a critical challenge that needs examination in order to provide secure live VM migration.

In the literature, a few surveys highlight the importance of VM migration in a cloud environment. Soni and Kalra [38] reviewed different existing techniques that concentrate on the minimization of total migration time and downtime to avoid service degradation. Kapil et al. [39] performed a summarized review of existing live migration techniques based on pre-copy and post-copy migration. They considered total migration time, service downtime, and the amount of data transferred as key performance metrics for comparison. They mention some research challenges like the type of network (LAN/WAN), link speed, page dirty rate, type of workload, address wrapping, and available resources. Further, different aspects of memory migration, process migration, and suspend/resume based VM migration techniques have been surveyed by Medina and Garcia [26]; only a few VM migration techniques are included and no comparison is performed. The authors did not consider the performance parameters of currently running applications under VM migration, network bandwidth optimization, or hybrid VM migration techniques for improving the migration process. Xu et al. [32] present a survey on the performance overheads of VM migration within inter-CDC, intra-CDC, and servers. Their proposed classification does not consider different aspects of VM migration, timing metrics, migration pattern, and granularity of VM migration for highlighting the trade-off between application performance and resource consumption. A comprehensive survey has been performed by Ahmad et al. [40], covering different VM migration points like VM migration patterns, objective functions, application performance parameters, network links, bandwidth optimization, and migration granularity. They reviewed state-of-the-art live VM migration and non-live VM migration techniques, but the authors did not show any analysis based on the performance parameters of VM migration. Moreover, they did not describe the weaknesses of the reviewed techniques. In their extended survey work, Ahmad et al. [41] presented a review of state-of-the-art network bandwidth optimization approaches, server consolidation frameworks, Dynamic Voltage Frequency Scaling (DVFS)-enabled storage, and power optimization methods over WAN connectivity. They proposed a thematic taxonomy to categorize the live VM migration approaches. The critical aspects of VM migration are also explored through a comprehensive analysis of existing approaches. A survey on mechanisms for live VM migration is presented by Yamada [42], covering existing software mechanisms that help and support live migration. They reveal research issues that are not covered by existing works, like migration over high-speed LAN, migration of nested VMMs, and migration of VMs attached to pass-through accelerators. The techniques are classified into two categories: performance and applicability. How live migration and disaster recovery are performed in a long-distance network, with the necessary operations, is addressed by Kokkinos et al. [43]. They focus on new technologies and protocols used for live migration and disaster recovery in different evolving networks.

In our work, we address the limitations of the existing surveys [26, 32, 38-43] and present a comprehensive survey on state-of-the-art live VM migration techniques. We consider different important aspects of VM migration while incorporating the trade-off among application performance, total migration time, and network bandwidth optimization for meeting the resource management objectives. Our major contributions in this paper can be summarized as follows:

1. Comprehensive literature review of state-of-the-art live VM migration techniques and description of strengths, weaknesses, and critical issues that require further research.
2. Definition of key aspects of the migration process, like CPU state, memory content, and disk storage, that affect total migration time, and understanding of the types of memory and storage content that need to be migrated.
3. Discussion of the various performance metrics that affect the VM migration process.
4. Discussion of various security threats and their categories in live VM migration, and explanation of security requirements and existing solutions to mitigate possible attacks.
5. Classification of the existing migration mechanisms into three basic categories: type of live VM migration, duplication based VM migration, and context aware migration, based on the objectives and techniques used.
6. Identification of specific gaps and research challenges to improve the performance of live VM migration.
The paper is organized as follows: the "Background" section presents the background of live VM migration and explains its various components, important features, and limitations. In the "Types of live virtual machine migration" section, the types of live VM migration techniques - pre-copy, post-copy, and hybrid - are presented. A brief overview of live VM migration models and a generic model are presented in the "Live virtual machine migration models" section. A comprehensive and exhaustive survey of the state-of-the-art live VM migration techniques is given in the "Live virtual machine migration frameworks" section. Threats and security requirements in live VM migration are briefly discussed in the "Threats in live virtual machine migration" section. Specific research gaps and open challenges in live VM migration are described in the "Research challenges" section. Finally, the "Conclusion and future work" section concludes the paper with future research directions.

Background
Live VM migration is the technique of migrating the states (CPU, memory, storage, etc.) of a VM from one server to another server. It has been researched for a decade, but some of the issues still require further examination and solutions. The evolution, motivation, and components of live VM migration are given below.

Evolution and motivation
Live migration of an OS is an extremely powerful tool for administrators of CDCs: by allowing a clean separation of hardware and software considerations, and consolidating servers into a single coherent management domain, it facilitates load balancing, fault management, resource sharing, and low-level system maintenance. Sapuntzakis et al. [44] addressed user-level mobility and management of the system by migrating the hardware state, called a capsule. To reduce the capsule size, the authors proposed copy-on-write disks, "ballooning", demand paging, and hashing techniques. The authors show that, using capsule migration, active applications can be started within 20 min at a 384 kbps network speed. According to them, VM migration is a better solution than installing the application.

In the early days of the cloud, handling residual dependencies at the process level was a difficult task, so Clark et al. [33] proposed the idea of a live VM migration algorithm, which has the capability to move the entire OS. The authors report that live migration of a VM transfers the memory image from one server to another. They also introduced the writable working set concept, focused on data center and cluster environments, and implemented migration support for Xen.

Nelson [45] focused on a transparent migration system that can migrate unmodified applications on an unmodified OS. They have shown that, by transferring the memory while the VM is running, the VM experiences less than 1 s of downtime. Huang et al. [46] proposed Remote Direct Memory Access (RDMA) based VM migration to avoid the low transfer rate and high software overhead when a VM is migrated over TCP/IP (Transmission Control Protocol/Internet Protocol). RDMA accesses high-speed interconnects, such as InfiniBand, to enable OS-bypass communication. With RDMA, the memory of one computer can be accessed by another without involving either operating system. To transfer the VM state traffic, a socket interface and the TCP/IP protocol are used in most VM environments. High-speed interconnects and RDMA offer high throughput; as a result, the memory page transfer time can be reduced.

The whole-machine migration concept was introduced by Luo et al. [47], in which the VM run-time state, including memory contents, CPU state, and the provided local disk storage, is migrated. To minimize the service downtime due to the large amount of storage, and to maintain disk storage consistency and integrity, the authors proposed a Three-Phase Migration (TPM) (pre-copy, freeze-and-copy, and post-copy) algorithm. To easily carry out the migration process back to the source server, they use the Incremental Migration (IM) algorithm to reduce the data that must be migrated back. A block-bitmap is used for tracking all write accesses to the local disk while the migration is performed; this also synchronizes the migration of the local disk. The experimental results show that the TPM algorithm performs well when used for I/O-intensive applications. Also, the migration downtime is 100 milliseconds, equal to shared storage migration, and the performance overhead of recording write processes is also low.

Furthermore, the growth of cloud computing has led to the establishment of numerous CDCs around the world that consume a huge amount of electrical energy, which results in high operational cost and carbon footprints for the environment. In recent years, the sole concern behind CDC deployments has been to provide high performance and availability, without paying much attention to data center energy consumption. As energy consumption increases continuously, there is a need to focus on resource management, optimizing it for energy efficiency while maintaining high performance. Minimizing the energy usage of data centers is a challenging issue, because applications and data sizes are growing very rapidly, requiring fast servers and large disk storage to process service requests within the defined time period. Hence, eliminating any waste of power in CDCs is very necessary.

Until recently, the aim of resource allocation policies in CDCs was to provide high performance for the fulfillment of SLAs, without considering the energy cost. Based on the performance requirements, the VMs are logically resized and consolidated onto a smaller number of servers, which reduces energy consumption by switching the idle servers to either sleep mode or off mode.
Further, to explore energy and performance efficiency, three critical issues must be pointed out: (1) power cycling: excessive power cycling of a server could reduce its reliability; (2) switching among frequencies: switching resources off in a dynamic environment is critical from the SLA perspective, because the frequently changing nature of the workload may not allow the desired Quality of Service (QoS) to be fulfilled due to an insufficient number of active servers under peak load; (3) performance management: ensuring SLAs brings issues to performance management in a virtualized environment. Hence, all these issues require effective consolidation policies which are more energy-efficient without compromising the defined SLA.

The power-aware application placement problem has been investigated by Verma et al. [48]. An application placement controller (pMapper) is used for dynamic placement of applications to minimize power consumption while meeting performance guarantees. Secure energy-aware resource provisioning has been proposed by Sammy et al. [49]. For server consolidation, VM migration using a Dynamic Round Robin algorithm gives a more feasible solution, which reduces energy consumption without compromising on security. Further, Beloglazov et al. [50] divide the VM allocation problem into two parts: the first is the admission of new requests for VM provisioning and VM placement on the server, which is treated as a bin packing problem, whereas the second is the optimized allocation of VMs, solved by a modification of the Best Fit Decreasing (BFD) algorithm. The Modified Best Fit Decreasing (MBFD) algorithm sorts all VMs in decreasing order of current CPU utilization and allocates each of them to the server that provides the least increase in server power consumption, so it selects the most power-efficient server first.
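The following Python sketch illustrates the MBFD allocation idea described above. It is illustrative only, not the authors' implementation; the helpers vm.cpu_utilization, server.can_host(), server.host(), and power_increase() are assumptions standing in for real monitoring and power-model interfaces.

```python
def mbfd_allocate(vms, servers, power_increase):
    """MBFD sketch: sort VMs by CPU demand, place each on the server
    whose power consumption grows the least."""
    placement = {}
    # Sort VMs in decreasing order of current CPU utilization
    for vm in sorted(vms, key=lambda v: v.cpu_utilization, reverse=True):
        best_server, best_delta = None, float("inf")
        for server in servers:
            if not server.can_host(vm):          # skip servers without spare capacity
                continue
            delta = power_increase(server, vm)   # estimated growth in power draw
            if delta < best_delta:
                best_server, best_delta = server, delta
        if best_server is None:
            raise RuntimeError("no feasible server for VM %s" % vm.name)
        best_server.host(vm)
        placement[vm] = best_server
    return placement
```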
Live VM migration is required to fulfill the resource demand of running applications. It facilitates the following features:

1. Load balancing: It is required when the load is considerably unbalanced, and impending downtime often requires simultaneous migration of VM(s). It is used for continuing services after the fail-over of physical components, which are monitored continuously; the load on the host is then distributed to other hosts and traffic is no longer sent to that host [51-53].
2. Proactive fault tolerance: Faults are another challenge to guaranteeing critical service availability and reliability. Failures should be anticipated and proactively handled to minimize the failure impact on application execution and system performance. For this, different types of fault tolerance techniques are used [54].
3. Power management: Switching idle servers to either sleep mode or off mode based on resource demands leads to great energy saving, because an idle server consumes 70% of its peak power [13], and consolidating the running VMs onto fewer active hosts also leads to great energy saving. So, with dynamic allocation of VMs to as few active servers as possible, VM live migration is a good technique for cloud power efficiency.
4. Resource sharing: The sharing of limited hardware resources like memory, cache, and CPU cycles leads to application performance degradation. This problem can be solved by relocating VMs from an over-loaded server to an under-loaded server [55, 56]. Moreover, the sharing of resources cuts down operational cost because unnecessary or idle servers are switched off [57, 58].
5. Online system maintenance: A physical system is required to be upgraded and serviced, so all VMs of that physical server must be moved to an alternate server during maintenance, and services remain available to users without interruption [59].

Components in live virtual machine migration
At the time of live VM migration, it is essential to know what to migrate, i.e., which content must be migrated. In the migration process, it is essential to observe how the migration process handles the CPU state, memory contents, and storage contents [60]. The CPU state is a small amount of information and represents the lower bound of service downtime.

Memory content
Memory content is a larger amount of information, which incorporates the memory of the running processes and the guest OS memory within the VM. The VM is configured with a large amount of memory, but it may not be fully utilized by the VM, so there is no need to transfer unused memory. Also, compression techniques are used to speed up the migration process. The following are the memory modules that need to be moved during the process of migration:

1. VM Configured Memory: The amount of actual memory that is given to the guest VM by the hypervisor. The guest VM uses this memory as its own physical memory.
2. Hypervisor Allocated Memory: It is the part of the VM configured memory that is actively used by the VM, and its size is less than the VM configured memory. A VM may access this memory and free it, but the decision to release the memory depends on the hypervisor.
3. VM Used Memory: It is the memory currently and frequently accessed by the VM OS and all running processes. These memory pages are tracked by the guest VM.
4. Application Requested Memory: The amount of memory required for running an application, allocated by the guest VM OS. The requested memory is not necessarily within physical memory; it may be in disk storage when all the VM configured memory is in use.
5. Application Actively Dirtied Memory: It is the part of the application requested memory that is frequently accessed and modified by the running application, so it is commonly resident in memory to avoid being swapped out to disk storage.

Storage content
Storage is an optional part of live VM migration. LAN environments like clusters and CDCs use NAS storage, so there is no need to transfer storage contents. If it is not possible to transfer the disk storage, or the destination cannot access the source disk storage, then a new virtual disk needs to be registered on the destination server, and finally its content needs to be synchronized with the source server. The storage contents carry a large amount of information that needs to be transferred, and the full disk image takes considerable time to transfer through the network. To reduce the transfer time, the hypervisor can identify unnecessary storage contents and unused space and avoid transferring them, which reduces the migration time. The different types of storage content that need to be migrated are:

1. Virtual Disk Size: The disk size allocated to a VM for its use is called the virtual disk size, and it is defined when the VM is created. Generally, the hypervisor offers the choice of claiming all the disk space when the VM is created or expanding it dynamically based on storage use.
2. VM Used Blocks: These are the system and user data blocks, which are stored in the VM image. These blocks are accessed and used by the guest VM OS. It is the size of the data actually contained in the VM files, and it may not be completely filled by data.
3. Hypervisor Allocated Blocks: This is the space actually allocated by the hypervisor to the VM for data storage, and its size may be the same as the virtual disk size if pre-allocation is performed. If the VM frees some blocks, the hypervisor may not shrink the allocated block size, because it is hard or impossible for the hypervisor to look at VM-level storage; only the VM-level file system can see which blocks are in use and which are free. Avoiding unused space and garbage-collected blocks could considerably reduce the migration time, but this is not easy for the hypervisor, because the hypervisor implementation does not carry garbage collection block information.

Limitation of live virtual machine migration
VMs can be migrated smoothly among corresponding servers to meet performance, load balancing, and energy saving goals. Live VM migration is a strong management technique in a multi-VM based environment. Cost-effective means are offered by cloud computing service providers, and live VM migration is utilized for effective workload movement with very small service downtime. But it is still a challenge to migrate VMs between private and public clouds as well as between different service providers. There is currently no provision for live migration into or out of a public cloud environment [61]. Rackspace could migrate VMs, but it is cold migration, not live migration. Since 2013, Google Compute Engine [62] has used live migration to keep customer VMs running while performing software updates, fixing hardware problems, and recovering from unexpected issues, as shown in Fig. 1. When Compute Engine migrates a running VM from one server to another, it migrates the complete instance state in a way that is transparent to the end user and others who access that VM. The process starts with a notification that VMs need to be evicted from their current hosting server. Google's cluster management software continuously tracks these events and schedules them based on data center policies. After the VM selection process for migration, Google provides a notification to the guest that a migration is imminent. On completion of the waiting period, a destination server is selected and asked to set up a new, empty VM (the target) to receive the migrating VM (the source). Authentication is performed to establish a secure connection between the corresponding servers.

Types of live virtual machine migration
Pre-copy techniques
The pre-copy technique uses an iterative push phase that is followed by a stop-and-copy phase, as shown in flow-chart form in Fig. 2. Because of the iterative procedure, some memory pages that have been updated/modified, called dirty pages, are regenerated on the source server during the migration iterations. These dirty pages are resent to the destination host in a later iteration; hence some of the frequently accessed memory pages are sent several times, which causes a long migration time. In the first phase, all pages are transferred while the VM keeps running on the source host. In further rounds, dirty pages are resent. The second phase is the termination phase, which depends on defined thresholds. Termination is executed if any one of three conditions is met: (i) the number of iterations exceeds the pre-defined number of iterations, (ii) the total amount of memory that has been sent exceeds a threshold, or (iii) the number of dirty pages in the just-previous round falls below the defined threshold. In the last, stop-and-copy, phase, the migrating VM is suspended at the source server, and after that the processor state and the remaining dirty pages are moved. When the VM migration process is completed correctly, the hypervisor resumes the migrated VM on the destination server. The KVM, Xen, and VMware hypervisors use the pre-copy technique for live VM migration.
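As a minimal sketch of the iterative pre-copy loop and its three termination conditions described above, the following Python code assumes hypothetical helper functions (transfer_pages(), get_dirty_pages(), suspend_vm(), transfer_cpu_state(), resume_vm()) that stand in for hypervisor-specific operations:

```python
def precopy_migrate(vm, src, dst, max_rounds=30, max_bytes=None, dirty_threshold=50):
    """Illustrative pre-copy loop: iterative push phase followed by stop-and-copy."""
    sent_bytes = 0
    pages = vm.all_pages()                      # round 0: push the whole memory image
    for round_no in range(max_rounds):
        sent_bytes += transfer_pages(pages, src, dst)
        pages = get_dirty_pages(vm)             # pages dirtied while the VM kept running
        # Termination conditions: (i) round limit, (ii) data volume limit,
        # (iii) writable working set small enough for a short stop-and-copy.
        if round_no + 1 >= max_rounds:
            break
        if max_bytes is not None and sent_bytes >= max_bytes:
            break
        if len(pages) < dirty_threshold:
            break
    suspend_vm(vm, src)                         # stop-and-copy phase: VM paused here
    transfer_pages(pages, src, dst)             # remaining dirty pages
    transfer_cpu_state(vm, src, dst)
    resume_vm(vm, dst)                          # service downtime ends on resume
```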
Post-copy techniques
In the post-copy migration technique, the processor state is transferred before the memory content, and then the VM can be started at the destination server. To optimize live migration of VMs, Hines et al. [63] proposed the post-copy technique. The post-copy VM migration technique investigates demand paging, active push, pre-paging, and Dynamic Self-Ballooning (DSB) optimization approaches for pre-fetching memory pages at the destination server. Post-copy technique variations or post-copy optimization approaches:

As compared to pre-copy, the post-copy technique reduces the number of pages transferred and the total migration time. However, the post-copy technique has more downtime than the pre-copy technique due to the latency of fetching pages before the VM can be resumed on the target server. Another disadvantage is that, if any kind of failure occurs during the migration, recovery may not be possible. Table 1 shows a comparison between the pre-copy and post-copy techniques on the basis of performance metrics [63].
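The following is a minimal sketch of the post-copy flow outlined above, combining demand paging with a background active push. The helpers (suspend_vm(), transfer_cpu_state(), resume_vm(), poll_page_fault(), transfer_page(), vm.all_page_numbers()) are assumed placeholders, not a real hypervisor API.

```python
def postcopy_migrate(vm, src, dst):
    """Illustrative post-copy flow: CPU state first, memory fetched afterwards."""
    suspend_vm(vm, src)
    transfer_cpu_state(vm, src, dst)        # minimal state moves before memory
    resume_vm(vm, dst)                      # VM already runs at the destination
    remaining = set(vm.all_page_numbers())
    while remaining:
        fault = poll_page_fault(vm, dst)    # demand paging: serve faulted pages first
        if fault is not None and fault in remaining:
            transfer_page(fault, src, dst)
            remaining.discard(fault)
        else:
            page = remaining.pop()          # active push / pre-paging in the background
            transfer_page(page, src, dst)
```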
Svärd et al. [65] identify the essential characteristics of live migration. They investigate, categorize, and compare three migration approaches: pre-copy, post-copy, and hybrid techniques. In their work, they migrate VMs with large memory sizes and high dirty rates to find the differences and limitations of the migration approaches. They conclude that when robustness is essential the pre-copy live migration should be used; otherwise either post-copy or hybrid migration should be used, which reduces the service downtime and total migration time and consumes fewer resources.

Situations in which pre-copy or post-copy improves performance: it depends upon the workload type and the performance goal of the migration. Pre-copy would be the better approach for a read-intensive workload, whereas for a write-intensive or large-memory workload post-copy would be better.

Performance metrics
Researchers have suggested various performance metrics for live VM migration, and these metrics are affected whenever a VM migration takes place. Voorsluys et al. [66] show that the service levels of a running application can drop when it is migrated, so it is very important to migrate the OS with minimal (near-zero) downtime when the OS is serving live services. They measure the performance of applications running inside a VM on the Xen hypervisor during live VM migration. Kuno et al. [67] assess the performance of live and non-live VM migration. In non-live migration the VM stops, but there is no performance degradation, whereas in live migration the VM's processes keep running and performance may degrade.
They show that memory writing and host OS communication are important reasons for performance degradation. They measured that the migration time is proportional to the memory size of the VM, and in both methods of migration the migration time is almost the same. Results show that live migration provides better performance when the VM is running a CPU-intensive task, and it can be better for I/O-intensive applications if the network speed is high. The Xen and VMware products have technologies for live migration, called XenMotion and VMotion respectively. Feng et al. [68] compare their performance and show that VMotion generates a smaller amount of transferred data than XenMotion, whereas XenMotion performs better than VMotion in terms of total migration time. The performance of both VMotion and XenMotion degrades in the network because of network delay and packet losses. The live migration techniques give better performance in LAN networks.

Live VM migration performance can be measured using the following metrics, whose impacting factors are compared in Table 2:

1. Total Migration Time: It is the summation of the migration times of all migrating VMs. Its value can vary due to the amount of data to be moved during migration and the migration throughput. It depends on (1) the total amount of memory transferred from the source to the destination server, and (2) the allocated bandwidth or link speed:

   t_m = v_m / b    (1)

   where t_m is the total migration time, v_m is the total amount of memory, and b is the bandwidth.

2. Downtime: It is the time during which the service is not running or available due to the migration of the processor state. Downtime extends because current algorithms are not able to keep a record of the dirty pages of the migrating VM. The downtime t_d depends on the page dirty rate d, the page size l, the duration t_n of the last pre-copy round n, and the link speed b; Liu et al. [69] define the downtime as:

   t_d = (d * l * t_n) / b    (2)

3. Pages Transferred: The amount of memory contained by the VM, or the number of pages transferred during VM migration; it also includes duplicate pages. Liu et al. [69] calculate the data transferred at round i as:

   v_i = v_mem           if i = 0
   v_i = d * t_{i-1}     otherwise    (3)

   where v_mem is the amount of VM memory and t_{i-1} is the time taken to migrate the dirty memory pages generated during the just-previous round. The elapsed time t_i of VM migration at each round can be calculated as:

   t_i = (v_mem * d^i) / r^(i+1)    (4)

   The network traffic v_mig during VM migration is:

   v_mig = Σ_{i=0}^{n} v_mem * (d/r)^i    (5)

   where r is the memory transmission rate during VM migration. The total migration latency t_mig is calculated as:

   t_mig = Σ_{i=0}^{n} t_i    (6)

4. Preparation Time: The time difference between the initiation of migration and the transfer of the VM's state to the target server, while the VM continues its execution and dirties memory pages.
5. Resume Time: The time at which the VM migration is done and the VM resumes its execution at the targeted server.
6. Application Degradation: Due to migration, the performance of the application is interrupted or its services are slowed down during migration.
7. Migration Overhead: Extra machine resources are needed to perform a migration.
8. Performance Overhead: Degradation of service performance during migration, or interruption of a service while it is executing smoothly. The migration process introduces delay, extra logs, and network overheads during application execution on the VM.
9. Link Speed: It is the most crucial parameter with respect to the performance of the VM. The allocated bandwidth or capacity of the link is inversely proportional to the service downtime and total migration time. A faster transfer requires more bandwidth and hence takes less total migration time.
10. Page Dirty Rate: It is also a major factor impacting migration behavior. It is the rate at which VM memory pages are updated by VM applications, and it determines the number of transferred pages in every pre-copy iteration [70]. A higher dirty rate increases the data sent per iteration, leading to increased total migration time and service downtime. The dirty page rate and the migrating VM's performance are not in a linear relationship. If the rate of dirty page generation is lower than the link capacity, the total migration time and downtime are lower, because modified pages are sent frequently; otherwise, migration performance degrades significantly.

Table 2 Factors impacting the metrics
Performance metrics          Factors
Downtime                     Synchronization mechanism, low-bandwidth network
Amount of transferred data   It may be larger than the actual run-time data size because there must be some redundancy for synchronization and protocol
Total migration time         Large amount of data (some pages need to be transferred multiple times because of modification)
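A small sketch evaluating the Liu et al. [69] pre-copy cost model reproduced in Eqs. (3)-(6), under the stated assumption of a constant dirty rate d and transmission rate r (the numbers in the usage example are illustrative only):

```python
def migration_cost(v_mem, dirty_rate, rate, rounds):
    """Evaluate Eqs. (3)-(6): per-round transferred data v_i and time t_i,
    total traffic v_mig and total migration latency t_mig."""
    v_mig, t_mig = 0.0, 0.0
    t_prev = None
    for i in range(rounds + 1):
        v_i = v_mem if i == 0 else dirty_rate * t_prev   # Eq. (3)
        t_i = v_i / rate                                 # equals v_mem * d**i / r**(i+1), Eq. (4)
        v_mig += v_i                                     # Eq. (5)
        t_mig += t_i                                     # Eq. (6)
        t_prev = t_i
    return v_mig, t_mig

# Example: 4096 MB of memory, 100 MB/s dirty rate, 1024 MB/s link, 5 pre-copy rounds
traffic, latency = migration_cost(v_mem=4096, dirty_rate=100, rate=1024, rounds=5)
```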
Migration of a VM running a specific application, such as a memory-intensive, read-intensive, or write-intensive application, behaves as follows:

• If a VM is running memory-intensive applications, then VM migration leads to performance degradation due to network traffic, downtime, and latency.
• The pre-copy technique reduces VM downtime and the adverse effects on application performance if the VM is executing a read-intensive application.
• The pre-copy technique does not perform well if the running application is write-intensive, because a write-intensive application frequently modifies a large number of pages, which results in dirty pages being transferred multiple times.

Live virtual machine migration models
In this section, we present a brief review of existing models of live VM migration. The term "model" is used for a theoretical representation of the phases involved in live VM migration. The models may or may not have been implemented. We further propose a generic model of live VM migration, which considers the required phases of live VM migration, based on the existing models.

For efficient utilization of CDC resources, frequent live migration is used, but live migration performance is an issue. A reliable evaluation method is therefore required to select the optimal software and hardware combination that obtains the best live VM migration performance. For this purpose, Huang et al. [71] proposed a live migration benchmark, the Virt-LM solution. The Virt-LM benchmark is used to compare live migration performance in different CDC environments among different software and hardware environments. Different performance metrics, applications, stability, compatibility, usability, and an impartial scoring methodology are the main design objectives of Virt-LM. To validate the effectiveness of Virt-LM, it was run on two hypervisors, Xen 3.3 and KVM-84, on Linux kernel 2.6.27. For this, DELL OPTIPLEX 755 physical machines (Intel Core Quad Q6600 CPU 2.4GHz, 2GB RAM, SATA disk) were used as the test hardware, connected by single 100 Mbit communication links.

Using the live VM migration function, cloud service providers can consolidate many VMs with small workloads onto a few servers to achieve high resource utilization. Also, VMs with heavy workloads on a server are migrated to other servers (having a low load) for load balancing. VMware reported that the frequency of live VM migrations invoked by automated load balancing functions is in the range of 0 to 80 per hour in their data centers, which leads to performance degradation of live migration. Furthermore, it is difficult for cloud providers to provide the requested resources to the end users in a timely fashion. Kikuchi et al. [51] designed PRISM, a performance model for parallel live migrations. In their work, data collection and migration processes are performed simultaneously. Their model represents the performance characteristics of live migration and is data-driven. The experimental setup for performance measurements consists of one network storage for storing VM images and four physical machines for VM deployment. Fujitsu PRIMERGY RX200 S5 physical machines (16-core Intel Xeon X5570 CPU 2.93GHz, 32-GB RAM, 3 x 136-GB hard disk drives) are used as the network storage and physical servers, connected by a 1GB Ethernet switch. To enable virtualization, XenServer 5.6 is installed on these physical servers.

Clouds support unprecedented elasticity to dynamically grow and shrink resource availability for hosted applications on demand. VM migration enables this elasticity function, and resource usage can be dynamically balanced over the physical servers, which allows applications to be dynamically relocated to improve reliability and performance. Resource availability can help to decide when a VM should be migrated and how the necessary resources should be allocated. Using a regression-based statistical method, Wu and Zhao [72] proposed a performance model. This model is used to predict migration latency and is able to generate appropriate resource management decisions. They migrate a Xen-based VM. The method shows that the availability of resources has an impact on migration latency, by profiling the migration of different types of VMs that are highly resource-intensive. The performance model can be used to predict the migration time, or at least upper bounds, for VMs. For the experiment, two servers (source and destination) were configured with 6-core 2.4GHz Opteron CPUs and 32GB of memory. For virtualization, Xen 3.2.1 is installed on Linux version 2.6.24.

The existing approaches focus on VM placement techniques for performance improvement under defined constraints. Some works in the literature are concerned with performance and energy cost while handling VM consolidation. CDCs consume an excessive amount of energy, which is responsible for a global increase in energy consumption and, additionally, for energy cost as a growing proportion of IT costs. Further, the migration cost may vary considerably because it depends on the type of workload, the workload characteristics, and the required VM configurations.
Considering the migration overhead during migration decision making, Liu et al. [69] investigate design methodologies to quantitatively predict the energy cost and migration performance. The work is based on empirical studies and theoretical analysis on the Xen 3.4 platform. The model represents both energy and performance in terms of VM migration cost. They validate their model by conducting a set of experiments. The migration performance metrics handle several factors like workload characteristics, VM memory size, memory dirty rate, and network transmission rate. Experiments are conducted on Dell PowerEdge 1950 servers (2 Intel quad-core Xeon E5450 CPUs 3GHz, 8GB RAM, 250GB SATA hard disk) with a 1Gbit Ethernet interface. The host machines run on the Red Hat 4.1.2 platform, with the Linux 2.6.18.8-Xen kernel and the Xen 3.4.1 hypervisor installed. For power consumption measurement, WattsUp Pro [73] is used. The results show that the proposed model has more than 90% prediction accuracy with respect to the measured cost, and model-guided decisions considerably reduce the migration cost, by more than 72.9%, with an energy saving of 73.6%.

In cloud-based edge networks, a number of cooperating VMs can implement the tasks traditionally performed by disruptive and expensive network middleboxes. Cerroni and Callegati [74] proposed a model for cloud-based edge networks. Their model evaluates the performance of live VM migration for cooperating VMs that implement a user's profile. They derived functions for the service downtime and total migration time as expressions of network profiling and system design parameters. Service downtime is an end-user quality indicator, and total migration time is purely related to the network bandwidth availability of the operating environment. The authors considered two types of migration scheduling alternatives, namely a sequential migration strategy and a parallel migration strategy, and the results show that a trade-off exists between them. Furthermore, parallel migration reduces the migration downtime at the cost of occupying a lot of resources. An extension of the work could involve a more general scenario in their model that considers more accurate memory profiling for the edge network, and could test the whole functionality on a real system.
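To make the sequential versus parallel trade-off concrete, the following toy calculation contrasts the two scheduling strategies. It is not Cerroni and Callegati's analytical model; it simply assumes n identical VMs of memory size v_mem sharing one link of capacity bandwidth.

```python
def migration_schedules(n_vms, v_mem, bandwidth):
    """Toy comparison of sequential vs. parallel migration of n identical VMs."""
    per_vm = v_mem / bandwidth
    # Sequential: VM k gets the full link and completes after k+1 transfers.
    sequential = [(k + 1) * per_vm for k in range(n_vms)]
    # Parallel: every VM transfers at bandwidth/n and all complete together.
    parallel = [n_vms * per_vm] * n_vms
    return sequential, parallel
```

Under this simplification the last VM finishes at the same time in both strategies, but sequential scheduling frees VMs and source resources progressively, whereas parallel scheduling keeps every VM in migration for the whole period, which is where the downtime versus resource-occupancy trade-off appears.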
The eviction time (the time to evict the VM state from the source server) metric for live VM migration is proposed by Deshpande et al. [75]. The eviction time metric determines how fast the source server can go offline or free its resources for other VMs. The traditional live VM migration techniques, like pre-copy and post-copy, treat the eviction time as the total migration time, because both the source server and the destination server are tied up for the migration duration. The eviction time continuously increases if the destination server does not have sufficient memory or network bandwidth, because this limits the receiving speed of the incoming VM traffic; in such a situation the source server is also tied up. For this problem, they proposed the Scatter-Gather live migration approach. The Scatter-Gather approach reduces the eviction time by decoupling the source and destination servers when the destination server is resource constrained. The source server scatters the VM's memory state to multiple middleboxes in the cluster, and at the same time the destination server starts gathering the VM's memory from the middleboxes using a post-copy variant. The experiments are performed on physical machines (dual quad-core CPU 1.7GHz, 16GB DRAM, 1Gbps Ethernet card) connected via Nortel 4526-GTX layer-2 Ethernet switches. The machines run Linux kernel 2.6.32 and the KVM/QEMU 1.6.50 hypervisor, with Linux kernel 3.2 as the guest OS. Results show that Scatter-Gather reduces the eviction time by up to a factor of 6. It is an important addition to the data center administrator's toolbox when a low VM eviction time is required.

In the literature, little work focuses on the cost and performance interference while handling VM migration on the corresponding server sides, which leads to SLA violation. For such problems, Xu et al. [52] proposed a lightweight Interference-Aware (iAware) live VM migration solution. It empirically captures the important relationship between the performance interference of VMs and major factors which are easily accessible in a real environment, using defined benchmark workloads on a Xen hypervisor cluster platform. It minimizes both the migration interference and the co-location interference among VMs, using a demand-supply model with multi-resource handling. The VMs host SPECCPU2006 [76], Hadoop v0.20.203.0 [77], netperf v2.5.0 [78], SPECweb2005 [79], and NASA Parallel Benchmark (NPB) v3.3.1 [80] workloads respectively, to examine the run-time overheads for mixed workloads. There are fifty VMs in the Xen virtualized cluster, and 10 VMs are assigned to each workload. Large-scale experimental simulations are conducted to evaluate the performance gain and run-time overheads in terms of CPU consumption, network throughput, and scalability. Further, the evaluation results are compared with traditional interference-unaware algorithms. They also observed that iAware is flexible enough to cooperate with traditional VM consolidation or scheduling policies in a complementary way. So, load balancing and power saving can be achieved without affecting application performance.

A fractional hybrid pre-copy migration technique for storage and memory migration over WAN is proposed by Zhang et al. [81]; it is a kind of adaptive live migration approach. As the name suggests, a fraction of the memory and storage is transferred in the pre-copy phase. The remaining memory and storage contents are transferred through a variant of post-copy migration (demand paging). The fraction is adjusted so as to restore the migrating VM's performance to its original level. The proposed approach targets VM migration over WAN, where storage content migration is a critical research issue,
whereas storage migration over a LAN is often not required because the storage is shared between the corresponding servers. They develop a probabilistic prediction model and a profiling framework to adaptively find the storage and memory fractions to migrate. Two physical machines (Intel Core2 Duo E6750 CPU 2.66GHz, 2GB RAM) are used, with Linux 3.3.4 as both the host and guest OS. The Xen 4.1.2 hypervisor is used for memory management, and QEMU is modified at the backend. The experiment is emulated on a WAN network, and the results show that the fractional hybrid pre-copy migration solution achieves significantly better adaptiveness than others while maintaining the responsiveness of post-copy algorithms.

The network contention between the VM application traffic and the migration traffic is a critical issue while dealing with live VM migration. When a VM migration is processed with pre-copy, the VM continues running at the source server, whereas with post-copy the VM runs at the destination server. VMs whose applications have predominantly outbound traffic contend with the outgoing migration traffic at the source server, whereas those with predominantly inbound traffic contend with the incoming migration traffic at the destination server. Therefore, network contention increases the total migration time and degrades the application performance. For this issue, a traffic-sensitive live VM migration model is proposed by Deshpande and Keahey [82] to reduce the network contention between migration traffic and VM traffic. They use a hybrid technique for the migration of co-located VMs (VMs that are located on the same server), instead of using any one pre-determined VM migration technique for migrating all the VMs. The authors use network traffic profiles to select the migration technique that complements the direction of most of the VM application traffic. They implemented it on the KVM/QEMU platform and compared their traffic-sensitive migration technique with pre-copy and post-copy VM migration. Two hosts (16 GB RAM and 8 CPUs) are deployed with one VM (5 GB RAM and 2 vCPUs) each. The first VM executes a Netperf [78] client and sends a TCP stream to the other VM, which runs a Netperf server. The results show that their approach reduces network contention for migration, which reduces the total migration time and the adverse effects of migration on application VM performance. As a further extension of traffic-sensitive migration, VMs from a single source can be migrated towards different destination servers. During consolidation, VMs from several source hosts are migrated to fewer destination hosts; similarly, VMs are scattered to more hosts to meet their increasing resource requirements. A comparison of the above-mentioned models is given in Table 3.

For efficient utilization of CDC resources, frequent live migration is performed, but VM performance is an issue due to resource unavailability during the VM state transfer. Huang et al. [71], Kikuchi et al. [51], Wu and Zhao [72], Liu et al. [69], Cerroni and Callegati [74], Xu et al. [52], and Deshpande and Keahey [82] proposed performance-aware models, whereas Deshpande et al. [75] highlight the eviction time issue, and Zhang et al. [81] highlight the issue of memory and storage transfer over a WAN network. Huang et al. [71] proposed a live migration benchmark, the Virt-LM solution; the Virt-LM benchmark is used to compare live migration performance in different CDC environments among different software and hardware environments. Kikuchi et al. [51] designed PRISM, a performance model for parallel live migrations. The performance model proposed by Wu and Zhao [72] is used to predict migration latency and is able to generate appropriate resource management decisions. Liu et al. [69] investigate design methodologies to quantitatively predict the energy cost and migration performance; their migration performance metrics handle several factors like workload characteristics, VM memory size, memory dirty rate, and network transmission rate. Cerroni and Callegati [74] proposed a model for cloud-based edge networks to evaluate the performance of live VM migration for cooperating VMs that implement a user's profile. Xu et al. [52] proposed a lightweight Interference-Aware (iAware) live VM migration solution, which empirically captures the important relationship between the performance interference of VMs and major factors which are easily accessible in a real environment. Deshpande and Keahey [82] proposed a traffic-sensitive live VM migration model to reduce the network contention between migration traffic and VM traffic. Deshpande et al. [75] proposed the eviction time metric, which determines how fast the source server goes offline or frees its resources for other VMs. Zhang et al. [81] develop a probabilistic prediction model and profiling framework to adaptively find the storage and memory fractions to migrate over a WAN network.

Generic model of live virtual machine migration
The generic model of live VM migration is shown in Fig. 5. It includes the different steps that are required when taking the migration decision. Due to the need for load balancing and server consolidation, some or all VMs from a server may be required to migrate. For this, we should select the most appropriate VM or set of VMs that meets the migration objective or selection criteria. To do so, we first measure each VM's memory dirty rate from its current and historical page access pattern. Then, the controller adjusts the memory page transmission rate to adapt to the dirty rate. After this, a performance prediction model estimates the performance of the VM based on performance metrics like migration time, migration cost, downtime, amount of data transferred, etc. Finally, the migration decision is taken based on the performance metrics, to decide which VM(s) need to be migrated. The historical data of the VMs is updated for further migrations (if required).
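A compact sketch of the decision flow in the generic model described above, under assumed interfaces: estimate_dirty_rate(), adjust_transmission_rate(), and record_history() are hypothetical VM-side helpers, and predict_cost is any predictor, for example a wrapper around the migration_cost() sketch given earlier.

```python
def select_vms_for_migration(vms, predict_cost, max_latency, max_downtime):
    """Sketch of the generic model: measure dirty rates, predict migration cost,
    and keep only VMs whose predicted migration meets the performance targets."""
    candidates = []
    for vm in vms:
        d = vm.estimate_dirty_rate()               # from current and historical page accesses
        rate = vm.adjust_transmission_rate(d)      # controller adapts the send rate to the dirty rate
        latency, downtime, traffic = predict_cost(vm.memory_size, d, rate)
        if latency <= max_latency and downtime <= max_downtime:
            candidates.append((latency, traffic, vm))
        vm.record_history(d, latency)              # historical data kept for later decisions
    candidates.sort(key=lambda c: c[0])            # cheapest predicted migrations first
    return [vm for _, _, vm in candidates]
```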
Live virtual machine migration frameworks
The existing live VM migration frameworks are discussed and compared in this section. The term "framework" is used for practically implemented techniques. The frameworks involve the proposed techniques and their implementation. Based on the number of VMs migrated at a time, three types of migration can be distinguished:

1. Single VM migration: Only one VM is migrated at a time.
2. Multiple VM migration: Two or more VMs are migrated simultaneously.
3. Single & Multiple VM migration: One or more VMs are migrated simultaneously.

In the following sub-sections, we categorize the existing works into two major types, multiple VM migrations and single & multiple VM migrations. We also describe the generic situation for each type of VM migration.
In this framework, the VM disk images are stored on a Network File System (NFS). Three performance metrics, namely total migration time, service downtime, and workload performance overhead, are considered to measure its efficiency.

The problem of live gang migration (a group of co-located VM's migrated simultaneously) is addressed by Deshpande et al. [84]. The authors present the design, implementation, and evaluation of a de-duplication approach (at the page and sub-page level) for concurrent VM migration. In detail, proof-of-concept de-duplication strategies and a differential compression technique are implemented to exploit content similarity across VM's. Identical memory pages of the VM's are transferred only once during the migration process. They implemented this by modifying the existing single-VM pre-copy migration in the QEMU/KVM environment. The offline implementation of de-duplication based gang migration uses the Linux 2.6.32 OS and the QEMU/KVM-0.12.3 hypervisor at both the source and destination machines. Their approach achieves a considerable improvement in both network traffic and total migration time.
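To make the de-duplication idea concrete, the sketch below (a simplified, hypothetical illustration rather than the implementation of [84]) hashes guest pages and transmits each distinct page content only once across the co-located VM's, sending a short reference for every later duplicate.

    import hashlib

    PAGE_SIZE = 4096

    def gang_migrate(vms, send_page, send_ref):
        """vms: mapping vm_id -> list of page bytes (illustrative in-memory model)."""
        seen = {}  # content hash -> (vm_id, page_index) of the first transmitted copy
        for vm_id, pages in vms.items():
            for idx, page in enumerate(pages):
                digest = hashlib.sha1(page).hexdigest()
                if digest in seen:
                    # Duplicate content: send only a small reference to the earlier copy.
                    send_ref(vm_id, idx, seen[digest])
                else:
                    seen[digest] = (vm_id, idx)
                    send_page(vm_id, idx, page)

    # Example: two VM's sharing an identical zero page transfer it only once.
    vms = {"vm-a": [bytes(PAGE_SIZE), b"A" * PAGE_SIZE],
           "vm-b": [bytes(PAGE_SIZE)]}
    gang_migrate(vms,
                 send_page=lambda v, i, p: print("page", v, i),
                 send_ref=lambda v, i, ref: print("ref ", v, i, "->", ref))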
If the VM's collaborating on a module of an application are segregated across geographically distributed clouds, the inter-cloud communication latency and the low network bandwidth of the WAN will considerably degrade system performance. The solution to such problems is to migrate all of the VM's of a module concurrently, which eliminates the WAN communication latency. However, if the module is large, it is not easy to migrate all the VM's simultaneously due to limited bandwidth and a high dirty rate. Lu et al. [85] proposed a migration optimization approach called "Clique migration". This approach divides a large group of VM's into sub-groups based on the VM's inter-dependency, and a shuffling mechanism is used to decide the order in which the VM sub-groups should be migrated. They proposed the R-Min-Cut and Kmeans-SF algorithms as solutions to their research problem. The R-Min-Cut algorithm is based on a greedy strategy that implies a chronological order, whereas in k-means the clustering is a static process, so a shuffling step is required; the optimized k-means algorithm with shuffling is called the Kmeans-SF algorithm. The experiments are performed on physical machines (Intel(R) Xeon(R) CPU E5-2630 2.30 GHz, 64 GB RAM, 500 GB hard drives) running CentOS with Linux kernel 2.6.32 and the QEMU/KVM hypervisor, connected via 1 Gbps links. Results show that the proposed algorithms can reduce inter-cloud data traffic (on a traffic trace of 68 VM's from an IBM production cluster) by 25 to 60% when the degree of parallel migration is varied from 2 to 32, and also considerably reduce the period in which applications undergo performance degradation. Furthermore, migration traffic and application traffic interfere considerably, and the interference is higher when the memory dirty rate of the migrating VM is higher and the running application is I/O-intensive.

A multi-tier application holds a set of inter-dependent VM's, and live migration of these VM's needs careful scheduling, so multi-VM migration is required instead of single VM migration. By observing different types of
multi-tier applications, Lu et al. [86] suggested that a dedicated link at the data center, used with different migration approaches, impacts application performance differently. This happens due to the inter-dependence among the functional modules of a multi-tier application. They take observations with vHaul, which controls multi-VM migrations to figure out the optimal schedule. The migration scenario is evaluated by choosing simple applications (client-server architecture applications) running on 2 VM's and a complex multi-tier application (Apache Olio, a Web 2.0 benchmark [87]) running on 4 VM's. Evaluation is performed on physical machines (quad-core Intel Xeon CPU's 3.2 GHz, 16 GB RAM) with Linux 3.2 as the OS in both Dom0 and the VM's, and with the Xen 4.1.2 hypervisor installed. All the machines are connected via two separate Gigabit Ethernet links. Their results indicate that the vHaul system can suggest the optimal multi-VM live migration schedule. Their evaluation also shows that the migration schedule generated by vHaul outperforms the worst-case schedule by 52% in terms of application throughput. Furthermore, the optimal schedule reduces downtime by up to 70% during migration. Though the prototype of vHaul is built using the pre-copy live migration technique on the Xen hypervisor, it is portable to other hypervisors.

During server overload conditions there may be a chance of SLA violation; to resolve this issue, VM migration is performed to balance the load of the active servers. For this, Forsman et al. [88] proposed push and pull algorithms to perform the necessary VM migrations. The push phase is active when a host gets overloaded and migrates the optimum number of VM's away from it. The pull phase is active when a server is under-loaded and can host more VM's, to achieve efficient utilization of resources. The authors discovered that the two strategies complement each other, so each strategy comes out as "best" under different conditions. Evaluation of the proposed algorithms is performed on the OMNeT++ v4.3 [89] simulator using a simulation testbed. The results show that, when VM's are added or removed, the "best" strategy is able to re-balance the system in 4 to 15 minutes.
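As a rough illustration of the push/pull idea (not the authors' actual algorithm), the sketch below triggers a push when a host's utilization exceeds an upper threshold and a pull when it falls below a lower threshold; the thresholds and the utilization metric are assumptions made for the example.

    HIGH, LOW = 0.85, 0.30   # illustrative utilization thresholds

    def rebalance(hosts):
        """hosts: dict host -> list of (vm, load) with load as a fraction of host capacity."""
        for host, vms in hosts.items():
            used = sum(load for _, load in vms)
            if used > HIGH and vms:
                # Push phase: overloaded host migrates its smallest VM to the least-loaded host.
                vm = min(vms, key=lambda x: x[1])
                target = min(hosts, key=lambda h: sum(l for _, l in hosts[h]))
                if target != host:
                    vms.remove(vm)
                    hosts[target].append(vm)
            elif used < LOW:
                # Pull phase: under-loaded host pulls a VM from the most-loaded host.
                source = max(hosts, key=lambda h: sum(l for _, l in hosts[h]))
                if source != host and hosts[source]:
                    vm = min(hosts[source], key=lambda x: x[1])
                    hosts[source].remove(vm)
                    vms.append(vm)

    cluster = {"h1": [("vm1", 0.5), ("vm2", 0.45)], "h2": [("vm3", 0.1)]}
    rebalance(cluster)
    print(cluster)   # one VM has been moved towards the lightly loaded host h2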
VM migration can help in improving resource utilization and QoS parameters while reducing power consumption from the provider's perspective. In the literature, most researchers focus only on single VM migration using either post-copy or pre-copy migration; only a few focus on the multiple VM migration problem. Sun et al. [90] proposed an improved serial migration strategy and introduced post-copy migration into it. They also proposed the M mixed migration strategy, which combines the improved serial migration strategy with parallel migration. In addition, the authors developed M/M/C/C and M/M/C queuing models, in which there are C service channels and the system can serve up to C customers. The proposed approaches also handle the failure rate of the transmission network. Memory-intensive live migration mainly uses either pre-copy or post-copy, which are already implemented in the Xen and KVM hypervisors, so the proposed improved serial migration strategy and M mixed migration strategy for multiple VM migration can be implemented using Xen or KVM. The queuing models are used for analyzing performance metrics such as the average waiting time, blocking ratio, average waiting queue length, and average queue length of each migration request.

Multiple VM migration becomes very complex for many reasons, such as insufficient resources at the destination server and the concurrency of the migrations. Ye et al. [83] present a framework based on resource reservation. Further, the problem of live gang migration (a group of co-located VM's migrated simultaneously) is addressed by Deshpande et al. [84]. Lu et al. [85] proposed a migration optimization approach called "Clique migration" to address the large-module migration problem, where it is not easy to migrate all the VM's simultaneously due to limited bandwidth and a high dirty rate. By observing different types of multi-tier applications, Lu et al. [86] suggested that a dedicated link at the data center, used with different migration approaches, impacts application performance differently because of the inter-dependence among the functional modules of a multi-tier application. During server overload conditions there may be a chance of SLA violation; to resolve this issue, VM migration is performed to balance the load of the active servers, and for this Forsman et al. [88] proposed push and pull algorithms to perform the necessary VM migrations. Sun et al. [90] proposed an improved serial migration strategy with post-copy migration introduced into it, together with the M mixed migration strategy that combines the improved serial strategy with parallel migration. All of these approaches address the multiple VM migration problem and present sub-optimal solutions for different VM characteristics and migration environments; hence we conclude that these works all target multiple VM migration but use dissimilar approaches and techniques to arrive at optimal solutions.

Single & multiple VM migrations:
In some specific circumstances, like bandwidth-aware VM migration, both single and multiple VM migration approaches are needed. A comparison of single & multiple VM migration approaches is shown in Table 5.

The performance of memory-intensive applications is highly affected when migration is performed by pre-copy, because the memory dirty rate is higher than the memory transfer rate; for such applications, the post-copy VM migration pattern performs better. Shribman et al. [91] proposed an approach that considers VM migration over LAN links. The authors present XOR Binary Zero Run Length Encoding (XBZRLE) and Least Recently Used
(LRU) page recording, which support a high dirty rate relative to the available network bandwidth. The approach uses the Remote Direct Memory Access (RDMA) stack to reduce the latency of servicing faulted memory pages. It also uses a pre-paging approach to reduce the amount of VM memory to be transferred and to speed up the VM migration process. Furthermore, to increase application performance, the Memory Management Unit (MMU) is linked to post-copy so that only the threads waiting for faulted pages are paused while the others continue their execution; the MMU enables Linux to handle the faulted page directly in kernel space by swapping disk pages without any context switch to user mode. For the implementation of the pre-copy and post-copy approaches, VM's (4 vCPU's, 1 GB RAM) are hosted on a physical machine (2 cores, 8 GB RAM) connected by a 1 Gbps Ethernet network; the network bandwidth is limited to 30 MB/s (240 Mbps) to maintain a high ratio of application dirty rate to network transfer rate. In the hybrid post-copy evaluation, the guest VM has 2 vCPU's, 4 GB of memory, and a 1 GB Google SAT working set, with a 1 Gbps Ethernet network between the hosts. The modifications are introduced in QEMU and in the KVM hypervisor. The proposed approach considerably improves application performance metrics such as total migration time, downtime, and application degradation time using these optimization strategies.
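The XBZRLE idea can be illustrated with a minimal sketch: XOR the current page with the previously sent version so that unchanged bytes become zero, then run-length encode the zero runs. The encoding format below (zero-run/literal records) is a simplified assumption for illustration, not the exact wire format used by the surveyed systems.

    def xbzrle_encode(old: bytes, new: bytes) -> list:
        """Delta-encode 'new' against 'old' as (zero_run_length, literal_bytes) records."""
        delta = bytes(a ^ b for a, b in zip(old, new))
        records, i = [], 0
        while i < len(delta):
            run = 0
            while i + run < len(delta) and delta[i + run] == 0:
                run += 1
            i += run
            j = i
            while j < len(delta) and delta[j] != 0:
                j += 1
            records.append((run, delta[i:j]))   # skip 'run' unchanged bytes, then apply literals
            i = j
        return records

    def xbzrle_decode(old: bytes, records: list) -> bytes:
        out, pos = bytearray(old), 0
        for run, literal in records:
            pos += run
            for k, b in enumerate(literal):
                out[pos + k] = old[pos + k] ^ b
            pos += len(literal)
        return bytes(out)

    old = bytes(64)
    new = bytearray(old); new[10:13] = b"abc"
    assert xbzrle_decode(old, xbzrle_encode(old, bytes(new))) == bytes(new)

Only pages that were modified sparsely compress well under this scheme, which is why the approach is paired with page caching and a bound on the dirty rate relative to the link speed.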
To evaluate the performance of inter-CDC migration for a cloud federation, Cerroni [92] presented a model. It assumes that the network load is increased by the live migration of a group of co-operating VM's that continuously provide services to end-users. After characterizing the VM's into groups, the migration time is calculated for both sequential and parallel migrations. The proposed analytical model is a useful design tool for dimensioning the inter-DC network capacity to achieve a given performance level, assuming some simple multi-VM live migration strategies for the implementation. It shows that the sequential VM migration strategy has a less detrimental effect on network performance, whereas the parallel VM migration strategy yields lower service downtime. The model can be used to represent the trade-off between service availability and inter-cloud data center network capacity. The obtained results give an interesting insight into the macroscopic performance of a federated cloud network, but some of the hypotheses used to derive the model may not be fully realistic.

The performance of memory-intensive applications is highly affected when migration is performed by pre-copy, because the memory dirty rate is higher than the memory transfer rate; for such applications the post-copy VM migration pattern performs better, and Shribman et al. [91] proposed an approach along these lines for VM migration over LAN links. To evaluate the performance of an inter-cloud federation, Cerroni [92] presented a model assuming that the network load is increased by the live migration of a group of co-operating VM's that continuously provide services to end-users.

Generic steps of Single/Multiple VM migration
The key observation behind multiple VM migration is that VM's having the same OS, applications, or libraries can contain a considerable number of identical pages. So, during multiple VM migration, all the identical pages across the co-located VM's are tracked and only a single copy of these identical pages is transferred; at the source server side this is done by the migration controller. The migration controller initiates the parallel migration of all co-located VM's to the target machine, and at the target server the migration controller prepares the server for the reception of the incoming migrant VM's. The multiple VM migration steps are illustrated in Fig. 7.

Duplication based VM migration
During the process of migration, the VMM may detect multiple copies of the same page within a single VM, across multiple VM's, or on a number of different servers, which leads to unnecessary memory page migration. Handling a large number of pages during migration requires more network bandwidth and increases network traffic, so different duplication-aware techniques are used:

1. Replication based
2. De-duplication based
3. Redundancy based
4. Compression based
In the following sub-sections, we categorize the existing work into four categories, namely replication based, de-duplication based, redundancy based, and compression based VM migration, according to the memory page replicas, their similarity, and their volume.

Replication based VM migration:
The same memory page is spread over multiple servers for simultaneous computing and for recovery from storage and network faults. A comparison of existing techniques based on memory page replication is given in Table 6.

During the migration process, either a hot or a cold migration technique is used; the movement of a VM between the corresponding servers consumes server resources and network bandwidth and consequently increases the cost. To reduce such costs, Celesti et al. [93] proposed a Composed Image Cloning (CIC) methodology focused on dynamic VM allocation. Instead of considering the VM image as a single monolithic disk block, they treat it as "composable" blocks and "user data" blocks. They set up two different distributed (federated) clouds: one is located at the University of Messina, with Dual-Core AMD Opteron Processor 2218 HE servers with 8 GB of RAM, and the second is located in the same metropolitan area with the same hardware configuration. On each cloud, the cluster is composed of a number of servers running Linux (Debian 5.0 Lenny on the servers of one cloud and Ubuntu 8.10 Intrepid Ibex on the other) with the KVM hypervisor installed for virtualization. The CIC methodology is able to improve the relocation cost of the VM disk image because the data transferred reduces significantly as the number of live migrations increases over a large-scale federated cloud.

Migrating a large VM over a WAN with low bandwidth results in a complex live migration, and current techniques do not deal efficiently with such migrations where the servers are part of different networks. There are various challenges, such as migrating network and storage connections, and migrating the storage content and persistent state kept at the source server side. To minimize the migration latency, Bose et al. [94] proposed a technique which combines VM replication with VM scheduling. They use a de-duplication method for finding VM replicas to compensate for the additional storage requirement generated by the increasing number of replicas of different VM images, and they develop a CloudSpider architecture based on efficient replication strategies using VM replication and scheduling. The replica placement strategies are evaluated on the CloudSim simulator with physical machines (Intel Core2Duo CPU 2.53 GHz, 2 GB RAM). The proposed
architecture is capable of minimizing the migration latencies associated with the live migration of VM images over a WAN.

Live VM migration across a high-latency, low-bandwidth WAN within a "reasonable" time is nearly impossible due to the large size of the VM image, so migrating the virtual disk file at run time within an acceptable time over the WAN is a critical challenge. Bose et al. [95] proposed a combined VM replication and scheduling architecture called CloudSpider. The VM image is replicated across different cloud sites; one VM image replica is chosen as the primary copy based on a dynamically changing cost parameter. Further, the incremental changes in the VM replica are propagated towards the remaining replicas for synchronization. The authors propose de-duplication techniques to compensate (by exploiting commonalities) for the additional storage cost caused by the extra storage required for the replicas. They mainly focus on VM image replica placement when disparate VM images carry varying degrees of commonality and have different latency requirements. They modify the open-source cloud simulator CloudSim, incorporating modules for storage de-duplication, storage blocks, and a file allocation table. The implementation shows that the success of CloudSpider in minimizing storage requirements depends heavily on the working of the replica placement algorithm, which can judiciously place the VM image replicas at different sites so as to minimize the storage requirement.

The movement of a VM between the corresponding servers consumes server resources and network bandwidth and consequently increases the cost; to reduce such costs, Celesti et al. [93] proposed the Composed Image Cloning (CIC) methodology focused on dynamic VM allocation. Bose et al. [94] proposed a technique to minimize migration latencies by combining VM replication with VM scheduling. Since migrating a virtual disk file at run time within an acceptable time over a WAN is a critical challenge, Bose et al. [95] proposed the combined VM replication and scheduling architecture called CloudSpider. Hence, all the mentioned works address different challenges while using the replication technique.

De-duplication based VM migration:
Identical memory pages on a single VM are identified and the transfer of such pages is avoided in order to improve the efficiency of bandwidth utilization. A comparison of existing works that use the de-duplication approach during migration is presented in Table 7.

Live migration is expensive because of the large amount of data transferred when migrating Virtual Clusters (VC). As the VM's run similar OS's, a large portion of their storage carries identical data. To migrate a VC over a WAN, Riteau et al. [37] proposed a VM migration approach called "Shrinker" that is based on a de-duplication optimization model. It uses a service that records the memory pages identified at the source cluster before they are transferred to the destination server. The hypervisor uses this service to fetch the status of memory pages before transferring them, and it transfers only a memory page identifier when any of the VM's has already transferred that memory page. If the memory page has not been sent, the hypervisor registers the memory identifier and transfers the page to the destination server. At the destination side, a distributed content addressing approach is used for delivering pages to the corresponding destination server(s). Also at the destination side, an index server keeps a record of the IP addresses of the legitimate source server(s) against the memory page hash values prior to the transfer of the memory pages; the destination server registers the source server against the page hash value at the index server when the required memory pages are received. Owing to this process, the total migration time and the amount of data transferred by the proposed approach are reduced. However, the process is managed by a centralized server that may be a single point of failure for the entire environment. The results are analyzed on the Grid'5000 [96] testbed and the implementation is performed
on the KVM 0.14.0-rc0 hypervisor. Redis version 2.2.0-rc4 [97] is used as the key-value store for indexing and coordinating the services. The results show that the proposed work reduces both the total data transferred and the total migration time. Another similar technique is proposed by Zhang et al. [98], which exploits the VM self-similarity ratio and hashing-based fingerprints to identify and track identical memory pages.
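A minimal sketch of such hash-indexed page tracking is given below; the in-memory dictionary stands in for the index service (a Redis-like key-value store), and the function names and message format are illustrative assumptions rather than Shrinker's actual protocol.

    import hashlib

    class PageIndex:
        """Cluster-wide index: page-content hash -> source server that already holds it."""
        def __init__(self):
            self.store = {}

        def lookup(self, digest):
            return self.store.get(digest)

        def register(self, digest, server):
            self.store[digest] = server

    def send_page(index, server, page, wire):
        digest = hashlib.sha1(page).hexdigest()
        holder = index.lookup(digest)
        if holder is None:
            index.register(digest, server)
            wire.append(("PAGE", digest, page))      # full page travels once
        else:
            wire.append(("REF", digest, holder))     # later senders ship only the identifier

    index, wire = PageIndex(), []
    send_page(index, "src-1", b"\x00" * 4096, wire)
    send_page(index, "src-2", b"\x00" * 4096, wire)
    print([record[0] for record in wire])   # ['PAGE', 'REF']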
Most of the existing techniques focus on optimizing the migration performance of either a single VM or multiple VM's running on the same server, so as to lessen the amount of data transferred between the corresponding servers. Deshpande et al. [34] present Inter-Rack Live Migration (IRLM) for optimizing the performance of multiple VM migrations, i.e., concurrently migrating multiple VM's from one rack of servers to another rack. They employ de-duplication to improve the efficiency of bandwidth utilization during the migration of multiple VM's. Simultaneous de-duplication identifies the similar memory pages using a QEMU/KVM thread and transfers them only once, by any one of the VM's. During mass VM migration this reduces the traffic load through a distributed replica of the VM's memory. The implementation is performed on the QEMU/KVM virtualization platform and evaluated on a cluster testbed of 13 physical servers (two quad-core 2 GHz CPU's, 16 GB RAM, and a 1 Gbps network card each) connected by Gigabit Ethernet. The primary experiments use 6 servers per rack and 4 VM's per server; IRLM can reduce the amount of data transferred over the core links during migration by up to 44% (and the total migration time by up to 26%) with respect to online compression, and by up to 17% (with the total migration time increased by 7%) compared to gang migration. However, the proposed framework is computationally expensive and complex because of the heavy calculations incorporated, such as the computation of a 160-bit hash value, so its acceptance is limited to servers hosting identical VM's or workloads. In contrast, in another work Deshpande and Keahey [82] used both pre-copy and post-copy VM migration to lessen the mutual adverse effects of migration traffic and VM application traffic.

Live migration of VM's across distributed servers is important for maintenance, load balancing, and energy reduction from the provider's and CDC operator's perspective. Jo et al. [99] present a technique that reduces the total migration time while keeping the downtime minimal by tracking the VM's I/O operations with a NAS device and maintaining an updated memory page mapping. During the iterative pre-copy migration, the memory-to-disk mapping is sent to the destination server, which then directly fetches the required pages from the NAS device; consistency is maintained by keeping a version number for each transferred page. By running a number of benchmark workloads on a Linux HVM guest (in a Xen 4.1 virtualization environment), they obtain a 30% reduction of the total migration time on average, and up to a 60% reduction for certain benchmark workloads.

As the VM's run similar OS's, a large portion of their storage carries identical data, and to migrate a VC over a WAN, Riteau et al. [37] proposed the "Shrinker" VM migration approach based on a de-duplication optimization model. Deshpande et al. [34] present IRLM for optimizing the performance of multiple VM migrations, i.e., concurrently migrating multiple VM's from one rack of servers to another. Jo et al. [99] present a technique that reduces the total migration time while keeping the downtime minimal by tracking the VM's I/O operations with a NAS device and maintaining an updated memory page mapping.

Redundancy based VM migration:
Identical memory blocks belonging to different VM's on the same host, or large blocks consisting of zero-byte entries, are redundant. Avoiding the transfer of redundant pages reduces the power consumption, load, and cost of live VM migration. A comparison of existing redundancy based VM migration approaches is presented in Table 8.

At the WAN level, VM migration transforms the scope of resource provisioning from a single data center to multiple data centers. Wood et al. [100] proposed the CloudNet framework to achieve live migration and flexible placement of VM's and data over a seamlessly connected resource pool (provided by different CDC's). It provides optimized support for live migration over WAN and is beneficial over low-bandwidth network links. The authors try to reduce the volume of data transferred over the WAN by avoiding redundant
memory pages. If a redundant page is encountered, only a hash is transferred to the destination server, which then performs a lookup of the redundant page among the previously received memory pages. The advantage of using hashes in place of compression is the lower overhead; the cost of transferring VM memory and storage contents during migration can also be minimized. A prototype of CloudNet is implemented using a Xen virtualized environment, the Distributed Replicated Block Device (DRBD) protocol, and a commercial-router-based VPLS/layer-2 VPN. The results show that the memory migration time is reduced by 65% and that the memory transfer saves 20 GB of bandwidth for storage, and these improvements lead to an overhead reduction of less than 20%.

In existing systems, distance-based load consideration is absent, whereas the proposed system is based upon it. Jaswal and Kaur [101] proposed a technique for offloading the data of a VM to multiple data centers. They use the concept of distance together with a redundancy elimination mechanism. It is an enhanced hybrid approach used for reducing the power consumption, load, and cost of live VM migration, and it combines the reliability of both the pre-copy and post-copy approaches. The proposed scheme shows efficient results when compared with existing techniques. In the proposed technique, migration is performed as live VM migration, which means that it does not require switching off the devices, as is the case in offline migration. The implementation is performed using the CloudSim simulator with 2 VM's on 4 physical machines (Intel(R) Core(TM) i3 CPU M330 @2.13 GHz, 3.00 GB RAM, 64-bit OS). The comparison is based on the load of pre-copy (500 Hz per 1 GB of data), post-copy (550 Hz per 1 GB of data), hybrid (425 Hz per 1 GB of data), and the proposed algorithm (201 Hz per 1 GB of data). Power consumption is also reduced in the proposed system, from 180 W to 100 W.

Wood et al. [100] proposed the CloudNet framework to achieve live migration and flexible placement of VM's and data over a seamlessly connected resource pool (provided by different CDC's). Jaswal and Kaur [101] proposed a technique for offloading the data of a VM to multiple data centers using the concept of distance together with a redundancy elimination mechanism.

Compression based VM migration:
Memory compression reduces the amount of data transferred during the migration process. Using compression, the cost of transferring VM memory and storage contents during migration, as well as the service downtime, is reduced. A comparison of existing compression based VM migration approaches is presented in Table 9.

A number of research works have been carried out to improve live VM migration with respect to the amount of data transferred between the corresponding servers. Jin et al. [102] provide a Memory Compression (MECOM) based solution to reduce the migration time. The memory is compressed and sent over the network during Xen's pre-copy and stop-and-copy phases. The MECOM approach provides fast, stable VM migration while only slightly affecting VM performance. An adaptive zero-aware compression algorithm (using the memory page characteristics) is designed to balance the cost and performance of VM migration: the memory pages are compressed in batches on the source side and recovered at the destination in the same order. The results show that the inherent redundancy in memory areas (such as identical memory blocks belonging to different VM's on the same host, or large blocks consisting of zero-byte entries) yields high compression ratios. The experiment is conducted on several identical servers (2-way quad-core Xeon E5405 CPU's 2 GHz, 8 GB DDR RAM) with Red Hat Enterprise Linux 5.0 as both the host and guest OS. Compared with Xen 3.1.0, the expanded Xen 3.1.0 can reduce the downtime by 27.1%, the total transferred data by 68.8%, and the total migration time by 32% on average; a VM with a large memory may therefore contain more identical pages than a VM with a smaller memory. They further expand their work in Jin et al. [103] and present a VM migration approach based on the MECOM approach, used to provide live migration for para-virtualized VM's. In this approach, VM services may be slightly affected depending on the characteristics of the memory pages. For balancing the performance and cost of VM migration, the authors propose an adaptive zero-aware compression algorithm in which pages are compressed more quickly in batches on the source server and recovered at the destination server; the intent of this approach is to implement live migration of VM's including the local persistent state. The experiment is conducted on a Xen 3.1.0 virtualized environment deployed on several identical servers (2-way quad-core Xeon E5405 CPU's 2 GHz, 8 GB DDR RAM) with Red Hat Enterprise Linux 5.0 as the host and guest OS. The authors compared their proposed approach and algorithm with the expanded Xen 3.1.0 hypervisor, and the total migration time, downtime, and transferred data are reduced by 32%, 27.1%, and 68.8%, respectively. However, due to the low bandwidth available over a WAN, it is still a challenge to migrate VM's quickly using the MECOM approach, because the VM's have a huge amount of memory and disk data.
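As a rough illustration of zero-aware compression (not the exact MECOM algorithm), the sketch below inspects the fraction of zero bytes in each page of a batch and picks a cheap run-length style encoding for mostly-zero pages while falling back to a general-purpose compressor otherwise; the threshold and the choice of zlib are assumptions for the example.

    import zlib

    PAGE_SIZE = 4096
    ZERO_THRESHOLD = 0.8   # assumed: pages that are >80% zero bytes get the cheap encoder

    def rle_zero(page: bytes) -> bytes:
        # Tiny encoder: store (offset, length, bytes) only for the non-zero runs.
        out, i = bytearray(), 0
        while i < len(page):
            if page[i] == 0:
                i += 1
                continue
            j = i
            while j < len(page) and page[j] != 0:
                j += 1
            out += i.to_bytes(2, "big") + (j - i).to_bytes(2, "big") + page[i:j]
            i = j
        return bytes(out)

    def compress_batch(pages):
        batch = []
        for page in pages:
            zero_ratio = page.count(0) / len(page)
            if zero_ratio >= ZERO_THRESHOLD:
                batch.append(("rle", rle_zero(page)))
            else:
                batch.append(("zlib", zlib.compress(page, level=1)))
        return batch

    pages = [bytes(PAGE_SIZE), b"x" * PAGE_SIZE]
    print([(kind, len(data)) for kind, data in compress_batch(pages)])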
Large-scale application systems, like Systems, Applications and Products in Data Processing (SAP) for Enterprise Resource Planning (ERP), consume a large amount of memory, which limits VM migration. For this, Hacking and Hudzia [104] present a system that supports transparent migration of large-scale applications without severely affecting their live services. They used
the delta compression approach for data compression, which reduces the amount of data transferred during migration, and also added an adaptive warm-up data transfer phase. The experiment is performed using two identical servers (HP ProLiant DL580, 4x 3 GHz dual-core Xeon, 32 GB RAM, Debian 4.0 64-bit) with the KVM hypervisor. The results show that the data transfer rate is increased and the service downtime is reduced without introducing additional service disruption or performance overhead compared to the current live migration.

Another work that attempts to improve overall network performance is by Svärd et al. [36]. The authors used a delta-based compression method in order to increase migration throughput and reduce service downtime. They proposed a binary XOR-Based Run Length Encoding (XBRLE) delta compression method for improving migration performance. The Run Length Encoding (RLE) compression approach combines compressed delta pages to optimize network bandwidth utilization; the reverse process is applied at the destination server, where the VM memory pages are recovered by decompression. The modification is made to the KVM hypervisor. They show that whenever VM's are migrated under high workloads or over low-speed connectivity, there is a high risk of service disconnection. The data is recorded in the order of the changes, with versions, so performance is improved by reducing the page dirty rate or through increased network throughput. However, compression and decompression of VM memory pages consume extra resources. The tests are performed on two physical machines (Intel 2.66 GHz Core2 Quad, 16 GB RAM, Ubuntu 9 OS, QEMU/KVM 0.11.5 virtualization environment). The evaluation shows that XBRLE compression is beneficial with a highly compressible working set, over slow networks (i.e., WANs), or when running heavy workloads with large working sets on the VM's.

By migrating CPU- and/or memory-intensive VM's, two problems occur: one is an extended migration downtime that may result in VM failure or service interruption, and the second is a prolonged total migration time that is harmful to overall system performance because considerable network resources are allocated to complete the VM migration. In long-distance migration these problems become more severe if the available network capacity is low. Another work based on RLE compression and dynamic reordering is proposed by Svärd et al. [105], where the authors optimize the total migration time and service downtime through improved network performance. The proposed VM migration technique dynamically reorders the memory pages while migration is in progress, which
reduces the likelihood of page re-transmission. The authors assign a weight to each memory page based on the frequency with which the page has been modified during the migration process, and transfer the pages according to their weight. Consequently, during the migration process, lower-weight memory pages get higher priority and are sent earlier than higher-weight pages. As a result, instead of transferring the memory pages with equal priority, this approach transfers the delta-compressed pages with a weight-based priority, which reduces the total downtime and migration time and increases the migration throughput. The implementation is performed on two machines (3.06 GHz HP G6950, 8 GB RAM, KVM 0.13.0). The critical issue with this approach is that it requires a large cache memory and more CPU cycles for compressing the memory pages.
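A minimal sketch of such weight-based reordering is shown below: pages that were dirtied more often in earlier rounds receive a higher weight and are scheduled later, so rarely modified pages go out first. The weight update rule and data structures are illustrative assumptions, not the authors' implementation.

    from collections import defaultdict

    class PageScheduler:
        def __init__(self):
            self.weight = defaultdict(int)   # page number -> times seen dirty so far

        def record_round(self, dirty_pages):
            # Called once per pre-copy round with the set of pages dirtied in that round.
            for pfn in dirty_pages:
                self.weight[pfn] += 1

        def transmit_order(self, dirty_pages):
            # Low-weight (rarely modified) pages first; they are least likely to be resent.
            return sorted(dirty_pages, key=lambda pfn: self.weight[pfn])

    sched = PageScheduler()
    sched.record_round({1, 2, 3})
    sched.record_round({2, 3})
    sched.record_round({3})
    print(sched.transmit_order({1, 2, 3, 4}))   # [4, 1, 2, 3]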
A hybrid VM migration technique is proposed by Sahni and Varma [106], which exploits a combined method of pre-copy and post-copy migration over Ethernet links. This VM migration technique works in three phases: a preparation phase, a downtime phase, and a resume phase. In the preparation phase, an "access bit scanning" method is used to identify the working set of the VM (the frequently accessed memory) and to introduce flags in the page table that indicate the frequently accessed pages. In the next phase, the CPU register state along with the working set is migrated so that the VM can resume at the destination server. After that, to reduce network page faults, the hypervisor actively pushes the VM memory pages from the source server; in addition, the adaptive pre-paging approach is optimized by increasing the search space around faulted pages. The prototype is implemented in KVM/QEMU. Moreover, the Lempel–Ziv–Oberhumer (LZO) compression technique [107] is used to compress memory pages before transfer. Their proposed technique significantly improves the total migration time, service downtime, and the amount of total data transferred. However, applying a compression/decompression method consumes considerable system resources [102].

The Hybrid Memory Data Copy (HMDC) approach proposed by Hu et al. [28] is based on delta compression. The HMDC approach uses active-push and on-demand paging to improve the VM memory transfer rate, and optimization methods are used to reduce network-bound page faults, which improves application performance. In the first phase of the VM migration process (the pre-copy migration phase), HMDC pushes the VM memory pages in parallel with the dirtied pages in an iterative manner. In the next phase, the bitmap list of dirty pages is transferred to the destination server for synchronization of the VM's. In the last phase, the resumed VM accesses the dirty pages based on the bitmap list. To better utilize the network resources at the source server, RLE-based delta compression is used by HMDC. Nevertheless, migration is a resource-intensive process, and the delta compression approach affects the performance of co-hosted applications because of the high resource sharing. Also, HMDC may face a system crash due to a power outage during the migration phase, so it is not a robust migration scheme. The ERP test case is performed on two machines (Intel 3 GHz 4x dual-core Xeon, 32 GB RAM, Ubuntu 10.4 OS, Linux kernel 2.6.32-24, Xen 3.4.4) with a 2-way set-associative cache of 1 GB; the other test cases are performed on two machines (Intel 2.66 GHz Core2 Quad, 16 GB RAM, Ubuntu 9 OS, Linux kernel 2.6.32-24, Xen 3.4.4) with a 2-way set-associative cache of 512 MB. The implementation results show that HMDC evidently reduces the VM downtime, total migration time, and total migration data compared to the XBRLE and pre-copy approaches.

Jin et al. [102] provide a MECOM based solution to reduce the migration time; the MECOM approach provides fast, stable VM migration while only slightly affecting VM performance. Large-scale application systems like SAP for ERP consume a large amount of memory, which limits VM migration, and for this Hacking and Hudzia [104] present a system that supports transparent migration of large-scale applications without severely affecting their live services. Another work that attempts to improve overall network performance is by Svärd et al. [36], who used a delta-based compression method in order to increase migration throughput and reduce service downtime. Another work based on RLE compression and dynamic reordering is proposed by Svärd et al. [105], where the authors optimize the total migration time and service downtime through improved network performance. A hybrid VM migration technique is proposed by Sahni and Varma [106], which exploits a combined method of pre-copy and post-copy migration over Ethernet links. The HMDC approach proposed by Hu et al. [28] is based on delta compression and uses active-push and on-demand paging to improve the VM memory transfer rate. Jin et al. [103] present a VM migration approach based on the MECOM approach to provide live migration for para-virtualized VM's. Hence, all the above-mentioned works use compression techniques to achieve different performance metrics.

Generic steps of duplication based VM migration
The steps of duplication based VM migration are illustrated in Fig. 8. Migration daemons, or a controller, running in Domain0 are responsible for performing the migration of the running VM's. In the pre-copy phase, the migration controller accesses the shadow page tables in the hypervisor layer to trace the pages modified in the migrated VM's during the pre-copy phase. Shadow paging can also be used to trap accesses to non-existent pages at the target VM. The
shadow page table entries reflect the changes in the dirty bitmap. At the beginning of each pre-copy round, the migration daemon sends the bitmap first; after that, the bitmap is cleared and destroyed, and a new bitmap and shadow page tables are created for the next round. The migration daemon selects the pages for migration based on the dirty bitmap entries, and compression is performed to reduce network overheads. At the target machine, the migration daemon accepts the compressed data and applies the decompression technique to recover the actual pages. Further, the VM memory page mapping is established for the migrated VM by the migration daemon.
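The round structure described above can be summarized with a small sketch: in each round the controller snapshots and clears the dirty bitmap, sends the pages marked in it, and stops when the dirty set is small enough for the stop-and-copy phase. The stopping threshold and helper names are assumptions for illustration.

    def precopy_rounds(read_and_clear_dirty_bitmap, send_pages, max_rounds=30, stop_threshold=64):
        """Generic pre-copy loop driven by a dirty bitmap.

        read_and_clear_dirty_bitmap(): returns the set of page numbers dirtied since the
                                       last call and clears the hypervisor-side bitmap.
        send_pages(pages): transmits the given pages to the destination.
        """
        dirty = read_and_clear_dirty_bitmap()          # round 0: everything is "dirty"
        for _ in range(max_rounds):
            send_pages(dirty)
            dirty = read_and_clear_dirty_bitmap()      # pages re-dirtied while sending
            if len(dirty) <= stop_threshold:
                break
        return dirty                                   # remainder is sent in stop-and-copy

    # Toy example: the dirty set shrinks every round until the VM can be paused briefly.
    history = [set(range(1024)), set(range(200)), set(range(40))]
    leftover = precopy_rounds(lambda: history.pop(0) if history else set(),
                              lambda pages: print("round sends", len(pages), "pages"))
    print("stop-and-copy sends", len(leftover), "pages")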
Context aware VM migration
The migration decision for some of the memory pages depends on the content of the pages. In the following sub-sections, we categorize the existing work into four categories:

1. Dependency aware VM migration
2. Soft page aware VM migration
3. Dirty page aware VM migration
4. Page fault aware VM migration

The categories are based on the inter-dependency among single- or multi-VM pages, zero-content memory pages, the frequency of page dirtying, and network/page fault awareness.

Dependency aware VM migration:
The inter-dependency information is used to find direct or indirect external dependencies among processes during live VM migration; transferring the dependent VM's in groups reduces the network traffic. A comparison of existing dependency aware VM migration approaches is presented in Table 10.

The demand for live migration increases when resources are most scarce, so it is important that the live migration process be as fast and efficient as possible and provide dynamic load balancing, automatic failover, and zero-downtime scheduled maintenance during unscheduled downtime. A dependency-aware live migration approach is proposed by Nocentino et al. [108], who investigate its ability to reduce migration latency and overhead; it can lessen the live migration overhead compared to non-live migration. The proposed approach uses a tainting mechanism that was originally developed for intrusion detection. The inter-dependency information is used to find direct or indirect external dependencies among processes. The development and test environment consists of two Dell PowerEdge 1900 servers, each with two quad-core Intel Xeon 5355 2.66 GHz processors, 4 GB of primary memory, and a system bus speed of 1333 MHz. Both servers are configured with Xen 3.3.0 and use 32-bit Ubuntu 8.0.4 LTS running an SMP kernel (2.6.18.8) as the server OS. The guest OS is a para-virtualized 32-bit Ubuntu
8.0.4 LTS with Linux kernel 2.6.18.8; the VM has 2 GB of main memory and a 10 GB hard disk. The outcomes show that migrating a VM process can be considerably streamlined by selectively applying an efficient protocol to state that does not contain external dependencies.

Babu and Savithramma [109] proposed an idea for pre-copy migration of VM processes and also analyzed process-level performance during migration. They find independent instruction sets at the source server and transfer them to the destination server so that the VM can resume without waiting for the source server to transmit all the instructions. The authors present a novel algorithm that tracks the memory update pattern and stops the migration process when improvements in downtime are unlikely to occur. The implementation results show that it is beneficial for both Ethernet and RDMA/InfiniBand migration. Their work is performed on the KVM 0.14.0 hypervisor and is able to minimize downtime with a low impact on application performance.

A dependency-aware live migration approach is proposed by Nocentino et al. [108], who investigate its ability to reduce migration latency and overhead, whereas Babu and Savithramma [109] proposed an idea for pre-copy migration of VM processes and also analyzed process-level performance during migration.

Soft page aware VM migration:
Soft pages include free pages and kernel status objects, which are already available on the destination server. Avoiding the transfer of such pages decreases the total migration time without influencing the hosted applications. A comparison of existing soft page aware VM migration approaches is presented in Table 11.

The post-copy approach provides a "win-win" by reducing the total migration time while maintaining the liveness of the VM during the migration process. Since application performance degrades considerably due to network page faults during post-copy VM migration, Hines et al. [63] proposed optimized post-copy VM migration approaches, namely demand paging, active push, pre-paging, and Dynamic Self-Ballooning (DSB), to mitigate the network fault problem. The demand paging approach transfers a memory page over the network only when the destination server requests that page. The active push approach pro-actively transfers pages to the destination server based on a temporal locality heuristic. Pre-paging pre-fetches pages at the destination server based on the VM's page request pattern or working set of pages; this significantly reduces page faults by pre-fetching the VM pages likely to be accessed in the future. DSB reduces the network load by reducing the number of pages to be transferred: using the ballooning approach, the VM periodically releases its free memory pages and hands them back to the hypervisor. Post-copy, along with all of these optimizations, is implemented on para-virtualized Linux 2.6.18.8 in a Xen 3.2.1 virtualized environment. Post-copy improves several metrics, including the total migration time, pages transferred, and network overhead. Nevertheless, the performance of the proposed approaches depends on the accuracy of prediction heuristics such as spatial locality. An extensive comparison of pre-copy and post-copy migration is performed by Hines and Gopalan [64] on the Xen hypervisor using different workloads. They used post-copy with an adaptive pre-paging approach to avoid the re-transmission of duplicate pages. Post-copy, along with all of the optimizations, is implemented on para-virtualized Linux 2.6.18.8 and the Xen 3.2.1 virtualized environment. The results show that network-bound page faults are reduced to up to 21% of the VM's pages for large-scale workloads, and the transmission of free memory pages is also avoided using the DSB mechanism.
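Pre-paging can be illustrated with a small sketch: on every network page fault, the source pushes not just the faulted page but a window ("bubble") of neighbouring pages around it, exploiting spatial locality. The window size and data structures are assumptions made for the example, not the implementation of [63].

    def prepaging_window(faulted_pfn, already_sent, total_pages, radius=8):
        """Return the faulted page plus up to 'radius' unsent neighbours on each side."""
        window = []
        for pfn in range(max(0, faulted_pfn - radius),
                         min(total_pages, faulted_pfn + radius + 1)):
            if pfn not in already_sent:
                window.append(pfn)
        return window

    def handle_fault(faulted_pfn, already_sent, total_pages, push):
        bubble = prepaging_window(faulted_pfn, already_sent, total_pages)
        push(bubble)                  # one round trip services the whole neighbourhood
        already_sent.update(bubble)

    sent = set(range(0, 100))         # pages actively pushed so far
    handle_fault(500, sent, 1 << 18, push=lambda pages: print("pushing", pages))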
Migration noise (the resource consumption due to migration overhead at both the source and destination servers) makes it difficult for the live migration process to handle unpredictable increases in workload due to flash crowds, and it also lowers the total throughput of the data center. The Sonic
migration approach proposed by Koto et al. [110] tracks and avoids the transfer of soft pages (free pages and kernel status objects) during the VM migration process. Before triggering a VM migration, the guest kernel informs the hypervisor of the addresses of the soft pages. Sonic migration creates a shared memory region between the hypervisor and the guest kernel so that they can communicate with little CPU intervention. The hypervisor generates a signal for the VM to update the shared memory before initiating the stop-and-copy phase; after that, the VM sends a hypercall to the hypervisor that triggers the stop-and-copy phase. The proposed approach decreases the total migration time without influencing the hosted applications. The implementation, performed on the Xen 4.1.0 hypervisor and Linux 2.6.38, shows that the migration time with the proposed prototype is up to 68.3% shorter than that of Xen-based live migration, and network traffic is reduced by up to 83.9%. However, it generates extra overhead on memory and CPU resources, which affects the applications.

Since application performance degrades considerably due to network page faults during post-copy VM migration, Hines et al. [63] proposed optimized post-copy VM migration approaches, namely demand paging, active push, pre-paging, and DSB, to mitigate the network fault problem. On the other hand, the Sonic migration approach proposed by Koto et al. [110] tracks and avoids the transfer of soft pages (free pages and kernel status objects) during the VM migration process.

Dirty page aware VM migration:
During the migration process, some of the memory pages are continuously updated by the running VM. These dirty pages are resent to the destination host in a future iteration, so some of the frequently accessed memory pages are sent several times, which causes a long migration time. Avoiding the retransmission of frequently accessed pages therefore reduces the total migration time and the amount of memory transferred. A comparison of existing dirty page aware VM migration approaches is presented in Table 12.

VM memory pages are dirtied at a specific rate, called the dirtying rate, while a VM is running. If the dirty rate is higher than the page transfer rate, then the number of dirty pages to re-transfer increases in further iterations, the algorithm cannot complete the dirty page transfer phase, and the only remaining option is to suspend the VM prematurely, which is not an appropriate solution; if many memory pages are left to re-transfer, it causes a long downtime. If the pre-copy migration pattern migrates a write-intensive application, application performance degrades significantly during the migration process. Clark et al. [33] proposed a dynamic rate-limiting method that reduces the application dirty rate in order to prioritize the migration process; consequently, the performance of the running applications is badly impacted. The test migrations are between an identical pair of server-class machines (Dell PE-2650, dual Xeon CPU's 2 GHz, 2 GB RAM, Broadcom TG3 network interfaces) connected through a switched Gigabit Ethernet network, with XenLinux 2.4.27 as the OS in all cases. A theoretical study shows that a VM requests and modifies some memory pages more frequently than others. Their dynamic network-bandwidth adaptation reduces service downtime below discernible thresholds and with minimal impact on the running services during migration.

The pre-copy approach caps the number of copying iterations at a maximum, since the writable working set is not guaranteed to converge across successive iterations, especially when the VM is executing a write-intensive workload. Ma et al. [111] attempt to improve the pre-copy approach on the Xen hypervisor. They use a bitmap approach in which they mark the frequently updated pages, and they focus on a cluster environment for migration: only the CPU and memory state needs to be transferred from the source to the destination server, and there is no need to transfer storage blocks because in a cluster environment a network-accessible storage system such as a Storage Area Network (SAN) or NAS is used. The frequently dirtied pages are recorded in the bitmap in every iteration and are transmitted in the last round, which ensures that frequently updated pages are transmitted only once. This solves the problem of re-transmitting memory pages multiple times and leads to a reduction in the total migration time and transferred data. The test environment consists of two physical machines (2.66 GHz dual-core Intel, 4 GB RAM, Ubuntu 8.04 as host and guest OS, Xen 3.3.0 hypervisor) connected via a Fast Ethernet switch. The results show that the improved pre-copy approach, compared to pre-copy, reduces the total transferred data by 34% and the total migration time by 32.5% on average.

For accurate prediction of migration performance, a model is proposed by Akoush et al. [70] which examines the service interruption for a particular workload. The authors show that the network link capacity and the memory dirty rate are the major factors that most strongly affect migration behaviour. The predicted value of the migration time must be accurate in order to handle dynamic and intelligent VM placement without affecting application performance. Live VM migration behaviour with the pre-copy migration technique is investigated on the Xen hypervisor platform. The link capacity and page dirty rate impact migration performance in a non-linear manner because hard-stop conditions force the migration into the last stop-and-copy phase. The authors also implement Average page dirty rate (AVG) and History based page dirty rate (HIST) simulation models, which are used to predict the performance of pre-copy migration. The experiment is performed on 3 servers (2 Intel(R) Xeon(TM) E5506 CPU's 2.13 GHz, 6 GB DDR3 RAM, dual Gigabit Ethernet,
Citrix XenServer 5.5.0 (Xen 3.3.1), and an Ubuntu 2.6.27-7 kernel). The results show that for high-speed (10 Gbps) network links the Xen migration architecture does not work well; several optimization approaches increase the migration throughput by 125.5% (from 3.2 Gbps to 7.12 Gbps). Both the AVG and HIST models are more than 90% accurate with respect to the actual results.
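The intuition behind such models can be shown with a minimal sketch that simulates the iterative pre-copy rounds under a constant (average) dirty rate: each round transfers the pages dirtied during the previous round, and the loop ends when the remaining dirty data fits in an acceptable downtime. The constant-dirty-rate assumption and the stop conditions below are simplifications for illustration, not the published AVG/HIST models.

    def estimate_precopy(mem_mb, dirty_rate_mbps, link_mbps,
                         max_rounds=30, downtime_budget_s=0.3):
        """Rough pre-copy estimate under a constant average dirty rate (illustrative only)."""
        if link_mbps <= 0:
            raise ValueError("link capacity must be positive")
        to_send_mb = mem_mb                 # round 0 sends the whole RAM
        total_time = 0.0
        for _ in range(max_rounds):
            round_time = to_send_mb * 8 / link_mbps
            total_time += round_time
            dirtied_mb = dirty_rate_mbps * round_time / 8     # dirtied while this round ran
            downtime = dirtied_mb * 8 / link_mbps             # time to flush it with the VM paused
            if downtime <= downtime_budget_s or dirty_rate_mbps >= link_mbps:
                return total_time + downtime, downtime        # give up early if it cannot converge
            to_send_mb = dirtied_mb
        return total_time + to_send_mb * 8 / link_mbps, to_send_mb * 8 / link_mbps

    print(estimate_precopy(mem_mb=4096, dirty_rate_mbps=200, link_mbps=1000))

The geometric shrinking of the per-round data (by the ratio of dirty rate to link rate) is what makes the predicted migration time highly sensitive to both factors, consistent with the non-linear behaviour reported above.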
For commercial applications KVM and Xen work well, but High Performance Computing (HPC) workloads require more CPU cycles during the migration process than can be satisfied with the current KVM rate control and target-downtime heuristics, which leads to drastic service degradation; for HPC applications, statically chosen rate limits and downtime targets are infeasible. Ibrahim et al. [112] show the behaviour of iterative pre-copy live migration for memory-intensive applications (HPC workloads). Without detailed knowledge of the application behaviour, memory-intensive applications are difficult to migrate, and the memory dirty rate of a scientific application is likely to be higher than the migration draining rate. The authors present a novel online algorithm which is able to ensure minimal impact on application performance by controlling the migration based on the speed of memory updates. The experiment is performed on two machines (quad-core, quad-socket UMA Intel Xeon E7310 (Tigerton) 1.6 GHz, Linux kernel 2.6.32.8, KVM 0.14.0 hypervisor) connected by dual InfiniBand and Ethernet networks. The results show that the
algorithm achieves reduced downtime and a low impact on performance.

When the memory dirty rate is higher than the pre-copy migration rate, live migration will fail: during pre-copy migration, application performance degrades if the memory dirty rate is higher than the network transfer capacity. To handle this issue, Jin et al. [113] proposed an optimized pre-copy VM migration technique in which the vCPU frequency is changed to control the memory dirty rate. The proposed technique adjusts the memory dirty rate by controlling the vCPU frequency when the dirty rate is too high for the required service downtime limit, so as to avoid application QoS degradation, and the memory dirty rate becomes lower. The authors also analyzed how the downtime varies with different bandwidth levels while varying the memory dirty rate. This technique adversely affects application performance and can only be used for certain classes of applications, such as gaming applications: when the vCPU frequency is reduced, the game is not stopped, and only the visual objects are affected. Experiments are performed on servers (4 Intel Xeon CPU's 1.6 GHz, 4 GB DDR RAM, Linux 2.6.18, Xen 3.1.0 hypervisor) connected by a 1000 Mbps Ethernet network. The migration barrier is loosened by up to 4 times using the optimized algorithm, and for the migration of the same workload with and without the optimization the VM's downtime is dramatically lowered (by up to 88%) with acceptable overhead.

For live migration, mainly the pre-copy approach is used, in which VM performance is affected by the total migration time and a considerable amount of data is transferred during the migration process. Zaw and Thein [114] presented a framework that extends the pre-copy migration phase with a pre-processing phase to reduce the amount of data transferred. They propose a working-set prediction algorithm for the pre-processing, which combines a Least Recently Used (LRU) cache with a splay tree algorithm and reduces the number of transferred memory pages. Evaluation is performed on a cluster composed of 6 similar servers (two Intel Xeon E5520 quad-core CPU's 2.2 GHz, 8 GB DDR RAM, Linux 2.6.18.8 OS, Citrix Xen 5.6.0 hypervisor). In the implementation on the Xen platform, the proposed framework reduces the total data transferred by up to 23.67% and the total migration time by 11.45% on average with respect to traditional pre-copy migration.

Pre-copy migration performs well for VM's with a lightweight memory footprint, but it cannot guarantee the desired performance if the memory dirty rate is high or the network bandwidth is low; re-sending dirty pages multiple times leads to performance degradation. Yong et al. [115] present a Context Based Prediction (CBP) algorithm and make use of a Prediction by Partial Match (PPM) model to predict the dirty pages of later iterations based on the historical statistics of the dirty page bitmap. It helps to avoid the re-transmission of frequently updated pages by transferring them in the last iteration. The experiment is performed on three identical physical machines (Intel Core 2.93 GHz dual processor, 4 GB RAM) connected via Gigabit Ethernet, with NFS installed on one of them as shared storage. Both the host and guest OS are Ubuntu Server 11.10, and standard KVM 0.14.0 and a modified KVM 0.14.0 with the added CBP code are used for the contrast experiment. The implementation results show that the CBP algorithm achieves a considerable improvement in total migration time, service downtime, and total pages transferred compared with KVM's default migration algorithm.

The pre-copy approach is widely used to minimize both the total migration time and the service downtime, but it is inefficient when the dirty rate of the memory pages is high, which increases the total migration time. The high dirty rate problem is also pointed out by Mohan and Shine [116]. In their method, they reduce the total migration time by sending log records of the modifications instead of re-sending the dirty pages, or by postponing the transmission of frequently dirtied pages; least recently used memory pages are transferred until more than half of the iterations have passed. The VM's are hosted on a cluster of machines (Intel i5 3.10 GHz, 1.88 GB RAM, Ubuntu 10.04 OS) connected via Ethernet links. The model is designed in such a way that the migration time and the service downtime are reduced.

Live VM migration approaches can be of two types: adaptive and non-adaptive. These methods require a considerable amount of CPU and network resources during migration, which critically affects VM performance. This issue requires building an effective approach that considers both the performance of the VM and the resource needs during migration, which can help to select the appropriate VM(s) for migration and also to allocate the appropriate amount of resources for the migration. Nathan et al. [29] investigate a cost-profit analysis for adaptive and non-adaptive VM migration in order to avoid aggressive pre-copy termination. An adaptive approach pro-actively adjusts the memory page transfer rate based on VM behaviour, whereas a non-adaptive approach transfers VM memory pages at the maximum possible transfer speed. They combine both approaches and name the result the Improved Live Migration technique (ILM), which accounts for the application's high dirty rate and the limited resources of the server during the VM migration process. ILM optimizes the performance of un-managed VM migrations by triggering the stop-and-copy phase under certain conditions: (1) if the number of iterative rounds has reached a pre-defined threshold, (2) if the VM memory has been copied three times, (3) if the dirty rate becomes lower than a pre-defined threshold, or (4) if the dirty rate in the previous round was higher than a pre-defined threshold. To optimize
the communication bandwidth, the ILM approach eliminates the free memory pages while the migration is in progress, and it also improves the VM migration time and service downtime. The migration workloads are run on five physical machines (4-core Intel i5 760 CPU 2.8 GHz, 4 GB RAM, Ubuntu Server 10.04 64-bit as both host and guest OS), all connected by a 1 Gbps D-Link DGS-1008D switch; three machines act as controllers, whereas the other two machines are installed with the Xen 4.0.1 hypervisor. The ILM technique reduces the network traffic by 14-93% and the migration time by 34-87% compared to the vanilla live migration techniques.
In pre-copy approach memory pages are transfer num- acteristic Based Compression (CBC) algorithm reduces
ber of time that increases total migration time and net- both the downtime and migration time. Experiments are
work traffic whereas in post-copy approach leads to a lot performed on CloudSim simulator. Proposed algorithm
of page fault and high service downtime. A Three-Phase reduces migration time in both high-dirty page rate and
Memory (TPM) transfer approach proposed by Yin et al. low-dirty page rate.
[53], determine that the memory pages are transferred Clark et al. [33] proposed dynamic rate limiting method
at most twice during the whole migration process. This that reduces the application dirty rate for prioritizing
approach ensures that memory page fault occurs only for the migration process. Fei Ma et al. [111] attempt to
fraction of memory that leads to lessening total migra- improve pre-copy approach on Xen hypervisor by avoid-
tion time. The TPM transfer having full memory copy, ing re-transmission of memory pages multiple times. For
dirty bitmap, and dirty page moving phases for entire accurate prediction of migration performance, a model is
VM memory migration. In the full memory copy phase, proposed by Akoush et al. [70], which examines the ser-
transfer all the VM memory pages from source to des- vice interruptions for a particular workload. Ibrahim et al.
tination server are transferred without interrupting the [112] show the behavior of iterative pre-copy live migra-
running applications even pages are continuously mod- tion for memory-intensive applications (HPC workloads)
ified. In the next dirty bitmap copy phase, the VM at because HPC applications statically choose rate limits and
source server is suspended and then all the recorded dirty downtime which is infeasible. During the process of pre-
memory pages are transferred to the destination server. copy migration, application performance degrades if the
In the last dirty pages copy phase, the VM at the des- memory dirty rate is higher than the network transfer
tination server is resumed. Active push and on-demand capacity. To handle this issue, Jin et al. [113] proposed
paging approaches are used to fetch faulty pages from an optimized pre-copy VM migration technique, for this
the source server. Implementation is performed on Xen vCPU frequency is changed to control memory dirty
4.1.4 hypervisor and evaluation is performed under var- rate. Zaw and Thein [114] presented a framework that
ious memory-intensive workloads. Obtain results show extend the pre-copy migration phase by including the
that TPM approach can considerably reduce total pages pre-processing phase to reduce the data transfer amount.
transferred and total migration time. This work is effective Further re-sending of dirty pages multiple times leads to a
for automatic load balancing. performance degradation issue. For this Yong et al. [115]
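The three phases can be illustrated with a small, self-contained simulation in which memory is modelled as a dictionary and network transfer as copying entries. This is a sketch of the idea only, under the assumption that dirty pages are tracked in a bitmap; it is not the implementation of Yin et al. [53].

```python
# Illustrative simulation of a three-phase (TPM-style) memory transfer.

def tpm_transfer(source_mem, pages_dirtied_during_full_copy):
    dest_mem = {}

    # Phase 1: full memory copy while the VM keeps running; pages modified
    # in the meantime are only recorded in a dirty bitmap, not re-sent yet.
    for page_no, data in source_mem.items():
        dest_mem[page_no] = data
    dirty_bitmap = set(pages_dirtied_during_full_copy)

    # Phase 2: suspend the VM and ship only the dirty bitmap, so the
    # destination knows which of its copies are stale.
    stale_at_destination = dirty_bitmap

    # Phase 3: resume at the destination; stale pages are pulled on demand or
    # pushed in the background, so each page travels at most twice in total.
    for page_no in stale_at_destination:
        dest_mem[page_no] = source_mem[page_no]   # second (and last) transfer
    return dest_mem

# Example: an 8-page VM in which pages 2 and 5 were dirtied during the full copy.
mem = {i: f"contents-{i}" for i in range(8)}
print(tpm_transfer(mem, {2, 5}) == mem)   # True: the destination converges
```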
Few of the existing live migration techniques can be applied to delay-sensitive web service applications or to a VM backup process that must complete within a specific time. The pre-copy migration technique requires frequently varying transfer bandwidth, which is a critical problem for network operators, and accurate prediction of the migration time is not possible. Zhang et al. [117] theoretically analyze the bandwidth required to guarantee a given total migration time and service downtime. The authors first assume that the dirty distribution of VM memory pages follows a deterministic distribution; the bandwidth is then determined under the condition that the dirty distribution obeys a Bernoulli distribution. They further assume that the dirty frequency varies from page to page and that the Cumulative Distribution Function (CDF) of the dirty page frequency is a reciprocal function. The experiment is conducted on two Dell servers (Linux kernel 2.6.18.8-xen, Xen 3.4.3 hypervisor) connected by an HP ProCurve 2910al Ethernet switch. The observed results show that the reciprocal-based model characterizes the dirty page rate well and also provides a bounded delay guarantee.
In live migration the VM is kept continuously powered up, which is not the case in offline migration. Desai and Patel [118] proposed an approach that further modifies the existing pre-copy algorithm to reduce migration time and downtime at both low and high dirty page rates. In addition, compressing the whole data set with a Characteristic Based Compression (CBC) algorithm reduces both the downtime and the migration time. Experiments are performed on the CloudSim simulator; the proposed algorithm reduces migration time at both high and low dirty page rates.
Clark et al. [33] proposed a dynamic rate-limiting method that reduces the application dirty rate in order to prioritize the migration process. Ma et al. [111] attempt to improve the pre-copy approach on the Xen hypervisor by avoiding the re-transmission of memory pages multiple times. For accurate prediction of migration performance, a model is proposed by Akoush et al. [70] which examines the service interruptions for a particular workload. Ibrahim et al. [112] study the behavior of iterative pre-copy live migration for memory-intensive (HPC) applications, because statically chosen rate limits and downtime are infeasible for HPC workloads. During pre-copy migration, application performance degrades if the memory dirty rate is higher than the network transfer capacity; to handle this issue, Jin et al. [113] proposed an optimized pre-copy migration technique in which the vCPU frequency is adjusted to control the memory dirty rate. Zaw and Thein [114] presented a framework that extends pre-copy migration with a pre-processing phase to reduce the amount of data transferred. Since re-sending dirty pages multiple times degrades performance, Yong et al. [115] present the CBP algorithm, which uses a PPM model to predict the dirty pages of later iterations from the historical statistics of the dirty page bitmap. The high dirty rate problem is also addressed by Mohan and Shine [116], who reduce the total migration time by sending log records of modifications instead of re-sending the dirty pages, or by postponing the transmission of frequently dirtied pages. Nathan et al. [29] investigate a cost-profit analysis for adaptive and non-adaptive VM migration to avoid aggressive pre-copy termination. The Three-Phase Memory (TPM) transfer approach proposed by Yin et al. [53] ensures that memory pages are transferred at most twice during the whole migration process. Because accurate prediction of migration time is otherwise not possible, Zhang et al. [117] theoretically analyze the bandwidth that guarantees the total migration time and service downtime, and Desai and Patel [118] propose a modified pre-copy algorithm that reduces migration time and downtime at both low and high dirty page rates. Hence, all the above works use different approaches to solve the migration problem.

Page fault aware VM migration
In post-copy migration, if any failure occurs during the migration then recovery may not be possible. Checkpointing, recovery & trace, and replay techniques can be used to enable fast and transparent VM migration. A comparison of existing page fault aware VM migration approaches is illustrated in Table 13.
In the literature, existing migration approaches mainly focus on transferring the VM run-time state using the pre-copy approach. The pre-copy approach synchronizes the VM state at both the source and destination sides, which increases network traffic, application downtime, and migration cost, especially for memory-intensive workloads. Storing the traces of non-deterministic events in a log file is called checkpointing; the log can be used at a later time to re-execute a past or failed process, which is helpful for proactive fault tolerance and debugging. Liu et al. [119] implemented CR/TR-Motion, an approach that reduces total migration time, service downtime, and network bandwidth consumption. They use checkpointing, recovery & trace, and replay techniques to enable fast and transparent VM migration. The discrete events occurring in the system are monitored by a trace daemon, which generates the log files. In the first step, the checkpoint is transferred to the destination server. The source side then iteratively generates logs and transfers them to the destination server. This process continues until the logs are completely transferred, and then the process is suspended at the source side and resumed at the destination side. The main advantage of their work is that the amount of trace data is smaller than the amount of data transferred in traditional pre-copy migration; however, the approach does not perform well in multi-core environments. Experiments are performed on similar physical machines (AMD Athlon 3500+ processor, 1 GB DDR RAM, a modified Linux 2.4.20 as host OS, RHEL AS3 with Linux kernel 2.4.18 as guest OS), with an Intel Pro/1000 Gbit/s NIC used to transfer the VM images. The results show that CR/TR-Motion can drastically reduce migration overheads compared with the pre-copy algorithm: service downtime, total migration time, the data needed to synchronize the VM state, and the application performance overhead are reduced by up to 72.4%, 31.5%, 95.9%, and 8.54%, respectively.
Further, Liu and Fan [120] proposed a hybrid technique for recovering the system using checkpointing, recovery & trace, and replay together with CPU scheduling. The execution log files of the source VM are copied, but dirty pages are not copied in this approach, which reduces the amount of transferred data; the algorithm also reduces downtime through CPU scheduling. During the migration process, the checkpointing logs are transferred in the first round and log files are transferred in the iterative rounds, so the log of the previous round is sent in the next round. Experiments are performed on two identical physical machines (Intel Atom D410 processor, 2 GB DDR RAM, CentOS 5.5 with kernel 2.6.18 as host and guest OS, Xen 3.0.1 hypervisor) connected by a 1000 Mb/s Ethernet network. The results show that, compared with a pre-copy algorithm, the proposed hybrid technique can reduce total migration time and service downtime by up to 43.84% and 62.12%, respectively.
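The checkpoint-plus-log idea behind such CR/TR-Motion-style approaches [119, 120] can be sketched with a small, self-contained toy in which the "VM state" is a number and the log is a list of non-deterministic events; it only illustrates the control flow and is not the authors' implementation.

```python
# Toy illustration of checkpoint + trace/replay migration: instead of re-sending
# dirty memory, the source ships a log of events that the destination replays.
import random

def run_workload(state, n_events):
    """Produce a log of non-deterministic events and apply them to the source state."""
    log = [random.randint(-5, 5) for _ in range(n_events)]
    return state + sum(log), log

source_state = 100
dest_state = source_state          # 1. one-time checkpoint transfer

# 2. Iterative phase: the source keeps running; each round ships only the log.
pending_events = 64
while pending_events > 4:          # stop when a round's log is small enough
    source_state, log = run_workload(source_state, pending_events)
    for event in log:              # the destination replays the log
        dest_state += event
    pending_events //= 2           # logs shrink as the rounds get shorter

# 3. Suspend, replay the final short log, and resume at the destination.
source_state, log = run_workload(source_state, pending_events)
dest_state += sum(log)
print(source_state == dest_state)  # True: the states are synchronized via logs
```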
It is still a challenging issue to dynamically optimize VM packing on the hosts, due to frequently changing resource demands. A lightweight KVM hypervisor extension was proposed by Hirofuchi et al. [121] for migrating VMs within a LAN. The KVM extension has an additional driver in the guest OS that seamlessly processes the on-demand memory access requests generated by the VM. The CPU state and device states are transferred to the destination server using QEMU before the VM memory content is moved, so that the VM can make continuous progress. Based on the migrated CPU and device states, the destination server uses a QEMU hypervisor to resume the migrant VM. To service the application's I/O requests, the kernel triggers the page fault handler of the Virtual Memory (VMEM) driver, which sends a request for the faulty pages to be transferred to the destination server, but only if the pages are not already available there. In addition, QEMU starts background threads that actively push pages to the destination server. As a result, virtual-memory-based migration can work independently, without requiring another driver to handle the migrant VM. The implementation runs on two (source and destination) physical machines (2-core processor, 4 GB RAM, KVM-84/88 with qemu-kvm-0.11) connected by two GbE network segments, and the SPECweb2005 benchmark [122] is used for performance measurements of web servers. The experimental results show that a heavily loaded VM is successfully migrated to the destination server within 1 s. The proposed mechanism moves the VM state (including memory pages) to the destination server faster than the pre-copy approach; it also reduces the number of pages transferred by using the available network bandwidth effectively. Another work observes that, after migration, a guest OS may fail to boot while loading device drivers or adjusting the device configuration. To solve these problems, Ashino et al. [123] presented the Estimation of Directions of Arrival by Matching Pursuit (EDAMP) method for VM migration between heterogeneous hypervisor implementations; it only modifies files and does not destroy the device driver.
Checkpointing is used at a later time for the re-execution of a past or failed process, so it is helpful for proactive fault tolerance and debugging. For such scenarios, Liu et al. [119] implemented CR/TR-Motion, which reduces total migration time, service downtime, and network bandwidth consumption using checkpointing, recovery & trace, and replay techniques. Further, Liu and Fan [120] proposed a hybrid technique for recovering the system using checkpointing, recovery & trace, and replay with CPU scheduling, and a lightweight KVM hypervisor extension was proposed by Hirofuchi et al. [121] for migrating VMs within a LAN. Hence, the live VM migration problem has been addressed using several different approaches.

Generic steps of context aware VM migration
The basic steps of context aware VM migration are illustrated in Fig. 9. It uses the pseudo-paging approach: all pageable memory in the VM is swapped out to an in-memory pseudo-paging device within the guest kernel. At the source machine, page fault detection and servicing are implemented through two loadable kernel modules, one inside Dom0 and the other inside the migrating VM. These modules use the MemX [124] system, which provides transparent remote memory access at the kernel level for both Xen VMs and native Linux systems. When the migration starts, the migrating VM's memory pages are swapped out to a pseudo-paging memory exposed by the MemX module in the VM. The swapping is done either with the Machine Frame Number (MFN) exchange mechanism, which transfers the ownership of the pages to a co-located VM, or by remapping the pseudo-physical addresses of the VM pages with zero copying overhead. The latter is more effective because it makes fewer calls into the hypervisor, which leads to lower overhead. After the swapping process at the source machine, the pages are mapped into Dom0. During the service downtime, the CPU state and the non-pageable memory are transferred to the target machine. The non-pageable memory transfer overhead can be considerably reduced via any of the hybrid approaches.
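The page servicing used after the execution switch (on-demand fetching of faulty pages combined with active background push, as in the Hirofuchi-style extension above) can be sketched as a small runnable toy; the fault hook and the "network" are stand-ins for illustration only.

```python
# Toy illustration of post-copy page servicing: the resumed VM fetches missing
# pages on demand, while a background thread keeps pushing the remaining ones.
import threading

source_pages = {i: f"page-{i}" for i in range(1024)}   # memory left at the source
dest_pages = {}                                        # destination starts empty
lock = threading.Lock()

def fetch(page_no):
    """Stand-in for a network fetch of one page from the source host."""
    with lock:
        dest_pages.setdefault(page_no, source_pages[page_no])

def access(page_no):
    """Called on a guest memory access: fault and demand-fetch if the page is missing."""
    if page_no not in dest_pages:      # page fault
        fetch(page_no)
    return dest_pages[page_no]

def background_push():
    """Actively push the remaining pages so faults become increasingly rare."""
    for page_no in source_pages:
        if page_no not in dest_pages:
            fetch(page_no)

pusher = threading.Thread(target=background_push)
pusher.start()
print(access(42))                                # demand-fetched if not pushed yet
pusher.join()
print(len(dest_pages) == len(source_pages))      # True: all pages eventually arrive
```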
Threats in live virtual machine migration
Live migration is a relatively new idea and its security aspects have not been fully explored. The popularity of cloud computing has caught the attention of many hackers, who keep finding new ways to attack either cloud services or customers' data. These attacks range from Denial-of-Service (DoS) attacks to Man-In-The-Middle (MITM) attacks. Such threats in live VM migration discourage many sectors, such as finance, healthcare, and government, from taking advantage of live VM migration. Hence, this is one of the critical factors that needs examination when VM migration is being considered. Various active and passive attacks are possible while a migration is in progress; some of them are discussed below:

1. Bandwidth stealing: An attacker may steal network bandwidth by taking control of a source VM and migrating it to the destination.
2. False advertising: An attacker may advertise false resource information over the network to attract others into migrating their VMs towards the attacker's side.
3. Passive snooping: An attacker tries to gain unauthorized access to data.
4. Active manipulation: An attacker tries to modify the data travelling from one server to another [125].

To detect and prevent such attacks, several cryptographic algorithms are available for the encryption and decryption of data. The following steps must be considered when a migration is initiated, both at the source and at the destination server (a sketch follows the list):

1. The person who initiates the migration should be authenticated.
2. Security among the various entities must be preserved at every step.
3. The entire migration information should be kept confidential.
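As an illustration of these three checks, the sketch below gates a migration request on initiator authentication, a mutually authenticated channel, and confidentiality. The identity store, the policy table, and the channel attributes are hypothetical stand-ins, not part of any specific hypervisor interface.

```python
# Illustrative pre-migration gate covering the three checks listed above.

class MigrationRefused(Exception):
    pass

USERS = {"ops-admin": "s3cret"}          # stand-in identity store (illustration only)
POLICY = {("ops-admin", "migrate")}      # who may migrate VMs

def authorize_migration(initiator, password, channel, vm_id):
    # 1. The person who initiates the migration must be authenticated.
    if USERS.get(initiator) != password:
        raise MigrationRefused("initiator could not be authenticated")
    # 2. Security between the entities must be preserved at every step:
    #    require a mutually authenticated, integrity-protected channel.
    if not (channel.get("encrypted") and channel.get("peer_verified")):
        raise MigrationRefused("channel is not mutually authenticated")
    # 3. The migration information must remain confidential end to end.
    if channel.get("cipher") is None:
        raise MigrationRefused("refusing to send VM state in clear text")
    if (initiator, "migrate") not in POLICY:
        raise MigrationRefused("initiator is not authorized to migrate " + vm_id)
    return True

channel = {"encrypted": True, "peer_verified": True, "cipher": "AES-256-GCM"}
print(authorize_migration("ops-admin", "s3cret", channel, "vm-17"))   # True
```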
Security concern in VM migration
Live VM migration introduces a number of security threats in CDCs that may be directed at hypervisors such as KVM, Xen, and VMware. Hypervisors are often unable to protect sensitive information during migration and are vulnerable to attack; an attacker may gain complete control of a hosted VM and of the VMM. Much of the research on secure VM migration has focused on offline migration, while live VM migration still needs to be actively investigated. Live VM migration suffers from many vulnerabilities and threats which can easily be exploited by attackers. Anala et al. [125], John et al. [126], and Sulaiman and Masuda [127] demonstrated live migration threats. Based on their demonstrations, live migration attacks can target one of three different classes: (1) the control plane, (2) the data plane, and (3) the migration module. This is illustrated in Fig. 10.

Control plane
The migration process at both the source and destination sides is handled by a system administrator, who has all the controls and the authority to perform secure VM migration operations (e.g. creating a new VM, migrating a VM, terminating a running VM, defining the VM's settings, etc.). This prevents spoofing and replay attacks, and no other user can perform the migration process if the access control of the administrator's interface is secure. The communication mechanism used by the hypervisor should also be authenticated and must be resistant against any tampering [126]. A lack of security in the control plane may allow an attacker to exploit the live migration operation in different ways:

1. Denial-of-Service (DoS) attack: The attacker creates many VMs on the host OS just to overload it, so that it is unable to accept any more migrated VMs.
2. Unnecessary migration of VMs: The attacker overloads the host OS with unneeded VMs. This forces execution of the dynamic load balancing feature, which migrates some VMs to balance the load.
3. Incoming migration control: The attacker can initiate an unauthorized migration request, so that a VM is migrated from a secure source physical machine to a compromised attacker machine. This may result in the attacker gaining full control over the legitimate VM.
4. Outgoing migration control: The attacker can initiate VM migrations and overuse the cloud resources, which can lead to failure of the VM.
5. Disrupting the regular operations of the VM: An attacker may migrate a VM from one host to another with no goal other than to interrupt the operations of the VM.
6. Attack on the VMM and VM: The attacker migrates a VM carrying malicious code to a host server that holds the target VM. This code exchanges information with the VMM and the target VM through a covert channel, which compromises the confidentiality of the host server by leaking the target VM's information.
7. Advertising false resources: The attacker advertises false resource availability to the target VM, for example a large number of unused CPU cycles. This results in the migration of VMs to a compromised hypervisor.

Data plane
Several memory contents (e.g., kernel state and application data) are transferred from the source to the destination server in the data plane. The attacker may passively snoop and steal, or actively modify, confidential information; thus, the transmission channel must be secured and protected against various active and passive attacks. In the VM migration protocol, all migrated data are transferred in the clear without any encryption. Hence, an attacker may place himself in the transmission channel to perform a man-in-the-middle attack using techniques such as Address Resolution Protocol (ARP) spoofing, Domain Name System (DNS) poisoning, or route hijacking [126]. A man-in-the-middle attack can be one of two types, passive or active:

1. Passive attack: The attacker observes the transmission channel and other network streams to obtain information about the migrating VM. The attacker gains information from the VM's migrating memory (e.g., passwords, keys, application data, already-authenticated packets, or messages containing sensitive data) [125].
2. Active attack: This is the most serious attack, in which the attacker manipulates the memory contents of the migrating VM (e.g., the authentication service and the pluggable authentication module in live migration) [128].

Migration module
The migration module is a software component in the VMM that allows live migration of VMs. A guest OS can communicate with the host system and vice versa; moreover, the host system has full control over all VMs running on its VMM. If the attacker is able to compromise the VMM via its migration module, then the integrity of all guest VMs running above this VMM is affected, and any VM that later migrates to the affected VMM will also be compromised. A VM with a low security level can be exploited using attack techniques in the migration module: when an attacker discovers such a VM during the migration process, they will attempt to compromise it, and can do so easily. They can then use it as a gate to compromise other VMs with higher security levels on the same host [129]. Moreover, once a way into the system has been identified, the attacker will be able to attack the VMM itself.

Security requirement in VM migration
There are security requirements that must be implemented in live VM migration to raise the security level in the classes above and protect both VMs and host servers from any attack before, during, and after the live migration process. Aiash et al. [130] and John et al. [126] discussed security requirements in live VM migration. The following security requirements should be implemented in live VM migration: (1) defining access control policies, (2) authentication between the source server and the destination server, (3) non-repudiation by the source and destination servers, (4) data confidentiality while migrating a VM, (5) data confidentiality before and after migration, and (6) data integrity and availability.
Security requirements to mitigate attacks in the Control Plane and the Data Plane
1. Defining access control policies: By defining control policies on the control plane, VMs and the host server are protected from unauthorized users. If attackers can compromise the interface console, they might perform unauthorized activities such as migrating a VM from one host to a legitimate target VMM [130].
2. Authentication between source and destination server: Implement strong procedures for authentication and identification in order to prevent unauthorized users from entering the administrators' interface.
3. Data integrity and availability: This requirement stops attacks such as a denial-of-service attack, which causes unavailability of either the source host or the receiving host. This can be achieved by applying strict access control policies.
4. Data confidentiality while migrating the VM: In order to prevent a man-in-the-middle attack from obtaining any sensitive information, all data must be encrypted during migration.

Security requirements to mitigate attacks on the Migration Module
1. Authentication between source and destination server: A strong authentication mechanism must be used between the source and destination servers. A firewall can also be used for additional security [130].
2. Non-repudiation by source or destination server: The source and destination servers must observe the system's activities and record all migration activities [130].
3. Data confidentiality before and after migration: Data should be encrypted at both the source and destination servers, so that whenever an attack happens on either the guest VM's data or the host's data, the original information is not affected.
4. Data integrity and availability: The virtualization software must be kept updated so that it is protected from vulnerabilities such as heap overflow and stack overflow [130].

Existing solutions for providing security in VM migration
1. Isolating the migration network: The Virtual LAN (VLAN) that contains the source and destination servers isolates the migration traffic from other traffic on the network. This reduces the risk of exposing migration information to the whole network.
2. Network Security Engine Hypervisor: It extends firewall and IDS/IPS functionality to the hypervisor level, which secures the migration from external attack and raises an alarm when an intrusion is detected on the network.
3. Secure VM-vTPM (Virtual Trusted Platform Module) migration protocol: The protocol includes steps such as authentication, attestation, and then different stages of data transfer. In the first step, both parties authenticate each other for further communication. The source starts transferring the VM to the destination only after verification of its integrity. The migrating VM files are handled by the vTPM, where they are encrypted and then transferred to the destination; after all files of a VM have been transferred, the vTPM is deleted.
4. Improved vTPM migration protocol: This protocol is an improved version of the vTPM protocol that also includes a trust component. It first performs authentication and integrity verification as in the vTPM protocol; after that, the source and destination servers negotiate keys using the Diffie-Hellman key exchange algorithm. The migrating VM files are protected with keys and encryption methods that enable the secure transfer of the VM files.
5. SSH (Secure Shell) tunnel: An SSH tunnel is established between the source and destination proxy servers for secure migration, hiding the details of the source and destination VMs [131] (a minimal sketch is given below).
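As a minimal illustration of solution (5), the sketch below opens an SSH port-forwarding tunnel with the standard OpenSSH client and lets the hypervisor send its migration stream through the forwarded local port. The host names and port numbers are placeholders, and the exact way the hypervisor is pointed at the tunnel depends on the platform in use.

```python
# Sketch: forwarding migration traffic through an SSH tunnel between the
# source-side and destination-side proxies (hosts and ports are placeholders).
import subprocess

LOCAL_PORT = 6789                     # migration traffic is sent to localhost:6789
DEST_PROXY = "dst-proxy.example.org"  # destination-side proxy reachable over SSH
DEST_MIGRATION_PORT = 49152           # migration listener behind the proxy

def open_migration_tunnel():
    # -N: no remote command; -L: forward localhost:LOCAL_PORT to the destination's
    # migration port through the encrypted SSH connection.
    return subprocess.Popen([
        "ssh", "-N",
        "-L", f"{LOCAL_PORT}:localhost:{DEST_MIGRATION_PORT}",
        DEST_PROXY,
    ])

tunnel = open_migration_tunnel()
# ... point the hypervisor's migration target at localhost:LOCAL_PORT ...
# tunnel.terminate() once the migration completes.
```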
Research challenges
Migration must be seamless in order to provide continuous service: live migration moves the VM without disconnecting its clients, so the performance of live VM migration must be very high. Current techniques face many challenges when migrating memory- and data-intensive applications, such as network faults, consumption of bandwidth and cloud resources, and overloaded VMs. The common challenges that hamper live migration are: the transfer rate problem, the page re-send problem, the missing page problem, migration over WAN networks, migration of VMs running larger applications, the resource availability problem, and the address-warping problem.

Transfer rate
During the iterative phase of pre-copy live VM migration, the VM's pages are sent over the network between the corresponding servers. As the source VM is running during this process, its memory contents are constantly updated. Because memory bandwidth is higher than network bandwidth, there is a high risk of memory pages being dirtied at a faster rate than they can be transferred over the network. As a result, these dirty pages are transferred repeatedly while the amount of remaining dirty pages does not decrease. This means that the migration process gets stuck in the iterative phase and may have to be forced into the stop-and-copy phase with a large number of dirty pages remaining to transfer. As the VM is suspended during the stop-and-copy phase, this leads to an extended migration downtime and a prolonged total migration time. Even in less severe cases, where the algorithm does not need to be forced into the stop-and-copy phase, total migration time and service downtime are still extended to some degree.
The transfer rate problem poses a high risk to continuous service operation, as an extended migration downtime can lead to interruption of services and possibly disconnection of clients, lost database connections, or other issues. Even if the migration downtime is short enough for network connections not to drop (typically a few seconds for TCP connections over LANs or the Internet), timing errors, missed triggers, and similar effects might occur and decrease the application's stability and performance. Svärd et al. [105] show experimentally that, for live migration of enterprise applications, downtimes as low as one second caused unrecoverable application problems.

Page Re-send
Live migration of a VM requires significant CPU and memory resources, although the heaviest load is put on the network. As a VM can easily have several gigabytes of RAM, a large amount of data is transferred during the live migration process. This problem is amplified in pre-copy migration, as the source VM is running during the iterative phase and pages that have already been transferred are often dirtied again. Since the state of the destination VM, once resumed, must be an exact copy of the source VM's state, these pages must be re-sent.
The page re-send problem was first discussed by Clark et al. [33] and can lead to excessive resource consumption: only the final version of a page is used, and re-sending pages during migration consumes both network and CPU resources. Furthermore, the page re-send problem is a challenge to the predictability criterion, as the total number of pages to be re-transferred is not known beforehand, making it difficult to estimate how long a migration will take to complete.
Svärd et al. [105] show that pre-copy migration is affected by both the page re-send and transfer rate problems. These problems are related, as the transfer rate is one cause of page re-sends; however, factors such as memory size, page dirtying rate, and memory write patterns also affect the number of page re-sends.
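The interplay between dirty rate, bandwidth, re-sent data, and downtime can be illustrated with a back-of-the-envelope model of the iterative phase. This is a simplified illustration under the assumption of a uniform dirty rate, not the behavior of any particular system.

```python
# Back-of-the-envelope model of iterative pre-copy: how much data is re-sent and
# how long the final stop-and-copy phase takes for a given dirty rate and bandwidth.

def simulate_precopy(mem_bytes, dirty_bytes_per_s, bw_bytes_per_s,
                     downtime_target_s=0.3, max_rounds=30):
    to_send = mem_bytes
    total_sent = 0.0
    for rounds in range(1, max_rounds + 1):
        round_time = to_send / bw_bytes_per_s
        total_sent += to_send
        # Pages dirtied while this round was being transferred must be re-sent.
        to_send = min(mem_bytes, dirty_bytes_per_s * round_time)
        if to_send / bw_bytes_per_s <= downtime_target_s:
            break
    downtime = to_send / bw_bytes_per_s        # final stop-and-copy phase
    return rounds, total_sent, downtime

# A 4 GiB VM, 100 MB/s of dirtied memory, 1 Gbps (~125 MB/s) of network bandwidth:
print(simulate_precopy(4 * 2**30, 100e6, 125e6))
# As the dirty rate approaches (or exceeds) the bandwidth, the rounds stop
# shrinking and the migration must be forced into stop-and-copy with a large
# residual dirty set, which is exactly the transfer rate problem described above.
```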
Missing Pages
Post-copy live migration algorithms resume the destination VM before the memory contents have been completely transferred to the destination server. Once execution has switched to the destination side, the missing (faulty) pages are transmitted from the source over the network. Due to low bandwidth availability and high migration latency, there is a performance penalty associated with accessing faulty memory pages. This residual dependency also imposes a high risk of performance degradation for the hosted applications after the VM execution has switched to the destination server; if the performance degradation is severe, the transparency and continuous service objectives may not be met. The missing page problem also implies a loss of robustness, as it is not possible to fall back to the source VM if the live migration fails, e.g. due to network disconnects that occur before the entire RAM content has been transferred. As the destination VM is not started until all memory pages are present, pre-copy algorithms are not affected by the missing page problem.

Migration over WAN network
The existing VM migration techniques cannot deal efficiently with VM migration over a WAN, where the source and destination servers are part of different networks [95]. Live VM migration across a WAN is a big challenge for the following reasons:

1. Migrating network and storage connections: A TCP connection survives VM migration, and its application continues without disruption of network connections, only if the source and destination servers are on the same sub-net; otherwise the migration process also has to deal with connection breaks when migration occurs across sub-nets.
2. Migrating storage content: Migration of a large virtual disk over a WAN takes a long time; hence the volume of data transferred over the WAN is also critical.
3. Persistent state remaining at the source side: The relocated VM accesses the earlier, centralized storage repository over the WAN; network latencies and considerable bandwidth usage then result in poor I/O performance.

Migration of VM running larger applications
There are many challenges in current migration technology, and the big issue appears when it is applied to large workload application systems such as SAP ERP. These applications consume a huge amount of memory and storage capacity that cannot be transferred seamlessly, because doing so generates service interruption. The limitations with larger applications are therefore disconnection of service, interruption of service, difficulty in maintaining consistency and transparency, and unpredictability and rigidity in VM loads [104].

Resources availability
Resource availability is most important when a VM is migrated. Live VM migration consumes CPU cycles and I/O bandwidth between the corresponding servers. If some CPU capacity is needed but not available, the migration time increases; if the necessary resources are not available at all, the migration cannot be completed. Resource availability therefore affects both migration performance and total migration time. It can also help in making better decisions, such as when to migrate a VM and how to handle resource allocation on the servers [69].
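A simple pre-migration admission check along these lines might look as follows; the statistics source, the field names, and the thresholds are assumptions for illustration only.

```python
# Illustrative pre-migration admission check: defer migration when the hosts
# lack CPU, memory, or bandwidth headroom (thresholds are arbitrary examples).

def can_migrate_now(src, dst, vm_mem_bytes, max_migration_time_s=60.0):
    cpu_ok = src["cpu_idle_fraction"] > 0.10 and dst["cpu_idle_fraction"] > 0.15
    spare_bw = min(src["free_bw_bytes_per_s"], dst["free_bw_bytes_per_s"])
    bw_ok = spare_bw > 0 and (vm_mem_bytes / spare_bw) < max_migration_time_s
    mem_ok = dst["free_mem_bytes"] > vm_mem_bytes
    return cpu_ok and bw_ok and mem_ok

src = {"cpu_idle_fraction": 0.30, "free_bw_bytes_per_s": 80e6, "free_mem_bytes": 0}
dst = {"cpu_idle_fraction": 0.40, "free_bw_bytes_per_s": 120e6, "free_mem_bytes": 6 * 2**30}
print(can_migrate_now(src, dst, vm_mem_bytes=4 * 2**30))   # True (about 54 s at 80 MB/s)
```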
Address warping
The address-warping problem is also one of the critical issues when migrating over a WAN. The address of the VM warps from the source to the destination server, which complicates the status of the connected LANs and WAN networks. It is therefore difficult to move real-time applications running on a VM, such as online games or conferencing, and long downtime may result; techniques are needed through which this downtime and complexity can be avoided [132].

Live migration for high-speed LAN
The existing migration techniques assume that the network bandwidth is 1 Gbps. In large CDCs, however, servers are connected with high-speed links such as 10 Gbps and 40 Gbps. The transfer rate is then higher and more data can be transmitted during the migration period, which implies that the trade-off between CPU utilization and network utilization is different from that at 1 Gbps [42]. Therefore, exploring migration techniques for high-speed LANs can further optimize CDC performance as well as reduce downtime.
Other research challenges in live VM migration pointed out by the cited authors are network faults [133], memory-intensive applications [133], memory state between clusters [133], live migration of nested VMMs [42], and live migration of VMs attached to pass-through accelerators [42].

Conclusion and future work
Live VM migration is the process of moving a running VM, or multiple VMs, from one server to another. The services running on the VMs must be available to users at all times, hence the VMs must be migrated while they are continuously running; this is possible only if VMs are migrated with zero downtime. The motivations behind live VM migration are load balancing, proactive fault tolerance, power management, resource sharing, and online system maintenance. We identify the types of content that need to be moved during migration, namely CPU state, memory content, and storage content. We discuss the pre-copy, post-copy, and hybrid techniques of VM migration, present the basic steps used in the migration process, and describe the important performance metrics that capture the migration overheads.
The comprehensive survey of state-of-the-art live VM migration approaches is divided into two broad categories. We first discuss the models, which are theoretical phases; we then discuss the frameworks, which are practical implementations. The live VM migration frameworks are further divided into three sub-categories: type of migration, duplication based VM migration, and context aware VM migration. These categories are based on (i) single or multiple VM migration, (ii) replication, de-duplication, redundancy, and compression of VM memory pages, and (iii) dependency among VMs, soft pages, dirty pages (dirty page rate), and page faults of VM pages. The existing approaches in all of the above sub-categories are compared on the basis of performance metrics. Threats in live VM migration are discussed, and the possible attacks are categorized into three classes (control plane, data plane, and migration module) based on the type of attack. Finally, we mention some of the critical research challenges which require further research for improving the migration process and the efficiency of CDCs.
In our future work, we will propose a novel approach to reduce service downtime and total migration time. We will also optimize the migration technique in the hypervisor to improve the performance of live VM migration.

Abbreviations
ARP: Address resolution protocol; AVG: Average page dirty rate; BFD: Best fit decreasing; CBP: Context based prediction; CDC: Cloud data centers; CIC: Composed image cloning; DoS: Denial-of-service; DRBD: Distributed replicated block device; DNS: Domain name system; DSB: Dynamic self-ballooning; DVFS: Dynamic voltage frequency scaling; ERP: Enterprise resource planning; EDAMP: Estimation of directions of arrival by matching pursuit; HPC: High performance computing; HIST: History based page dirty rate; HMDC: Hybrid memory data copy; IP: Internet protocol; IRLM: Inter-rack live migration; KVM: Kernel-based virtual machine; LRU: Least recently used; LZO: Lempel–Ziv–Oberhumer; LAN: Local-area network; MFN: Machine frame number; MITM: Man-in-the-middle; MMU: Memory management unit; MECOM: Memory compression; MBFD: Modified best fit decreasing; NAS: Network attached storage; NFS: Network file system; OS: Operating system; PPM: Prediction by partial match; QoS: Quality of service; QEMU: Quick emulator; RDMA: Remote direct memory access; RLE: Run length encoding; SSH: Secure shell; SLA: Service level agreements; SAN: Storage area network; SAP: Systems, applications and products; TPM: Three-phase memory transfer; TCP: Transmission control protocol; VM: Virtual machine; VMEM: Virtual memory; VMM: Virtual machine monitor; VLAN: Virtual local-area network; vTPM: Virtual trusted platform module; WAN: Wide-area networks; XBZRLE: Xor binary zero run length encoding

Acknowledgements
We are thankful to the anonymous reviewers for their valuable feedback and comments for improving the quality of the manuscript.

Funding
Not applicable.

Availability of data and materials
Not applicable.

Authors' contributions
AC, DK and ESP surveyed the literature exhaustively. They prepared the logistic of each paper – a brief write-up on the contribution of each paper, tools used, techniques used, and summary of results. MCG and LKA conceived the need for the study and designed the outline of sections. They also guided the choice of parameters for evaluating the various techniques, and they validated the tables giving the comparison. GS and ESP participated in design and coordination of all sections and helped to draft the manuscript. AC and DK
wrote most of the content and did the analysis. MCG and LKA approved the 16. Bugnion E, Devine S, Rosenblum M, Sugerman J, Wang EY (2012)
final version to be submitted. Bringing Virtualization to the x86 Architecture with the Original VMware
Workstation. ACM Trans Comput Syst ACM Ref Format Bugnion
Competing interests 30(4):1–51
Anita Choudhary, Mahesh Chandra Govil, Girdhari Singh, Lalit K. Awasthi, 17. Desai A (2012) Managing Virtualization with System Center Virtual
Emmanuel S. Pilli, and Divya Kapil declare that they have no competing Machine Manager. http://anildesai.net/index.php/2007/12/managing-
interest. virtualization-with-system-center-virtual-machine-manager/. Accessed
07 Sept 2017
18. vSphere ESXi Bare-Metal Hypervisor. http://www.vmware.com/
Publisher’s Note
products/esxi-and-esx.html. Accessed 04 Nov 2016
Springer Nature remains neutral with regard to jurisdictional claims in
published maps and institutional affiliations. 19. KVM. http://www.linux-kvm.org/page/Main_Page. Accessed 04 Nov
2016
20. Kivity Qumranet A, Qumranet YK, Qumranet DL, Qumranet UL, Liguori A
Author details (2007) Kvm: the Linux Virtual Machine Monitor. In: Proceedings of the
1 Malaviya National Institute of Technology Jaipur, Jaipur, India. 2 National
Ottawa Linux Symposium, Ontario, Canada. pp 225–230
Institute of Technology Sikkim, Ravangla, India. 3 Dr. B. R. Ambedkar National
21. Hypervisor x86 & ARM. https://www.xenproject.org/developers/teams/
Institute of Technology, Jalandhar, India. 4 Graphic Era Hill University,
hypervisor.html. Accessed 04 Nov 2016
Dehradun, India.
22. Microsoft Virtual PC. http://microsoft_virtual_pc.en.downloadastro.
com/. Accessed 04 Nov 2016
Received: 28 June 2017 Accepted: 25 September 2017
23. Microsoft Hyper-V Server 2016. https://technet.microsoft.com/en-us/
hyper-v-server-docs/hyper-v-server-2016. Accessed 04 Nov 2016
24. Oracle VM VirtualBox. https://www.virtualbox.org/. Accessed 04 Nov
References 2016
1. Mell P, Grance T (2011) The NIST Definition of Cloud Computing 25. Parallels Desktop (for Mac) - Parallels Desktop 11 for Mac. http://in.
Recommendations of the National Institute of Standards and pcmag.com/parallels-desktop-10/46064/review/parallels-desktop-for-
Technology. Technical report. doi:10.1136/emj.2010.096966 arxiv: mac. Accessed 17 Jan 2017
2305-0543 26. Medina V, García JM (2014) A survey of migration mechanisms of virtual
2. Choosing an App Engine Environment | App Engine Documentation | machines. ACM Comput Surv 46(3):1–33
Google Cloud Platform. https://cloud.google.com/appengine/docs/the- 27. Ferreto TC, Netto MAS, Calheiros RN, De Rose CAF (2011) Server
appengine-environments. Accessed 04 Nov 2016 consolidation with migration control for virtualized data centers. Future
3. Intro to Microsoft Azure | Microsoft Azure. https://azure.microsoft.com/ Gen Comput Syst 27(8):1027–1034
en-in/documentation/articles/fundamentals-introduction-to-azure/.
28. Hu L, Zhao J, Xu G, Ding Y, Chu J (2013) HMDC: Live virtual machine
Accessed 04 Nov 2016
migration based on hybrid memory copy and delta compression. Appl
4. Elastic Compute Cloud (EC2) Cloud Server & Hosting – AWS. https://aws.
Math Inf Sci 7(2 L):639–646
amazon.com/ec2/. Accessed 04 Nov 2016
29. Nathan S, Kulkarni P, Bellur U (2013) Resource Availability Based
5. IBM - Cloud Computing for Builders & Innovators. http://www.ibm.com/
Performance Benchmarking of Virtual Machine Migrations. In:
cloud-computing/. Accessed 04 Nov 2016
Proceedings of the ACM/SPEC International Conference on Performance
6. Buyya R, Buyya R, Yeo CS, Yeo CS, Venugopal S, Venugopal S, Broberg J,
Engineering. ACM, Prague, Czech Republic. pp 387–398
Broberg J, Brandic I, Brandic I (2009) Cloud computing and emerging IT
30. Åsberg M, Forsberg N, Nolte T, Kato S (2011) Towards real-time
platforms: Vision, hype, and reality for delivering computing as the 5th
scheduling of virtual machines without kernel modifications. In: IEEE
utility. Futur Gener Comput Syst 25(6):599–616
International Conference on Emerging Technologies and Factory
7. Uddin M, Shah A, Alsaqour R, Memon J, Saqour RAHASRAHA, Memon J
Automation, ETFA. IEEE, Toulouse
(2013) Measuring efficiency of tier level data centers to implement 31. Habib I (2008) Virtualization with KVM. Linux J 2008(166)
green energy efficient data centers. Middle East J Sci Res 15(2):200–207 32. Xu F, Liu F, Jin H, Vasilakos AV (2014) Managing performance overhead
8. Beloglazov A, Buyya R (2010) Energy Efficient Resource Management in of virtual machines in cloud computing: A survey, state of the art, and
Virtualized Cloud Data Centers. In: 10th IEEE/ACM International future directions. Proc IEEE 102(1):11–31
Conference on Cluster, Cloud and Grid Computing. IEEE, United States. 33. Clark C, Fraser K, Hand S, Hansen JG, Jul E, Limpach C, Pratt I, Warfield A
pp 826–831 (2005) Live migration of virtual machines. In: Proceedings of the 2nd
9. Zhou M, Zhang R, Zeng D, Qian W (2010) Services in the Cloud Conference on Symposium on Networked Systems Design &
Computing era: A survey. In: 4th International Universal Communication Implementation - Volume 2. USENIX Association, Berkeley. pp 273–286
Symposium. IEEE, Beijing. pp 40–46 34. Deshpande U, Kulkarni U, Gopalan K (2012) Inter-rack live migration of
10. Storage Servers. https://storageservers.wordpress.com/. Accessed 07 multiple virtual machines. In: Proceedings of the 6th International
Sept 2017 Workshop on Virtualization Technologies in Distributed Computing
11. Koomey JG (2011) Growth in Data Center Electricity use 2005 to 2010. Date. ACM, Delft, The Netherlands. pp 19–26
PhD thesis 35. Atif M, Strazdins P (2014) Adaptive parallel application resource
12. Belady CL (2012) In the data center, power and cooling costs more than remapping through the live migration of virtual machines. Futur Gener
the it equipment it supports. http://www.electronics-cooling.com/ Comput Syst 37:148–161
2007/02/in-the-data-center-power-and-cooling-costs-more-than-the- 36. Svärd P, Hudzia B, Tordsson J, Elmroth E, Svärd P, Hudzia B, Tordsson J,
it-equipment-it-supports/. Accessed 18 May 2016 Elmroth E (2011) Evaluation of delta compression techniques for
13. Fan X, Weber WD, Barroso LA (2007) Power provisioning for a efficient live migration of large virtual machines. In: Proceedings of the
warehouse-sized computer. In: Proceedings of the 34th Annual 7th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution
International Symposium on Computer Architecture. ACM, California Environments. ACM, California Vol. 46. pp 111–120
Vol. 35. pp 13–23 37. Riteau P, Morin C, Priol T (2011) Shrinker: Improving Live Migration of
14. Barham P, Dragovic B, Fraser K, Hand S, Harris T, Ho A, Neugebauer R, Virtual Clusters over WANs with Distributed Data Deduplication and
Pratt I, Warfield A (2003) Xen and the art of virtualization. In: Proceedings Content-Based Addressing. In: Proceedings of the 17th International
of the Nineteenth ACM Symposium on Operating Systems Principles. Conference on Parallel Processing and Distributed Computing - Volume
ACM, NY Vol. 37. p 164 Part I. Springer, Bordeaux. pp 431–442
15. Younge AJ, Henschel R, Brown JT, Laszewski GV, Qiu J, Fox GC (2011) 38. Soni G, Kalra M (2013) Comparative Study of Live Virtual Machine
Analysis of Virtualization Technologies for High Performance Computing Migration Techniques in Cloud. Int J Comput Appl 84(14):19–25
Environments. In: IEEE 4th International Conference on Cloud 39. Kapil D, Pilli ES, Joshi RC (2013) Live virtual machine migration
Computing. IEEE Computer Society, Washington. pp 1–8 techniques: Survey and research challenges. In: Proceedings of the 3rd
IEEE International Advance Computing Conference. IEEE, Ghaziabad. Proceedings of the 2013 ACM Cloud and Autonomic Computing
pp 963–969 Conference on - CAC ’13, Florida
40. Ahmad RW, Gani A, Siti SH, Shiraz M, Xia F, Madani SA (2015) Virtual 61. Suen CH, Kirchberg M, Lee BS (2011) Efficient migration of virtual
machine migration in cloud data centers: a review, taxonomy, and open machines between public and private cloud. In: 3rd IEEE International
research issues. J Supercomputing, 71(7):2473–2515 Conference on Cloud Computing Technology and Science, CloudCom
41. Ahmad RW, Gani A, Hamid SHA, Shiraz M, Yousafzai A, Xia F (2015) A 2011. IEEE, United States. pp 549–553
survey on virtual machine migration and server consolidation 62. Compute Engine - IaaS | Google Cloud Platform. https://cloud.google.
frameworks for cloud data centers. J Netw Comput Appl 52:11–25 com/compute/. Accessed 07 Sept 2017
42. Yamada H (2016) Survey on Mechanisms for Live Virtual Machine 63. Hines MR, Deshpande U, Gopalan K (2009) Post-copy live migration of
Migration and its Improvements. Inf Media Tech 11:101–115 virtual machines. ACM SIGOPS Oper Syst Rev 43(3):14–26
43. Kokkinos P, Kalogeras D, Levin A, Varvarigos E (2016) Survey: Live 64. Hines MR, Gopalan K (2009) Post-copy based live virtual machine
Migration and Disaster Recovery over Long-Distance Networks. ACM migration using adaptive pre-paging and dynamic self-ballooning. In:
Comput Surveys 49(2):1–36 Proceedings of the 2009 ACM SIGPLAN/SIGOPS International Conference
44. Sapuntzakis CP, Chandra R, Pfaff B, Chow J, Lam MS, Rosenblum M on Virtual Execution Environments. ACM Press, Washington. pp 51–60
65. Ard PS, Walsh S, Hudzia B, Tordsson J, Elmroth E (2013) The Noble Art of
(2002) Optimizing the migration of virtual computers. ACM SIGOPS Oper
Live VM Migration -Principles and Performance of precopy, postcopy
Syst Rev 36(SI):377–390
and hybrid migration of demanding workloads. Technical report, Tech
45. Nelson M, Lim BH, Hutchins G (2005) Fast transparent migration for
Report UMINF
virtual machines. In: Proceedings of the annual conference on USENIX
66. Voorsluys W, Broberg J, Venugopal S, Buyya R (2009) Cost of virtual
Annual Technical Conference. ACM, Berkeley. pp 25–25
machine live migration in clouds: A performance evaluation. Lect Notes
46. Huang W, Gao Q, Liu J, Panda DK (2007) High performance virtual Comput Sci 5931 LNCS:254–265
machine migration with RDMA over modern interconnects. In: 2007 IEEE 67. Kuno Y, Nii K, Yamaguchi S (2011) A study on performance of processes
International Conference on Cluster Computing. IEEE, Washington. in migrating virtual machines. In: 10th International Symposium on
pp 11–20 Autonomous Decentralized Systems. IEEE, Kobe, Japan. pp 567–572
47. Luo Y, Zhang B, Wang X, Wang Z, Sun Y, Chen H (2008) Live and 68. Feng X, Tang J, Luo X, Jin Y (2011) A performance study of live VM
incremental whole-system migration of virtual machines using migration technologies: VMotion vs XenMotion. In: Proc. of
block-bitmap. In: IEEE International Conference on Cluster Computing. SPIE-OSA-IEEE Asia Communications and Photonics. IEEE, Shanghai.
IEEE, Tsukuba. pp 99–106 pp 83101B-1-6
48. Verma A, Ahuja P, Neogi A (2008) pMapper: Power and migration cost 69. Liu H, Xu CZ, Jin H, Gong J, Liao X, Xu CZ, Liao X, Jin H, Gong J, Liao X
aware application placement in virtualized systems. In: IFIP International (2011) Performance and energy modeling for live migration of virtual
Federation for Information Processing, vol. 5346 LNCS. pp 243–264. machines. In: Proceedings of the 20th International Symposium on High
doi:10.1007/978-3-540-89856-6_13 Performance Distributed Computing. ACM, California. pp 171–182
49. Sammy K, Shengbing R, Wilson C (2012) Energy Efficient Security 70. Akoush S, Sohan R, Rice A, Moore AW, Hopper A (2010) Predicting the
Preserving VM Live Migration In Data Centers For Cloud Computing. Performance of Virtual Machine Migration. In: IEEE International
J Comput Sci 9(2):33–39 Symposium on Modeling, Analysis and Simulation of Computer and
50. Beloglazov A, Abawajy J, Buyya R (2012) Energy-aware resource Telecommunication Systems. IEEE, Miami Beach. pp 37–46
allocation heuristics for efficient management of data centers for Cloud 71. Huang D, Ye D, He Q, Chen J, Ye K, Huang D, Ye D, He Q, Chen J, Ye K
computing. Futur Gener Comput Syst 28(5):755–768. doi:10.1016/j. (2011) Virt-LM: a benchmark for live migration of virtual machine. In:
future.2011.04.017 Proceeding of the Second Joint WOSP/SIPEW International Conference
51. Kikuchi S, Matsumoto Y (2011) Performance modeling of concurrent live on Performance Engineering. ACM, Karlsruhe Vol. 36. pp 307–316
migration operations in cloud computing systems using prism 72. Wu Y, Zhao M (2011) Performance Modeling of Virtual Machine Live
probabilistic model checker. In: IEEE 4th International Conference on Migration. In: IEEE 4th International Conference on Cloud Computing.
Cloud Computing. IEEE, DC. pp 49–56 IEEE, DC. pp 492–499
73. Watts up? https://www.wattsupmeters.com/secure/index.php.
52. Xu F, Liu F, Liu L, Jin H, Li B, Li B (2014) iAware: Making live migration of
Accessed 07 Sept 2017
virtual machines interference-aware in the cloud. IEEE Trans Comput
74. Cerroni W, Callegati F (2014) Live migration of virtual network functions
63(12):3012–3025
in cloud-based edge networks. In: IEEE International Conference on
53. Yin F, Liu W, Song J (2014) Live Virtual Machine Migration with
Communications. IEEE, Sydney. pp 2963–2968
Optimized Three-Stage Memory Copy. In: Future Information
75. Deshpande U, You Y, Chan D, Bila N, Gopalan K (2014) Fast server
Technology. Springer, Berlin. pp 69–75
deprovisioning through scatter-gather live migration of virtual
54. Bala A, Chana I (2012) Fault Tolerance - Challenges, Techniques and
machines. In: IEEE 7th International Conference on Cloud Computing,
Implementation in Cloud Computing. Int J Comput Sci Issues
CLOUD. IEEE, AK. pp 376–383
9(1):288–293
76. SPEC CPU® 2006. https://www.spec.org/cpu2006/. Accessed 07 Sept
55. Shrivastava V, Zerfos P, Lee KW, Jamjoom H, Liu YH, Banerjee S (2011) 2017
Application-aware virtual machine migration in data centers. In: 77. Welcome to Apache™ Hadoop®! http://hadoop.apache.org/. Accessed
Proceedings - IEEE INFOCOM. IEEE, Shanhai. pp 66–70 07 Sept 2017
56. Mishra M, Das A, Kulkarni P, Sahoo A (2012) Dynamic resource 78. The Netperf Homepage. https://hewlettpackard.github.io/netperf/.
management using virtual machine migrations. IEEE Commun Mag Accessed 07 Sept 2017
50(9):34–40 79. SPECweb2005. https://www.spec.org/web2005/. Accessed 07 Sept 2017
57. Dong J, Jin X, Wang H, Li Y, Zhang P, Cheng S (2013) Energy-Saving 80. NAS Parallel Benchmarks. http://www.nas.nasa.gov/Software/NPB.
virtual machine placement in cloud data centers. In: Proceedings - 13th Accessed 08 Nov 2016
IEEE/ACM International Symposium on Cluster, Cloud, and Grid 81. Zhang W, Lam KT, Wang CL (2014) Adaptive live VM migration over a
Computing, CCGrid 2013. IEEE/ACM, Delft, the Netherlands. pp 618–624 WAN: Modeling and implementation. In: IEEE International Conference
58. Zheng J, Ng TSE, Sripanidkulchai K (2011) Workload-aware live storage on Cloud Computing, CLOUD, AK. pp 368–375
migration for clouds. In: Proceedings of the 7th ACM SIGPLAN/SIGOPS 82. Deshpande U, Keahey K (2017) Traffic-sensitive Live Migration of Virtual
International Conference on Virtual Execution Environments. ACM, Machines. Future Gene Comput Syst 72:118–128.
California Vol. 46. pp 133–144 doi:10.1016/j.future.2016.05.003
59. Bai W, Geng W (2014) Operation and Maintenance Management 83. Ye K, Jiang X, Huang D, Chen J, Wang B, Kejiang Y, Xiaohong J, Dawei H,
Strategy of Cloud Computing Data Center. Adv Sci Technol Lett Jianhai C, Bei W, Ye K, Jiang X, Huang D, Chen J, Wang B (2011) Live
78(MulGrab):5–9 Migration of Multiple Virtual Machines with Resource Reservation in
60. Hu W, Hicks A, Zhang L, Dow EM, Soni V, Jiang H, Bull R, Matthews JN Cloud Computing Environments. In: IEEE 4th International Conference
(2013) A quantitative study of virtual machine live migration. In: on Cloud Computing. IEEE, DC. pp 267–274
84. Deshpande U, Wang X, Gopalan K (2011) Live gang migration of virtual machines. In: Proceedings of the 20th International Symposium on High Performance Distributed Computing. ACM Press, California. pp 135–146
85. Lu T, Stuart M, Tang K, He X (2014) Clique migration: Affinity grouping of virtual machines for inter-cloud live migration. In: Proceedings - 9th IEEE International Conference on Networking, Architecture, and Storage. IEEE, Tianjin, China. pp 216–225
86. Lu H, Xu C, Cheng C, Kompella R, Xu D (2015) vHaul: Towards Optimal Scheduling of Live Multi-VM Migration for Multi-tier Applications. In: IEEE 8th International Conference on Cloud Computing. IEEE, New York. pp 453–460
87. Olio Incubation Status - Apache Incubator. http://incubator.apache.org/projects/olio.html. Accessed 07 Sept 2017
88. Forsman M, Glad A, Lundberg L, Ilie D (2015) Algorithms for automated live migration of virtual machines. J Syst Softw 101:110–126
89. Varga A, Hornig R (2008) An overview of the OMNeT++ simulation environment. In: Proceedings of the 1st International Conference on Simulation Tools and Techniques for Communications, Networks and Systems & Workshops. ACM, Marseille. p 60
90. Sun G, Liao D, Anand V, Zhao D, Yu H (2016) A new technique for efficient live migration of multiple virtual machines. Future Gener Comput Syst 55:74–86
91. Shribman A, Hudzia B (2012) Pre-Copy and Post-Copy VM Live Migration for Memory Intensive Applications. In: Proceedings of the 18th International Conference on Parallel Processing Workshops. Springer, Rhodes Island. pp 539–547
92. Cerroni W (2014) Multiple virtual machine live migration in federated cloud systems. In: Proceedings - IEEE INFOCOM. IEEE Computer Society, ON. pp 25–30
93. Celesti A, Tusa F, Villari M, Puliafito A (2010) Improving virtual machine migration in federated cloud environments. In: 2nd International Conference on Evolving Internet. IEEE, Rhode Island. pp 61–67
94. Kumar Bose S, Brock S, Skeoch R, Shaikh N, Rao S (2011) Optimizing live migration of virtual machines across wide area networks using integrated replication and scheduling. In: IEEE International Systems Conference. IEEE, QC. pp 97–102
95. Bose SK, Brock S, Skeoch R, Shaikh N, Rao S (2011) CloudSpider: Combining Replication with Scheduling for Optimizing Live Migration of Virtual Machines across Wide Area Networks. In: 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing. IEEE, CA. pp 13–22
96. Grid5000. https://www.grid5000.fr/mediawiki/index.php/Grid5000:Home. Accessed 08 Sept 2017
97. Redis. https://redis.io/. Accessed 08 Sept 2017
98. Zhang X, Huo Z, Ma J, Meng D (2010) Exploiting Data Deduplication to Accelerate Live Virtual Machine Migration. In: IEEE International Conference on Cluster Computing. IEEE, Crete. pp 88–96
99. Jo C, Gustafsson E, Son J, Egger B (2013) Efficient live migration of virtual machines using shared storage. In: Proceedings of the 9th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments. ACM Press, New York. Vol. 48. pp 41–50
100. Wood T, Ramakrishnan KK, Shenoy P, Van Der Merwe J, Hwang J, Liu G, Chaufournier L (2015) CloudNet: Dynamic pooling of cloud resources by live WAN migration of virtual machines. IEEE/ACM Trans Netw 23(5):1568–1583
101. Jaswal T, Kaur K (2016) An Enhanced Hybrid Approach for Reducing Downtime, Cost and Power Consumption of Live VM Migration. In: Proceedings of the International Conference on Advances in Information Communication Technology & Computing. ACM, Bikaner
102. Jin H, Li D, Wu S, Shi X, Pan X (2009) Live virtual machine migration with adaptive memory compression. In: Proceedings - IEEE International Conference on Cluster Computing. IEEE, LA. pp 1–10
103. Jin H, Deng L, Wu S, Shi X, Chen H, Pan X (2014) MECOM: Live migration of virtual machines by adaptively compressing memory pages. Future Gener Comput Syst 38:23–35
104. Hacking S, Hudzia B (2009) Improving the live migration process of large enterprise applications. In: Proceedings of the 3rd International Workshop on Virtualization Technologies in Distributed Computing. ACM, Barcelona. pp 51–58
105. Svard P, Tordsson J, Hudzia B, Elmroth E (2011) High Performance Live Migration through Dynamic Page Transfer Reordering and Compression. In: IEEE Third International Conference on Cloud Computing Technology and Science. IEEE, Athens. pp 542–548
106. Sahni S, Varma V (2012) A Hybrid Approach to Live Migration of Virtual Machines. In: IEEE International Conference on Cloud Computing in Emerging Markets (CCEM). IEEE, KA. pp 1–5
107. Cedric JL, Bockhaven V. Cryptanalysis of, and practical attacks against E-Safenet encryption. Technical report, University of Amsterdam, Netherlands
108. Nocentino A, Ruth PM (2009) Toward dependency-aware live virtual machine migration. In: Proceedings of the 3rd International Workshop on Virtualization Technologies in Distributed Computing. ACM, Barcelona. pp 59–66
109. Babu BS, Savithramma RM (2016) Optimised pre-copy live VM migration approach for evaluating mathematical expression by dependency identification. Int J Cloud Comput 5(4):247
110. Koto A, Yamada H, Ohmura K, Kono K (2012) Towards Unobtrusive VM Live Migration for Cloud Computing Platforms. In: Proceedings of the Asia-Pacific Workshop on Systems. ACM, Seoul. pp 1–6
111. Ma F, Liu F, Liu Z (2010) Live virtual machine migration based on improved pre-copy approach. In: IEEE International Conference on Software Engineering and Service Sciences. IEEE, Beijing. pp 230–233
112. Ibrahim KZ, Hofmeyr S, Iancu C, Roman E (2011) Optimized pre-copy live migration for memory intensive applications. In: International Conference for High Performance Computing, Networking, Storage and Analysis. ACM/IEEE, Seattle. pp 1–11
113. Jin H, Gao W, Wu S, Shi X, Wu X, Zhou F (2011) Optimizing the live migration of virtual machine by CPU scheduling. J Netw Comput Appl 34(4):1088–1096
114. Zaw EP, Thein NL (2012) Improved Live VM Migration using LRU and Splay Tree Algorithm. Int J Comput Sci Telecommun 3(3):1–7
115. Yong C, Yusong L, Yi G, Runzhi L, Zongmin W (2013) Optimizing Live Migration of Virtual Machines with Context Based Prediction Algorithm. In: International Workshop on Cloud Computing and Information Security. Atlantis Press, Shanghai. pp 441–444
116. Mohan A, Shine S (2013) An optimized approach for live VM migration using log records. In: 4th International Conference on Computing, Communications and Networking Technologies. IEEE, Tiruchengode. pp 4–7
117. Zhang J, Ren F, Lin C (2014) Delay guaranteed live migration of Virtual Machines. In: Proceedings - IEEE INFOCOM. IEEE, ON. pp 574–582
118. Desai MR, Patel HB (2016) Performance Measurement of Virtual Machine Migration Using Pre-copy Approach in Cloud Computing. In: Proceedings of the Second International Conference on Information and Communication Technology for Competitive Strategies. ACM Press, Udaipur. pp 1–4
119. Liu H, Jin H, Liao X, Hu L, Yu C (2009) Live migration of virtual machine based on full system trace and replay. In: Proceedings of the 18th ACM International Symposium on High Performance Distributed Computing - HPDC '09. ACM Press, Garching. pp 101–110
120. Liu W, Fan T (2011) Live migration of virtual machine based on recovering system and CPU scheduling. In: 6th IEEE Joint International Information Technology and Artificial Intelligence Conference. IEEE, Chongqing. pp 303–307
121. Hirofuchi T, Nakada H, Itoh S, Sekiguchi S (2010) Enabling Instantaneous Relocation of Virtual Machines with a Lightweight VMM Extension. In: 10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing. IEEE, Melbourne. pp 73–83
122. SPECweb2005. https://www.spec.org/web2005/. Accessed 09 Sept 2017
123. Ashino Y, Nakae M (2012) Virtual Machine Migration Method between Different Hypervisor Implementations and Its Evaluation. In: Proceedings - 26th IEEE International Conference on Advanced Information Networking and Applications Workshops, WAINA 2012. IEEE, Fukuoka. pp 1089–1094
124. Hines MR, Gopalan K (2007) MemX: supporting large memory workloads in Xen virtual machines. In: Proceedings of the 2nd International Workshop on Virtualization Technology in Distributed Computing. ACM Press, Reno. pp 1–8
125. Anala MR, Shetty J, Shobha G (2013) A framework for secure live migration of virtual machines. In: 2013 International Conference on