VMware Cluster
Contents
Overview
More protection of your cluster with NAKIVO Backup & Replication
Conclusion
Overview
Modern information technology evolves rapidly, and the resulting innovations are used in
a growing number of industries. While these IT solutions provide automation and ensure
rational usage of natural and human resources, their hardware requirements are gradually
increasing. Even a powerful server can be overloaded with multiple computing tasks. For
better performance and reliability, servers can be connected with each other over networks.
For this purpose, clustering technologies are widely used. This eBook explains what clusters
are, what issues you can resolve with clustering, and how to deploy clusters in your VMware
environment.
High-Performance Computing (HPC) clusters are also called “parallel clusters”. They provide
a single system image. This means that an application can be executed on any of the servers
within the cluster. HPC clusters are used to execute computation-intensive and data-intensive
tasks by running a job on multiple nodes simultaneously, thus enhancing application
performance.
High Availability (HA) clusters are also referred to as “failover clusters” and deliver robust
operation with the minimal amount of downtime. Redundant storage, software instances,
and networking provide continued service when system components fail. HA clusters usually
use a heartbeat private network connection to monitor the health and status of each node in
the cluster.
Load Balancing (LB) clusters ensure better performance. In LB clusters, tasks are distributed
between nodes to load hardware more rationally and avoid overloading each server if there
are enough computing resources available.
In VMware vSphere, you can deploy two of the above types of clusters working on the virtual
machine layer: HA clusters and LB clusters (the latter of which are called Distributed Resource
Scheduler (DRS) clusters in the context of VMware vSphere).
This might prompt the DRS to move these VMs to another host with more free resources
available. The feature helps save management time that would otherwise be spent on
monitoring and maintaining the infrastructure.
NOTE: When choosing NAS or SAN solutions, use an authorized vendor that meets the
requirements of your production environment. Set up the storage according to the
manufacturer's documentation. Multiple NAS or SAN devices can be used to create a VMware
cluster.
All volumes on the ESXi hosts must use the same volume names.
At least one VM must have Active Directory Domain Controller installed.
At least one VM must have vCenter installed.
To create a VMware cluster, the following steps must be performed (each is explored
with an in-depth walkthrough in this section):
Select the Installer from the boot menu (see Figure 1.1).
Figure 1.1
NOTE: If your system hangs at the “user loaded successfully” stage (see Figure 1.2), this may
be due to insufficient RAM. Check the amount of memory available. Press Alt+F12 to view
details.
Figure 1.2
If everything is okay, the “welcome” installation screen appears (see Figure 1.3).
Figure 1.3
Figure 1.4
Figure 1.5
Figure 1.6
NOTE: If there are fewer than 2 CPU cores, an error message appears (see Figure 1.7).
Figure 1.7
Figure 1.8
8. Press Install and wait for the installation progress to complete (see Figure 1.9).
Figure 1.9
9. You can see the “Installation Complete” message. Press Enter (see Figure 1.10).
Figure 1.10
Figure 1.11
Figure 1.12
10. Select Configure Management Network from the menu, and set up the host name along
with the IP address manually – for example, [Link] (see Figure 1.13).
Figure 1.13
11. Press Yes to confirm the network changes (see Figure 1.14).
Figure 1.14
12. Now, you can download VMware vSphere Client. Open your web browser and enter the
URL of your ESXi server (see Figure 1.15).
Figure 1.15
Follow the Download vSphere Client for Windows link and download the installer from the
official VMware website.
13. Install VMware vSphere Client by following the steps the installation wizard guides you
through (see Figure 1.16).
Figure 1.16
14. Log in to ESXi Server via vSphere Client by entering the user name and password you
specified during the ESXi installation (see Figure 1.17).
Figure 1.17
You now have one ESXi server installed. The second one can be installed in the same way.
This is how the vSphere Client interface looks (see Figure 1.18):
Figure 1.18
NOTE: Consult the compatibility table on the VMware website to make sure the version of
Microsoft Windows Server you plan to use is supported by the version of vCenter you are
using. In the walkthrough that follows, Windows Server 2008 R2 x64 is used, as this version
of Windows Server is compatible with most versions of vCenter.
NOTE: The instance of Windows Server you use as the domain controller can be installed on a
physical server or a virtual machine.
1. Press Win+R (or go to Start -> Run) and type “dcpromo” in the Run window (see Figure
2.1).
Figure 2.1
Figure 2.2
2. Tick the checkbox near Use advanced mode installation and click Next (see Figure 2.3).
Figure 2.3
3. Read the information on OS compatibility and click Next (see Figure 2.4).
Figure 2.4
4. For the first Domain Controller setup in your infrastructure, select Create a new domain
in a new forest (see Figure 2.5).
Figure 2.5
Figure 2.6
6. Accept the Domain NetBIOS name, or alter the one provided, if necessary
(see Figure 2.7).
Figure 2.7
Figure 2.8
NOTE: If there are no Windows Server operating systems older than Windows Server 2008
that you want added to the domain as domain controllers, set the forest functional level to
Windows Server 2008.
Figure 2.9
9. Under Additional Domain Controller Options, tick the checkbox near DNS server
(see Figure 2.10).
Figure 2.10
10. As this is an internal domain, click Yes to continue (see Figure 2.11).
Figure 2.11
11. Set the location for the Domain Controller files. You can leave the default location
(see Figure 2.12).
Figure 2.12
12. Set the password for the Domain Controller Administrator account. The password must
meet the complexity requirements set in the default Domain Controller security policy
(see Figure 2.13).
Figure 2.13
13. View the summary and click Next to start the installation (see Figure 2.14).
Figure 2.14
Figure 2.15
14. Click Finish and reboot the server (see Figure 2.16).
Figure 2.16
NOTE: Do not install vCenter Server on the machine with Active Directory Domain Controller
installed.
vCenter can be installed either on a physical machine or a virtual machine. Installing vCenter
on a virtual machine offers a number of advantages, including the following:
The following are the minimum requirements for vCenter Server installation:
64-bit operating system, such as Windows Server 2008 R2 x64 (this is required for some
versions of vSphere, but not all)
CPU: 2 GHz or faster Dual Core 64-bit processor
RAM: 8 GB minimum (requirements can increase with higher numbers of virtual machines
in vSphere)
Disk storage: 40 GB minimum (this varies depending on the database type and the
number of VMs)
Figure 3.1
2. In the VMware vCenter Installer window, select vCenter Server for Windows and click
Install (see Figure 3.2).
Figure 3.2
3. Read and accept the terms of the End User License Agreement displayed. Click Next (see
Figure 3.3).
Figure 3.3
4. Select the deployment type. In this installation walkthrough, the Embedded Deployment
was used. Click Next (see Figure 3.4).
Figure 3.4
Figure 3.5
5. Set vCenter Single Sign-On domain and click Next (see Figure 3.6).
Figure 3.6
7. Specify a vCenter Server Service Account for logging into vCenter with the vSphere Client.
Click Next (see Figure 3.7).
Figure 3.7
Figure 3.8
9. Configure ports. You may leave the default settings (see Figure 3.9).
Figure 3.9
Figure 3.10
11. Tick the checkbox near Join the VMware Customer Experience Improvement Program,
if you want to participate (see Figure 3.11).
Figure 3.11
12. View the installation summary and click Install (see Figure 3.12).
Figure 3.12
Figure 3.13
You can now launch vSphere Web Client (see Figure 3.14).
Figure 3.14
Figure 4.1
3. Click Storage Devices to view the list of available devices (see Figure 4.2).
Figure 4.2
4. Go to Home -> vCenter -> Datastores. Click the Create a new datastore icon to add the
shared datastore you want (see Figure 4.3).
Figure 4.3
5. Select the IP address of the datastore you want to connect to over the network using the
NFS, iSCSI, or Fibre Channel protocol (see Figure 4.4).
Figure 4.4
NOTE: The recommended redundant storage network scheme is as follows: two ESXi hosts
connected via redundant network to a SAN with two storage processors. However, you can
use a NAS server for this purpose (see Figure 4.5).
Figure 4.5
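If you have many hosts, mounting the shared storage can also be scripted. Below is a minimal sketch using pyVmomi, the open-source Python SDK for the vSphere Web Services API; it is not part of the original walkthrough, and the vCenter address, credentials, NAS IP, and export path are placeholders you would replace with your own values.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your vCenter and credentials.
si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Mount the same NFS export on every ESXi host so the datastore is shared.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
for host in view.view:
    spec = vim.host.NasVolume.Specification(
        remoteHost="192.168.1.50",      # NAS server IP (placeholder)
        remotePath="/volume1/vmware",   # exported NFS share (placeholder)
        localPath="SharedNFS",          # datastore name as seen in vSphere
        accessMode="readWrite")
    host.configManager.datastoreSystem.CreateNasDatastore(spec)
view.DestroyView()
Disconnect(si)
```

Mounting the export under the same datastore name on every host is what makes the storage usable for HA, DRS, and Fault Tolerance.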
1. In this scenario, the Domain Controller and vCenter Server are installed on physical
machines (see Figure 5.1). Bear in mind that if you use this setup initially, you can always use
VMware vCenter Converter to convert a physical machine into a virtual machine at any time.
You could thus migrate vCenter Server or Domain Controller to the vSphere environment
later.
Figure 5.1
2. In this scenario, vCenter Server is a virtual machine that is installed on an ESXi server, using
the CPU, RAM, and storage of the ESXi server (see Figure 5.2).
Figure 5.2
3. vCenter Server is a virtual machine running on an ESXi Server that uses CPU and RAM
of the ESXi server, but the virtual disk is stored on a shared datastore (see Figure 5.3). This
method of connecting hosts in a cluster allows you to use clustering features, such as High
Availability, the Distributed Resource Scheduler, and Fault Tolerance.
Figure 5.3
4. Domain Controller as well as vCenter Server are both installed and running on an ESXi
server. They are using CPU, RAM, and storage resources of the ESXi server (Figure 5.4).
Figure 5.4
5. Domain Controller and vCenter Server are running on ESXi Server. They are using CPU,
RAM, and storage of the ESXi server, but virtual disks of these VMs are stored on the shared
datastore (Figure 5.5). This connection method offers similar advantages to Scenario 3.
Figure 5.5
NOTE: vSphere Client is a C#-based locally installed application for Windows only. vSphere
Web Client is a cross-platform web application. Both vSphere Client and vSphere Web Client
can connect to vCenter, or directly to ESXi hosts, with the full range of administrative
functionality. Since vCenter Server 5.1, VMware recommends using the vSphere Web Client
to administer virtual environments. For older versions of vSphere Web Client, you may need
to install a Flash player plugin. The newest versions of vSphere Web Client use HTML5 rather
than Flash.
This is how the main screen of vSphere Web Client interface looks (see Figure 5.6):
Figure 5.6
2. In the left pane of the vSphere Web Client interface, select vCenter -> Datacenters –>
New Datacenter to create a new datacenter.
NOTE: A datacenter is a container for all the inventory objects required for a fully functional
environment for virtual machine operation. You can create multiple datacenters for each
department in your enterprise, or for different purposes, such as low- and high-performance
tasks. Your virtual machines can hot-migrate from one ESXi host to another ESXi host within
the same datacenter. However, they cannot migrate from a host in one datacenter to a host in
a different datacenter.
3. Type the name for your new datacenter (in our case, “Datacenter03”; see Figure 5.7).
Figure 5.7
4. Add your ESXi host(s) to the Datacenter. Right-click on the newly created datacenter
and select Add Host. Enter the IP address and root credentials of your ESXi host on the
Connection Settings page of the Add Host Wizard. Do the same for each ESXi host you want
added to the datacenter.
Now, right-click on the newly created datacenter and select the Create New Cluster option
(see Figure 5.8).
Figure 5.8
5. Set a name for your cluster. Leave the checkboxes for the DRS and vSphere HA options
unticked (you can add these functionalities later; see Figure 5.9).
Figure 5.9
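Steps 3 through 5 can also be performed programmatically. Here is a minimal pyVmomi sketch under the same assumptions as the earlier storage example (placeholder vCenter address and credentials); it creates the datacenter and an empty cluster with DRS and HA left disabled, matching the walkthrough.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Step 3: create the datacenter.
datacenter = content.rootFolder.CreateDatacenter(name="Datacenter03")

# Step 5: create the cluster. An empty ConfigSpecEx leaves DRS and vSphere HA
# disabled; both can be enabled later with ReconfigureComputeResource_Task.
cluster = datacenter.hostFolder.CreateClusterEx(name="temp-cluster",
                                                spec=vim.cluster.ConfigSpecEx())
Disconnect(si)
```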
6. Add ESXi hosts to the cluster. Note that your ESXi hosts must belong to the same
datacenter. Click “Add Host…” (see Figure 5.10).
Figure 5.10
7. Enter the name or IP address of the ESXi host you want added to your cluster (see Figure
5.11).
Figure 5.11
8. Enter the username and password for the administrative account of the ESXi host (see
Figure 5.12). The root user is the administrator by default.
Figure 5.12
10. Leave the lockdown mode disabled and click Next (see Figure 5.13).
Figure 5.13
11. Set the resource pool. This option defines what to do with existing virtual machines and
resource pools on the ESXi host. If there are no virtual machines on the ESXi host, accept the
default option. Click Next (see Figure 5.14).
Figure 5.14
NOTE: Repeat steps 6 through 12 for each ESXi host that you want added to the cluster.
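For reference, the same add-host operation can be scripted. The sketch below uses pyVmomi with placeholder addresses and credentials; in production you would also verify and supply the host's SSL thumbprint rather than forcing the connection.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Locate the cluster created earlier.
view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "temp-cluster")
view.DestroyView()

# Connection details of the ESXi host to add (placeholders).
host_spec = vim.host.ConnectSpec(hostName="192.168.1.21",
                                 userName="root",
                                 password="EsxiRootPassword",
                                 force=True)
# Consider setting host_spec.sslThumbprint after verifying the host's
# certificate instead of relying on force=True.
WaitForTask(cluster.AddHost_Task(spec=host_spec, asConnected=True))
Disconnect(si)
```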
NOTE: You can view or change the settings of physical network controllers, virtual network
controllers, and virtual switches from your vSphere Web Client (see Figure 6.1).
Figure 6.1
Physical adapters are hardware network adapters of ESXi servers (see Figure 6.2).
Figure 6.2
The standard Maximum Transmission Unit (MTU) of a frame is 1500 bytes. In the Properties
tab, you can set the MTU value as high as 9000 bytes to enable Jumbo Frames, if needed
(see Figure 6.3).
Figure 6.3
The VMkernel network adapter is used to provide network connectivity for hosts as well as
handling system traffic for vSphere vMotion, IP storage, Fault Tolerance logging, and vSAN
(see Figure 6.4).
Figure 6.4
If there are more than two Network Interface Controllers (NICs) available in the ESXi server,
create two virtual switches. One of them should host the Service Console and VMkernel
(including iSCSI as well as vMotion traffic). The other should be dedicated to virtual machine
traffic. The NICs carrying the iSCSI traffic should be connected to redundant Ethernet
switches (see Figure 4.5 above).
There are two types of virtual switches (vSwitches) in vSphere: standard and distributed.
The standard virtual switch is configured manually on each host and is used for small
environments. The Distributed vSwitch allows you to manage networks for multiple hosts
from a single vCenter interface. Use the standard switch for the purposes of this walkthrough.
To edit virtual switches, go to Manage –> Networking –> Virtual switches (Figure 4.6 above).
Select a virtual switch and click the Edit Settings icon. In the Teaming and Failover tab, you
can set active adapters, standby adapters, and unused adapters.
Active adapters use the uplink if the network adapter connectivity is up and active.
Standby adapters use this uplink if one of the active physical adapters is down.
Figure 6.5
Here you can select VMkernel Network Adapter (see Figure 6.6).
Figure 6.6
Figure 6.7
5. Configure an IP address for the new VMkernel port and select the appropriate services
to be used for the cluster. These could include vMotion traffic, provisioning traffic, Fault
Tolerance logging, management traffic, vSphere replication traffic, vSphere replication NFC
traffic, and/or Virtual SAN traffic.
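The same VMkernel port can be created programmatically. A minimal pyVmomi sketch follows; the host name, port group name, and IP settings are placeholders, and the port group is assumed to already exist on a standard vSwitch.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "192.168.1.21")  # placeholder
view.DestroyView()

# Create a VMkernel NIC with a static IP on an existing port group.
vnic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress="10.0.0.21",        # placeholder vMotion IP
                         subnetMask="255.255.255.0"))
vmk = host.configManager.networkSystem.AddVirtualNic("vMotion-PG", vnic_spec)

# Tag the new vmk interface for vMotion traffic; other nicTypes cover the
# remaining services listed above (e.g. "faultToleranceLogging", "vsan").
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)
Disconnect(si)
```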
NOTE: If you already have a virtual adapter created, you can select this adapter and edit
settings by clicking the Edit Settings icon (see Figure 6.8).
Figure 6.8
Below, you can see an example of a vMotion virtual network scheme (see Figure 6.9):
Figure 6.9
vMotion is a feature that lets you hot-migrate powered-on virtual machines from one ESXi
host to another. Enable vMotion if you want to create a DRS or HA cluster. Separate the
vMotion network, the storage network, and the production network. This can help you
prevent overloading and reduce network bandwidth usage (see Figure 6.10).
NOTE: The production network is the network to which your physical computers
(workstations), routers, printers, etc. are connected.
Figure 6.10
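For illustration, a hot migration can also be triggered through the API. The sketch below is a pyVmomi example with placeholder VM and host names; it assumes vMotion has been enabled on both hosts as described above.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find(vim.VirtualMachine, "VM1")           # placeholder VM name
target = find(vim.HostSystem, "192.168.1.22")  # placeholder destination host

# Hot-migrate the powered-on VM; its disks stay on the shared datastore.
WaitForTask(vm.MigrateVM_Task(
    host=target, priority=vim.VirtualMachine.MovePriority.defaultPriority))
Disconnect(si)
```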
Now that you have created a cluster, you can set up the DRS and/or HA features.
1. Go to vCenter -> Cluster and select your cluster (in the example, our cluster is named
“temp-cluster”).
2. Click Settings, select vSphere DRS and click Edit (see Figure 7.1).
Figure 7.1
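The equivalent DRS configuration can be applied through the API. A minimal pyVmomi sketch, using the placeholder cluster name from this walkthrough:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "temp-cluster")
view.DestroyView()

# Enable DRS in fully automated mode.
drs_spec = vim.cluster.ConfigSpecEx(
    drsConfig=vim.cluster.DrsConfigInfo(
        enabled=True,
        defaultVmBehavior=vim.cluster.DrsConfigInfo.DrsBehavior.fullyAutomated,
        vmotionRate=3))  # migration threshold: 1 (conservative) to 5 (aggressive)
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=drs_spec, modify=True))
Disconnect(si)
```

The modify=True flag merges this change with the existing cluster configuration instead of replacing it.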
Figure 7.2
Figure 7.3
VMware Distributed Power Management (DPM) supports three power management protocols
to bring a host out of standby mode:
Intelligent Platform Management Interface (IPMI);
Hewlett Packard Enterprise Integrated Lights-Out (iLO);
Wake-on-LAN (WOL).
Each of these protocols requires separate hardware support and configuration. If a host does
not support any of these protocols, this host cannot be put into a standby mode by the DPM.
If a host supports multiple protocols, they are used in the following order: IPMI, iLO, WOL.
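If your hosts support one of these protocols, DPM can be enabled on the cluster with the same reconfiguration call used for DRS. A hedged pyVmomi sketch with placeholder connection details and cluster name:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "temp-cluster")
view.DestroyView()

# Enable Distributed Power Management for the cluster.
dpm_spec = vim.cluster.ConfigSpecEx(
    dpmConfig=vim.cluster.DpmConfigInfo(
        enabled=True,
        defaultDpmBehavior="automated"))  # "manual" only issues recommendations
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=dpm_spec, modify=True))
Disconnect(si)
```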
NOTE: Affinity Rules allow you to control the placement of virtual machines that interact in
particular ways with each other. For example, if you run a database server, a web server, and
an application server on different virtual machines, and they interact closely, you can create
an affinity rule to ensure they reside on the same ESXi host. This reduces network load and
can increase performance.
Go to vCenter -> Hosts and clusters. Select your cluster. Then click the Manage tab ->
Settings -> DRS Rules (see Figure 7.4).
Figure 7.4
Choose from three types of affinity rules: Keep Virtual Machines Together (affinity), Separate
Virtual Machines (anti-affinity), and Virtual Machines to Hosts (affinity or anti-affinity).
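As an illustration of the database/web/application example above, an affinity rule can also be created via the API. A minimal pyVmomi sketch with placeholder VM names:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

cluster = find(vim.ClusterComputeResource, "temp-cluster")
vm_db = find(vim.VirtualMachine, "DBServer")    # placeholder names
vm_app = find(vim.VirtualMachine, "AppServer")

# "Keep Virtual Machines Together": both VMs stay on the same host.
rule = vim.cluster.AffinityRuleSpec(name="db-with-app", enabled=True,
                                    vm=[vm_db, vm_app])
rule_spec = vim.cluster.RuleSpec(operation="add", info=rule)
WaitForTask(cluster.ReconfigureComputeResource_Task(
    spec=vim.cluster.ConfigSpecEx(rulesSpec=[rule_spec]), modify=True))
# For "Separate Virtual Machines", use vim.cluster.AntiAffinityRuleSpec instead.
Disconnect(si)
```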
Figure 7.5
4. Tick the Turn on vSphere HA checkbox and set the following options (see Figure 7.6).
Figure 7.6
Host Monitoring: If this option is enabled, frequent checks are performed to ensure each
ESXi host in the cluster is running. If a host failure occurs, the relevant virtual machines
are restarted on another host. Host Monitoring is also required for the VMware Fault
Tolerance recovery process to function properly. Remember to disable Host Monitoring
when performing network maintenance.
Virtual machine options. These options define how High Availability should react to host
failures and isolations:
• VM Restart Priority – determines the relative order in which virtual machines are
restarted after a host failure. You can specify the priority level for each VM: Disabled,
Low, Medium, or High. You can set your main VMs, such as those that run the Domain
Controller, the database server, and/or the email server, to restart with high priority.
• Host Isolation Response – three options are available here:
–– Leave powered on – when a network isolation occurs for the ESXi host, the state of
the virtual machines on the host remains unchanged. The virtual machines on the
isolated host continue to run, even if the host can no longer communicate with other
hosts in the cluster. This setting reduces the chances of a false positive.
–– Power off, then failover – when a network isolation occurs, all virtual machines
are powered off and restarted on another ESXi host. This is a hard stop. A power-
off response is initiated on the fourteenth second, and a restart is initiated on the
fifteenth second.
–– Shutdown, then failover – when a network isolation occurs, all virtual machines
running on that host are shut down via VMware Tools and restarted on another ESXi
host. This approach allows the services and programs that are running on the virtual
machines to be stopped correctly. If shutdown is not successful within 5 minutes, a
power-off response type is executed.
Admission control (see Figure 7.7). Admission control is used by vCenter to ensure that
sufficient resources are available in a cluster for failover protection. The cluster reserves
resources to allow failover for all running virtual machines on the specified number of
hosts.
Figure 7.7
Each Admission Control Policy has a separate Admission Control mechanism. Slots dictate
how many virtual machines can be powered on before vCenter triggers the “Out of
resources” notification. The Admission Control process is a function of vCenter, and not of
the ESXi host.
The percentage of Cluster Resources Reserved is the least restrictive and most flexible
Admission Control policy. 25% is the default reserved percentage, meaning that 25% of the
total CPU and total memory resource across the entire cluster is reserved for the cluster.
Failover hosts are the ESXi hosts that are reserved for a failover situation. Failover hosts
don’t factor into DRS recommendations or migrations, and virtual machines can’t run on
these hosts in the regular mode.
NOTE: Remember to enable Admission Control, because this option guarantees that virtual
machines can be restarted after a failure. A scripted example of enabling HA with the
percentage-based policy appears below, after the setup steps.
VM Monitoring. This service evaluates whether each virtual machine in the cluster is
running by checking for regular heartbeats and input/output activity from the VMware
Tools process running inside the guest. VM Monitoring differs from Host Monitoring in
that the item being watched is an individual virtual machine rather than an ESXi host. If
vSphere can’t detect VM heartbeats, the VM is rebooted. You can select the level of
sensitivity using a preset, or set the failure interval, the minimum uptime, and the
maximum per-VM resets manually (see Figure 7.8).
Figure 7.8
Datastore Heartbeating (see Figure 7.9). If the management network of an ESXi host
becomes isolated but the virtual machines are running, a restart signal is sent. Datastore
Heartbeating is used to determine more accurately the state of the ESXi host, even if the
management network fails. Thus, the feature reduces the probability of falsely triggering
the virtual machine reboot mechanism. There are locking mechanisms to prevent
concurrent usage of open files located on shared storage and avoid file corruption. HA
manages the existing Virtual Machine File System (VMFS) locking mechanism, which is also
called a “Heartbeat Region”; this is updated as long as the lock file exists. HA determines
that at least one file is opened on the VMFS volume by checking files specially created for
Datastore Heartbeating. These files are named in the VMname-hb format: WindowsVM-hb,
LinuxTest-hb, host1tst-hb, etc. You can find them with vSphere Client in the .vSphere-HA
directory located on the shared datastore. Go to Home -> Datastores -> DatastoreName ->
Manage -> Files. Don’t delete or modify these files.
Figure 7.9
5. Click OK to finish the High Availability setup and wait while vSphere HA is configured (see
Figure 7.10).
Figure 7.10
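The HA settings described above map directly onto the cluster's dasConfig in the vSphere API. Below is a minimal pyVmomi sketch that turns on vSphere HA with Host Monitoring and the default 25% admission control policy; the cluster name is the placeholder used throughout this walkthrough.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.ClusterComputeResource], True)
cluster = next(c for c in view.view if c.name == "temp-cluster")
view.DestroyView()

ha_spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(
        enabled=True,                    # Turn on vSphere HA
        hostMonitoring="enabled",        # disable during network maintenance
        admissionControlEnabled=True,
        admissionControlPolicy=vim.cluster.FailoverResourcesAdmissionControlPolicy(
            cpuFailoverResourcesPercent=25,       # default 25% CPU reserve
            memoryFailoverResourcesPercent=25)))  # default 25% memory reserve
WaitForTask(cluster.ReconfigureComputeResource_Task(spec=ha_spec, modify=True))
Disconnect(si)
```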
In High Availability mode, virtual machines need some time to load on another ESXi host after
the ESXi host on which they were running fails. With Fault Tolerance, a virtual machine has a
copy running on another ESXi host with disabled VM network connection. If the ESXi host with
the primary copy of a VM fails, the secondary copy on another ESXi host just needs to have
networking enabled; this is why the migration process looks seamless. If there are more than
two ESXi servers included in the cluster, vSphere HA runs the replica of the VM on the second
ESXi server at the moment of failure, then creates a new VM replica on a third ESXi server.
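Turning Fault Tolerance on for a VM is a single API call once the cluster prerequisites (HA, vMotion, and an FT logging network) are met. A minimal pyVmomi sketch, using the VM name from the example later in this section:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "VM2")  # placeholder VM name
view.DestroyView()

# Turning FT on creates the secondary VM on another host in the cluster.
WaitForTask(vm.CreateSecondaryVM_Task())

# To turn FT off later (this deletes the secondary VM and its history):
# WaitForTask(vm.TurnOffFaultToleranceForVM_Task())
Disconnect(si)
```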
Figure 8.1
A confirmation message appears for vSphere version 5.5, informing you of disk conversion
(see Figure 8.2). If you turn on Fault Tolerance, thin-provisioned disks, as well as disks that
zero out blocks as they are written to (lazy-zeroed thick-provisioned disks), become disks
with all blocks zeroed out in advance (eager-zeroed thick-provisioned disks). This conversion
means that the virtual machine uses more disk space, and the conversion itself needs some
processing time. This warning does not appear in vSphere 6, as thin-provisioned disks are
supported there.
Figure 8.2
Figure 8.3
With “High” Latency Sensitivity, the ESXi host provides vCPU access to physical CPU while
calculating the actual CPU load. With this option enabled, a virtual machine processor can
interact directly with the physical processor, without using the VMkernel scheduler. Thus, the
Latency Sensitivity mode is useful for virtual machines demanding high performance.
Here is an example of how the High Availability and Fault Tolerance features work.
Both ESXi servers are running in a High Availability cluster. The virtual machine VM2 is
running on ESXi Server 1 with the Fault Tolerance option enabled, and has an exact replica
with disabled networking running on ESXi Server 2. VM1 is also running on ESXi Server 1, but
the Fault Tolerance option is disabled for this virtual machine (see Figure 8.4):
Figure 8.4
Now, a failure occurs for ESXi Server 1. VM2, which was running on ESXi Server 1, also fails,
but the replica of VM2 that is still running on ESXi Server 2 becomes reachable in an instant;
networking is enabled for the replica by the automatic failover protection of the VMware Fault
Tolerance feature. VM2’s failover is seamless and instant. VM1 becomes unreachable because
of the same Server 1 failure, but since there is no replica of VM1, this virtual machine must be
migrated to ESXi Server 2. Loading VM1’s operating system and other services may take some
time (see Figure 8.5):
Figure 8.5
If you decide to turn off Fault Tolerance, click on the respective virtual machine and select: All
vCenter Actions –> Fault Tolerance –> Turn Off Fault Tolerance.
NOTE: There is a difference between disabling Fault Tolerance and turning off Fault Tolerance.
If you disable FT, the secondary virtual machines are preserved with their configuration and
history. Using this option allows you to re-enable FT in the future. Turning off VMware FT
deletes all secondary virtual machines, their configurations, and their history. Use this option
if you do not plan on re-enabling VMware FT (see Figure 8.6).
Figure 8.6
Figure 9.1
Figure 9.2
3. Right-click the ESXi host you want to remove from the cluster and select Move To... (see
Figure 9.3).
Figure 9.3
Figure 9.4
5. Right-click the ESXi host and select Exit Maintenance Mode (see Figure 9.5).
Figure 9.5
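The removal sequence above (enter maintenance mode, move the host, exit maintenance mode) can be scripted in the same way. A pyVmomi sketch with placeholder names:

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator@vsphere.local",
                  pwd="password", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

def find(vimtype, name):
    """Return the first inventory object of the given type with a matching name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

host = find(vim.HostSystem, "192.168.1.21")   # placeholder host
dc = find(vim.Datacenter, "Datacenter03")

# 1. Enter maintenance mode; running VMs must be migrated or powered off first.
WaitForTask(host.EnterMaintenanceMode_Task(timeout=0))
# 2. Moving a host in maintenance mode into the datacenter's host folder
#    takes it out of the cluster and makes it a standalone host.
WaitForTask(dc.hostFolder.MoveIntoFolder_Task([host]))
# 3. Exit maintenance mode.
WaitForTask(host.ExitMaintenanceMode_Task(timeout=0))
Disconnect(si)
```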
Conclusion
VMware vSphere is a virtualization platform with a long list of features that help manage
virtual machines and provide capability, reliability, and scalability. Clustering technologies
are widely used in vSphere to connect servers over the network and achieve better
performance in executing resource-intensive tasks. VMware supports creating Distributed
Resource Scheduler (DRS) clusters and High Availability (HA) clusters. Creating a DRS cluster
helps improve performance through rational usage of computing resources. An HA cluster
reduces the downtime of virtual machines in the event of failure by restarting VMs on
another host via a redundant network. The Fault Tolerance feature of HA clusters helps avoid
downtime altogether by providing seamless migration of virtual machines from a failed host
to a running host, which is vital for business-critical processes. Using vSphere HA and DRS
together combines automatic failover with load balancing, which helps provide a more
balanced cluster after vSphere HA moves virtual machines to different hosts.
High Availability and Fault Tolerance do not replace the need for data backup. In a cluster,
virtual machines are stored on a shared datastore and should be backed up to separate
storage. Combining VMware cluster features with backup ensures efficient resource
management, increased reliability, and data protection.
About NAKIVO
The winner of a “Best of VMworld 2018” Gold Award for Data Protection, NAKIVO is a
US corporation dedicated to developing the ultimate VM backup and site recovery solution.
With 20 consecutive quarters of double-digit growth, 5-star online community reviews,
97.3% customer satisfaction with support, and more than 10,000 deployments worldwide,
NAKIVO delivers an unprecedented level of protection for VMware, Hyper-V, and Amazon EC2
environments.
As a unique feature, NAKIVO Backup & Replication runs natively on leading storage systems
including QNAP, Synology, ASUSTOR, Western Digital, and NETGEAR to deliver up to 2X
performance advantage. The product also offers support for high-end deduplication
appliances including Dell/EMC Data Domain and NEC HYDRAstor. As one of the fastest-growing
data protection software vendors in the industry, NAKIVO provides a data protection
solution for major companies such as Coca-Cola, Honda, and China Airlines, and works
with over 3,000 channel partners in 137 countries worldwide. Learn more at [Link]
© 2018 NAKIVO, Inc. All rights reserved. All trademarks are the property of their respective owners.