Nutanix Cloud Clusters (NC2) on AWS Deployment and User Guide
November 7, 2023
Contents
NC2 Deployment...................................................................................39
Creating an Organization..........................................................................................................39
Updating an Organization............................................................................................... 40
Adding an AWS Cloud Account................................................................................................ 41
Deactivating a Cloud Account........................................................................................ 43
Reconnecting a Cloud Account...................................................................................... 44
Adding a Cloud Account Region..................................................................................... 44
Updating AWS Stack Configurations............................................................................... 45
Creating a Cluster.................................................................................................................... 47
AWS VPC Endpoints for S3...................................................................................................... 61
Creating a Gateway Endpoint......................................................................................... 61
Associating Route Tables With the Gateway Endpoint.....................................................63
NC2 Payment Methods..........................................................................69
Nutanix Licenses for NC2.........................................................................................................70
New Portfolio Licenses................................................................................................... 70
Legacy Portfolio Licenses............................................................................................... 73
Managing Licenses......................................................................................................... 75
Subscription Plans for NC2...................................................................................................... 76
NC2 Subscription Workflow......................................................................................................77
Nutanix Direct Subscription............................................................................................ 78
Subscribe to NC2 From AWS Marketplace...................................................................... 82
Changing Payment Method.......................................................................................................88
Canceling the Subscription Plan.............................................................................................. 91
Billing Management.................................................................................................................. 94
Viewing Billing and Usage Details...................................................................................94
Using the Usage Analytics API....................................................................................... 96
Cluster Management...........................................................................126
Updating the Cluster Capacity............................................................................................... 126
Manually Replacing a Host..................................................................................................... 129
Creating a Heterogeneous Cluster......................................................................................... 130
Hibernate and Resume in NC2............................................................................................... 131
Hibernating Your NC2 Cluster.......................................................................................131
Resuming an NC2 Cluster.............................................................................................132
Limitations in Hibernate and Resume............................................................................133
Terminating a Cluster............................................................................................................. 134
Multicast Traffic Management.................................................................................................134
Configuring AWS Transit Gateway for Multicast............................................................ 139
AWS Events in NC2................................................................................................................ 140
Displaying AWS Events................................................................................................. 141
Viewing Licensing Details.......................................................................................................142
Support Log Bundle Collection...............................................................................................142
Cluster Protect Configuration............................................................. 143
Prerequisites for Cluster Protect............................................................................................ 144
Limitations of Cluster Protect................................................................................................. 145
Protecting NC2 Clusters......................................................................................................... 146
Creating S3 Buckets..................................................................................................... 147
Protecting Prism Central Configuration.........................................................................147
Deploying Multicloud Snapshot Technology.................................................................. 150
Protecting UVM and Volume Groups Data.....................................................................152
Disabling Cluster Protect.............................................................................................. 155
Recovering NC2 Clusters....................................................................................................... 156
Setting Clusters to Failed State.................................................................................... 157
Recreating a Cluster..................................................................................................... 159
Recovering Prism Central and MST.............................................................................. 165
Recovering UVM and Volume Groups Data................................................................... 167
Reprotecting Clusters and Prism Central...................................................................... 170
CLI Commands Library...........................................................................................................171
Cost Analytics.....................................................................................205
Integrating Cost Governance with NC2.................................................................................. 205
Displaying Cost Analytics in the Cost Governance Console.................................................... 205
Integration with Third-Party Backup Solutions.............................................................. 209
Routine Maintenance.............................................................................................................. 209
Monitoring Certificates................................................................................................. 209
Nutanix Software Updates............................................................................................ 209
Managing Nutanix Licenses.......................................................................................... 209
System Credentials....................................................................................................... 210
Managing Access Keys and AWS Service Limits........................................................... 210
Emergency Maintenance........................................................................................................ 210
Automatic Node Failure Detection................................................................................ 210
Troubleshooting Deployment Issues....................................................................................... 211
Documentation Support and Feedback...................................................................................211
Support.................................................................................................................................. 212
AWS Support................................................................................................................ 212
Copyright............................................................................................ 213
ABOUT THIS DOCUMENT
This user guide describes the deployment processes for NC2 on AWS. The guide provides instructions
for setting up the Nutanix resources required for an NC2 on AWS deployment and for subscribing to NC2
payment plans. It also provides detailed steps for UVM network management, end-to-end steps for creating
a Nutanix cluster, and more.
This document is intended for users responsible for the deployment and configuration of NC2 on AWS.
Readers must be familiar with AWS concepts, such as AWS EC2 instances, AWS networking and security,
AWS storage, and VPN/Direct Connect. Readers must also be familiar with other Nutanix products, such as
Prism Element, Prism Central, and NCM Cost Governance (formerly Beam).
Document Organization
The following table shows how this user guide is organized and helps you find the most relevant sections in
the guide for the tasks that you want to perform.
• NC2 on AWS:
NC2 on AWS places the complete Nutanix hyperconverged infrastructure (HCI) stack directly on a
bare-metal instance in Amazon Elastic Compute Cloud (EC2). This bare-metal instance runs a Controller VM
(CVM) and Nutanix AHV as the hypervisor, like any on-premises Nutanix deployment, using the AWS
Elastic Network Interface (ENI) to connect to the network. AHV user VMs do not require any additional
configuration to access AWS services or other EC2 instances.
• NC2 on AWS runs on EC2 bare-metal instances. For more information on the supported EC2 bare-metal
instances, see Supported Regions and Bare-metal Instances.
Use Cases
NC2 on AWS is ideally suited for the following key use cases:
• Disaster Recovery on AWS: Configure a Nutanix Cloud Cluster on AWS as your remote backup and
data replication site to quickly recover your business-critical workloads in case of a disaster recovery
(DR) event at your primary data center. Benefit from AWS's worldwide geographical presence and
elasticity to create an Elastic DR configuration, and save DR costs by expanding your pilot-light
cluster only when a DR need arises.
• Capacity Bursting for Dev/Test: Increase developer productivity by provisioning additional capacity
for Dev/Test workloads on NC2 on AWS when you are running out of on-premises capacity. Use a
single management plane to operate and manage your workloads across your data center and NC2 on
AWS environments.
• Modernize Applications with AWS: Significantly accelerate your time to migrate applications to AWS
with a simple lift-and-shift operation—no need to refactor your workloads or rewrite your applications.
Get your on-prem workloads to AWS faster and modernize your applications with direct integrations
with all AWS services.
For more information, see NC2 Use Cases.
NC2 eliminates the complexities in managing networking, using multiple infrastructure tools, and
rearchitecting the applications.
NC2 offers the following key benefits:
• Cluster management:
• One private management subnet for the internal cluster management and communication between
CVM, AHV, and so on.
• One public subnet with an Internet gateway and NAT gateway to provide external connectivity to the
NC2 portal.
• One or more private subnets for UVM traffic, depending on your needs.
Note: All NC2 cluster deployments are single AZ deployments. Therefore, your UVM subnets will be in the
same AZ as the Management subnet. You must not add the Management subnet as a UVM subnet in Prism
Element because UVMs and Management VMs must be on separate subnets.
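The subnet rules above can be sanity-checked before deployment. The following sketch is illustrative only (the subnet IDs and AZ names are invented placeholders, and this is not a check NC2 itself performs): it flags a management subnet reused as a UVM subnet, and a UVM subnet outside the management subnet's Availability Zone.

```python
# Illustrative pre-deployment check for the NC2 subnet rules described above.
# Subnet IDs and AZ names below are invented placeholders, not real resources.

def validate_subnet_plan(mgmt_subnet: dict, uvm_subnets: list[dict]) -> list[str]:
    """Return a list of rule violations (an empty list means the plan is valid)."""
    errors = []
    for subnet in uvm_subnets:
        # Rule: UVMs and management VMs must be on separate subnets.
        if subnet["id"] == mgmt_subnet["id"]:
            errors.append(f"{subnet['id']}: management subnet reused as a UVM subnet")
        # Rule: all NC2 deployments are single-AZ, so UVM subnets must share
        # the management subnet's Availability Zone.
        if subnet["az"] != mgmt_subnet["az"]:
            errors.append(f"{subnet['id']}: AZ {subnet['az']} differs from {mgmt_subnet['az']}")
    return errors

mgmt = {"id": "subnet-mgmt01", "az": "us-west-2a"}
uvms = [{"id": "subnet-uvm01", "az": "us-west-2a"},
        {"id": "subnet-uvm02", "az": "us-west-2b"}]  # wrong AZ on purpose
print(validate_subnet_plan(mgmt, uvms))
```

Running such a check against your planned layout before clicking through the NC2 console can save a failed deployment.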
When you deploy a Nutanix cluster in AWS by using the NC2 console, you can deploy the cluster either in
a new VPC and private subnet or in an existing VPC and private subnet. If you opt for a new VPC, the NC2
console provisions a new VPC and a private subnet for management traffic in AWS during the cluster
creation process. You must manually create one or more separate subnets in AWS for user VMs.
Regardless of your deployment model, there are a few general outbound requirements for deploying
a Nutanix cluster in AWS on top of the existing requirements that on-premises clusters use for support
services. For more information on the endpoints the Nutanix cluster needs to communicate with for a
successful deployment, see Outbound Communication Requirements.
You can isolate your private subnets for UVMs between clusters and use the private Nutanix management
subnets to allow replication traffic between them. All private subnets can share the same routing table. You
must edit the inbound access in each Availability Zone’s security group as shown in the following tables to
allow replication traffic.
If Availability Zone 1 goes down, you can activate protected VMs on the cluster in Availability Zone 2.
Once Availability Zone 1 comes back online, you can redeploy a Nutanix cluster in Availability Zone 1 and
reestablish data protection. New clusters require full replication.
The following table lists the inbound ports you need to establish replication between an on-premises cluster
and a Nutanix cluster running in AWS. You can create these ports on the infrastructure subnet security
group that was automatically created when you deployed NC2 on AWS. The ports must be open in both
directions.
Note: Make sure you set up the cluster virtual IP address for your on-premises and AWS clusters. This IP
address is the destination address for the remote site.
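As a sketch of the security-group change, the following builds the IpPermissions structure that the AWS CLI or boto3 expects for inbound rules. The port numbers (2009 and 2020, commonly associated with Nutanix Stargate and Cerebro replication) and the CIDR are assumptions for illustration; substitute the values from the inbound-ports table in this guide and your remote cluster's subnet.

```python
# Build EC2 security-group ingress rules for Nutanix replication traffic.
# Port numbers and the CIDR below are illustrative placeholders; use the
# values from the inbound-ports table in this guide and your remote subnet.

def replication_ingress_rules(remote_cidr: str, tcp_ports: list[int]) -> list[dict]:
    """Return IpPermissions entries in the shape boto3 and the AWS CLI expect."""
    return [
        {
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": remote_cidr,
                          "Description": "Nutanix replication"}],
        }
        for port in tcp_ports
    ]

# 2009 (Stargate) and 2020 (Cerebro) are commonly used for Nutanix async
# replication -- confirm against the ports table in this guide.
rules = replication_ingress_rules("10.10.0.0/24", [2009, 2020])
print(len(rules))  # one rule per port
# To apply (requires AWS credentials):
#   boto3.client("ec2").authorize_security_group_ingress(
#       GroupId="sg-xxxxxxxx", IpPermissions=rules)
```

Because the ports must be open in both directions, apply the equivalent rules on the security group at each site.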
Nutanix has native built-in replication capabilities to recover from a complete cluster failure. Nutanix
supports asynchronous replication. With asynchronous replication, you can set your Recovery Point
Objective (RPO) to one hour.
The following table lists the optional AWS components that can be used with the NC2 on AWS deployment.
Transit Gateway: Yes. Charges are also applicable for data traffic.
Network Services
AWS DNS: Used by clusters for VMs by default. You can configure AHV to use your own DNS.
You can view all the resources allocated to a cluster running on AWS.
To view the cloud resources created by NC2, perform the following:
1. Sign in to NC2 from the My Nutanix dashboard.
2. In the Clusters page, click the name of the cluster.
3. On the left navigation pane, click Cloud Resources.
The Cloud Resources page displays all the resources associated with the cluster.
NC2 Architecture
The bare-metal instance runs the AHV hypervisor, and the hypervisor, like any on-premises deployment,
runs a Controller Virtual Machine (CVM) with direct access to the NVMe instance storage hardware.
AOS Storage uses the following three core principles for distributed systems to achieve linear performance
at scale:
This enables our MapReduce framework (Curator) to use the full power of the cluster to perform
activities concurrently, such as data reprotection, compression, erasure coding, and deduplication.
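The concurrency idea can be sketched with a toy fan-out. This is not Curator's actual implementation, only an illustration of spreading background work across every node at once:

```python
# Toy illustration of distributing background tasks (e.g., compression,
# deduplication scans) across every node in a cluster concurrently.
# This is NOT Curator's real implementation -- just the scheduling idea.
from concurrent.futures import ThreadPoolExecutor

def run_on_node(node: str, task: str) -> str:
    # A real system would dispatch the work to the node's local CVM.
    return f"{task} completed on {node}"

nodes = ["node-1", "node-2", "node-3", "node-4"]
tasks = ["compression", "erasure-coding", "dedup-scan", "reprotection"]

# Fan the tasks out, one per node, so the whole cluster works in parallel.
with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
    results = list(pool.map(run_on_node, nodes, tasks))

print(results[0])
```

Because every node contributes a worker, adding nodes adds throughput for these background activities, which is what gives the linear scaling described above.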
Setting up a cluster with redundancy factor 2 (RF2) protects data against a single rack failure, and setting it
up with RF3 protects against a two-rack failure. To protect against multiple correlated failures within a
data center or an entire AZ failure, Nutanix recommends that you set up synchronous replication to a second
cluster in a different AZ in the same region, or asynchronous replication to an AZ in a different region. AWS
data transfer charges may apply.
If you want to disable strict rack awareness, run the following nCLI command:
ncli cluster disable-strict-domain-awareness
Contact Nutanix Support for assistance if you receive an alert in the Prism Element web console that
indicates your cluster has lost rack awareness.
Note: If your cluster runs in a single AZ without protection (either through disaster recovery to on-prem
or Nutanix Disaster Recovery) for more than 30 days, the Nutanix Support portal displays a notification
indicating that your cluster is not protected.
The notification includes a list of all the clusters that are in a single AZ without protection.
Hover over the notification for more details and click Acknowledge. Once you acknowledge the
notification, the notification disappears and appears only if another cluster exceeds 30 days in a
single availability zone without protection.
Nutanix has native built-in replication capabilities to recover from a complete cluster failure. Nutanix
supports asynchronous replication, with which you can set your Recovery Point Objective (RPO) to
one hour.
Note: Do not use the AWS root user for any deployment or operations related to NC2.
NC2 on AWS does not use AWS Secrets Manager for maintaining any stored secrets. All
customer-sensitive data is stored on the customer-managed cluster; local NVMe storage on the
bare-metal instance is used for storing it. Nutanix does not have any visibility
into customer-sensitive data stored locally on the cluster. Any data sent to Nutanix concerning
cluster health is stripped of Personally Identifiable Information (PII).
Note: Nutanix recommends following the policy of least privilege for all access granted while deploying
NC2. For more information, see NC2 User Management.
For more information about how security is implemented in a Nutanix Cluster environment, see Network
Security using AWS Security Groups.
Data Encryption
To help reduce cost and complexity, Nutanix supports a native local key manager (LKM) for all clusters with
three or more nodes. The LKM runs as a service distributed among all the nodes. You can activate LKM
from Prism Element to enable encryption without adding another silo to manage. If you are looking to
simplify your infrastructure operations, you can also use one-click infrastructure for your key manager.
Organizations often purchase external key managers (EKMs) separately for both software and hardware.
However, because the Nutanix LKM runs natively in the CVM, it is highly available and there is no variable
add-on pricing based on the number of nodes. Every time you add a node, you know the final cost.
When you upgrade your cluster, the key management services are also upgraded. By upgrading the
infrastructure and management services in lockstep, you maintain your security posture and availability
by staying in line with the support matrix.
Nutanix software encryption provides native AES-256 data-at-rest encryption, which can interact with any
KMIP-compliant or TCG-compliant external KMS server (Vormetric, SafeNet, and so on) and the Nutanix
native KMS, introduced in AOS version 5.8. The system uses Intel AES-NI acceleration for encryption and
decryption processes to minimize any potential performance impacts. Nutanix software encryption also
provides in-transit encryption. Note that in-transit encryption is currently applicable within a Nutanix cluster
for data RF.
• IAMFullAccess: Grants the NC2 console access to your AWS account, and grants AWS API access
to the AHV instances
• AWS_ConfigRole: Grants AWS Config permission to get configuration details for supported AWS
resources
• AWSCloudFormationFullAccess: Used to create the initial AWS resources required to link your
AWS account and create a CloudFormation stack
Use the credentials of this IAM user when you are adding your AWS cloud account to the NC2
console. When you are adding your AWS cloud account, you run a CloudFormation template, and the
CloudFormation script adds two IAM roles to your AWS account. One role allows the NC2 console to
access your AWS account by using APIs and the other role is assigned to each of your bare-metal
instances. For more information on IAM roles, see NC2 Security Approach.
See Security Best Practices in IAM for more information on securing your AWS resources.
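The AWS managed policies named above have standard-pattern ARNs. The sketch below reproduces them from memory (in particular, the service-role path for AWS_ConfigRole is an assumption), so verify the exact ARNs in the IAM console before attaching them:

```python
# Managed-policy ARNs for the IAM user described above, built from AWS's
# standard ARN pattern. The policy names come from this guide; the
# service-role path for AWS_ConfigRole is an assumption -- verify in the
# IAM console before attaching.

AWS_MANAGED_PREFIX = "arn:aws:iam::aws:policy/"

required_policies = {
    "IAMFullAccess": AWS_MANAGED_PREFIX + "IAMFullAccess",
    "AWSCloudFormationFullAccess": AWS_MANAGED_PREFIX + "AWSCloudFormationFullAccess",
    "AWS_ConfigRole": AWS_MANAGED_PREFIX + "service-role/AWS_ConfigRole",
}

for name, arn in required_policies.items():
    print(f"{name}: {arn}")
# To attach with boto3 (requires credentials; user name is a placeholder):
#   iam.attach_user_policy(UserName="nc2-admin", PolicyArn=arn)
```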
Note: Before you deploy a cluster, check if the EC2 instance type is supported in the Availability Zone in
which you want to deploy the cluster.
Not all instance types are supported in every Availability Zone in an AWS region. An error
message is displayed if you try to deploy a cluster with an instance type that is not supported
in the Availability Zone you selected.
Configure a sufficient vCPU limit for your AWS account. If you do not have a sufficient vCPU limit set
for your AWS account, cluster creation fails.
You can calculate your vCPU limit in the AWS console under EC2 > Limits > Limits Calculator.
Note: Each node in a Nutanix cluster has two EBS volumes attached (AHV EBS and CVM EBS). Both are
encrypted gp3 volumes. The AHV EBS volume is 100 GB and the CVM EBS volume is 150 GB.
To learn more about setting AWS vCPU Limits for NC2, see the Nutanix University video.
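A quick way to estimate the quota you need is to total the vCPUs and EBS capacity per node. The vCPU counts below are AWS-published figures for these bare-metal types reproduced from memory, so verify them in the AWS console (or the Limits Calculator) before relying on them:

```python
# Estimate the EC2 vCPU quota and EBS capacity a planned cluster consumes.
# vCPU counts are AWS-published values reproduced from memory -- verify
# them in the AWS console before relying on this estimate.

VCPUS_PER_INSTANCE = {
    "i3.metal": 72, "i3en.metal": 96, "i4i.metal": 128,
    "z1d.metal": 48, "m5d.metal": 96, "m6id.metal": 128,
    "g4dn.metal": 96,
}
AHV_EBS_GB, CVM_EBS_GB = 100, 150  # per-node gp3 volumes, per this guide

def cluster_requirements(nodes: dict[str, int]) -> tuple[int, int]:
    """Return (total vCPUs, total EBS GB) for a {instance_type: count} plan."""
    vcpus = sum(VCPUS_PER_INSTANCE[t] * n for t, n in nodes.items())
    ebs_gb = (AHV_EBS_GB + CVM_EBS_GB) * sum(nodes.values())
    return vcpus, ebs_gb

# Example: a 4-node i3.metal cluster.
vcpus, ebs = cluster_requirements({"i3.metal": 4})
print(vcpus, ebs)  # 288 1000
```

If the vCPU total exceeds your account's current limit, request a quota increase before creating the cluster.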
NC2 on AWS supports accessing the instance metadata from a running instance using one of the following
methods:
Note: NC2 might not support some bare-metal instance types in certain regions due to limitations in the
number of partitions available. NC2 supports EC2 bare-metal instances in regions with three or more
partitions. The support for g4dn.metal instance type is only available on clusters with AOS 6.1.1 and 5.20.4
or later releases.
You can use a combination of i3.metal, i3en.metal, and i4i.metal instance types or
z1d.metal, m5d.metal, and m6id.metal instance types while creating a new cluster or
expanding the cluster capacity of an already running cluster. The combination of these instance
types is subject to bare-metal support from AWS in the region where the cluster is being
deployed. For more details, see Creating a Heterogeneous Cluster.
You can only create homogeneous clusters with g4dn.metal instances; this instance type cannot be used
to create a heterogeneous cluster.
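The mixing rules above can be expressed as a small validator. This is an illustrative sketch of the stated rules, not an NC2 API; region-level bare-metal availability still has to be checked separately.

```python
# Check a proposed node mix against the mixing rules described above:
# instance types may be combined only within one of the two groups, and
# g4dn.metal clusters must be homogeneous. Regional bare-metal support
# from AWS is a separate check not modeled here.

GROUP_A = {"i3.metal", "i3en.metal", "i4i.metal"}
GROUP_B = {"z1d.metal", "m5d.metal", "m6id.metal"}

def valid_mix(instance_types: set[str]) -> bool:
    if "g4dn.metal" in instance_types:
        return instance_types == {"g4dn.metal"}   # homogeneous only
    return instance_types <= GROUP_A or instance_types <= GROUP_B

print(valid_mix({"i3.metal", "i4i.metal"}))   # True: within group A
print(valid_mix({"i3.metal", "m5d.metal"}))   # False: crosses groups
print(valid_mix({"g4dn.metal", "i3.metal"}))  # False: g4dn not mixable
```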
The following table lists the AWS EC2 bare-metal instance types supported by Nutanix.
For more information, see Hardware Platform Spec Sheets. Select NC2 on AWS from the Select your
preferred Platform Providers list.
The following table lists the detailed information for each bare-metal instance type supported in each AWS
region.
* - These regions are not auto-enabled by AWS. Ensure you first enable them in your AWS account before
using them with NC2. For more information on how to enable a region, see AWS documentation. Once you
have enabled these regions in your AWS console, ensure they are also selected in your NC2 portal. For
more information, see the instructions about adding cloud regions to the NC2 console in Adding an AWS
Cloud Account.
Note: An instance type may not be supported in a region because the number of partitions is less than the
minimum of three required by NC2, or because the instance type is not supported by AWS in that
region.
Note: You have to manually install the NVIDIA driver on each new node when you expand the cluster size.
Also, NC2 may automatically replace nodes in your cluster if there are issues with node availability; in that
case, you must also install the NVIDIA driver on the new node procured by NC2.
Note: If a GPU card is present in your cluster, LCM restricts updates to AHV if it does not detect a compatible
NVIDIA GRID driver in its inventory. To fetch a compatible NVIDIA GRID driver for your version of AHV, see
Updating the NVIDIA GRID Driver with LCM.
Perform the following steps to install the NVIDIA driver on the G4dn hosts:
1. Download the NVIDIA host driver version 13.0 from the Nutanix portal at https://portal.nutanix.com/
page/downloads?product=ahv&bit=NVIDIA.
2. For detailed installation instructions on NVIDIA driver, see Installing the NVIDIA grid driver.
Note: Users must sign in to the Controller VMs in the cluster with the SSH key pair provided during
cluster creation instead of the default user credentials.
For more information about assigning and configuring a vGPU profile to a VM, see "Creating a
VM (AHV)" in the "Prism Web Console Guide".
Note: NVIDIA vGPU guest OS drivers for product versions 11.0 or later can be acquired using
NVIDIA Licensing Software Downloads under:
• All Available
• Product Family = vGPU
• Platform = Linux KVM
• Platform Version = All Supported
• Product Version = (match host driver version)
AHV-compatible host and guest drivers for older AOS versions can be found on the NVIDIA
Licensing Software Downloads site under 'Platform = Nutanix AHV'.
Limitations
Following are the limitations of NC2 in this release:
• A maximum of 28 nodes is supported in a cluster. NC2 supports 28-node cluster deployment in AWS
regions that have seven placement groups.
Note: NC2 does not recommend using single-node clusters in production environments.
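The relationship between the 28-node maximum and the seven placement groups can be visualized with a toy round-robin assignment. This illustrates the even spread that underpins rack awareness, not NC2's actual placement algorithm:

```python
# Sketch of how 28 nodes spread across seven placement groups (partitions),
# which underpins the rack awareness described in this guide. The
# round-robin assignment here is an illustration, not NC2's real placement.

def assign_partitions(node_count: int, partitions: int) -> dict[int, list[int]]:
    layout = {p: [] for p in range(partitions)}
    for node in range(node_count):
        layout[node % partitions].append(node)  # round-robin placement
    return layout

layout = assign_partitions(28, 7)
sizes = [len(nodes) for nodes in layout.values()]
print(sizes)  # [4, 4, 4, 4, 4, 4, 4] -- an even spread across partitions
```

With nodes spread this way, losing a single partition (rack) takes out only a fraction of the cluster, which is what RF2 rack awareness is designed to tolerate.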
In a cluster running in AWS, you have no visibility into the actual cloud infrastructure such as the
ToR switches. API support is not available to discover the cloud infrastructure components in
Nutanix clusters. Given that the cluster is deployed in a single VPC, the switch view is replaced
by the VPC. Any configuration options on the network switch are disabled for clusters deployed in
AWS.
Uplink Configuration:
The functionality to update the uplink configuration is disabled for a cluster running in AWS.
Hardware Configuration:
The Switch tab in the Hardware menu of the Prism Element web console is disabled for a cluster
running in AWS.
Rack Configuration:
The functionality to configure racks is disabled for a cluster running in AWS. Clusters are deployed
as rack-aware by default. APIs to create racks are also disabled on clusters running in AWS.
Broadcast and LLDP:
AWS does not support broadcast and any link layer information based on protocols such as LLDP.
Host NIC:
Elastic Network Interfaces (ENIs) provisioned on bare-metal AWS instances are virtual interfaces
provided by Nitro cards. AWS does not provide any bandwidth guarantees for each ENI, but
provides an aggregate multi-flow bandwidth of 25 Gbps. Also, when clusters are deployed on AWS, ENI
creation and deletion are dynamic, based on UVMs, and you do not need to perform these workflows.
Hence, the Prism Element web console displays only a single host NIC, eth0, which is the primary
ENI of the bare-metal instance. All configuration and statistical attributes are associated with eth0.
Cluster Operations
Perform the following actions using the NC2 console:
• Cluster deployment and provisioning must be performed by using the NC2 console and not by using
Foundation.
• Perform add node and remove node operations by using the NC2 console and not by using the Prism
Element web console.
aCLI Operations
The following aCLI commands are disabled in a cluster in AWS:
net: create_cluster_vswitch, delete_cluster_vswitch, get_cluster_vswitch, list_cluster_vswitch, update_cluster_vswitch
host: enter_maintenance_mode, enter_maintenance_mode_check, exit_maintenance_mode
nCLI Operations
The following nCLI commands are disabled in a cluster in AWS:
cluster: edit-hypervisor-lldp-params, get-hypervisor-lldp-config, edit-param disable-degraded-state-monitoring
disk: delete, remove-start, remove-status
software: download, list, remove, upload
API Operations
The following API calls are disabled or changed in a Nutanix cluster running in AWS:
POST /hosts/{hostid}/enter_maintenance_mode: Not supported
POST /hosts/{hostid}/exit_maintenance_mode: Not supported
GET /clusters: Values for the rack and block configuration are not displayed.
POST /cluster/block_aware_fixer: Not supported
DELETE /api/nutanix/v1/cluster/rackable_units/{uuid}: Not supported
DELETE /api/nutanix/v3/rackable_units/{uuid}: Not supported
DELETE /api/nutanix/v3/disks/{id}: Not supported
• IAMFullAccess: Enables the NC2 console to run the CloudFormation template in AWS to link
your AWS and NC2 accounts.
You use the credentials of this IAM user when you are adding your AWS cloud account to
the NC2 console. When you add your AWS cloud account, you run a CloudFormation
template, and the CloudFormation script adds two IAM roles to your AWS account. One role
allows the NC2 console to access your AWS account by using APIs, and the other role is
assigned to each of your bare-metal instances.
Note: Only the user account you use to add your AWS account to NC2 has the IAMFullAccess
privilege; the NC2 console itself does not have the IAMFullAccess privilege.
• AWS_ConfigRole: Grants AWS Config permission to get configuration details for supported AWS
resources
• AWSCloudFormationFullAccess: Used to create the initial AWS resources needed to link your
AWS account and create a CloudFormation stack
Note: These permissions are only required for the creation of the CloudFormation template; NC2
does not use them for any other purpose.
3. A VPC
4. A private subnet for management traffic
5. One or more private subnets for user VM traffic
6. Two new AWS S3 buckets with a Nutanix IAM role, if you want to use the Cluster Protect feature to
protect Prism Central, UVM, and volume group data.
See the AWS documentation for instructions about how to configure these requirements.
2. In the NC2 console:
1. A My Nutanix account to access the NC2 console.
See NC2 Payment Methods on page 69 for more information.
2. An organization
See Creating an Organization on page 39 for more information.
Procedure
1. Go to https://my.nutanix.com.
3. Enter your details, including first name, last name, company name, job title, phone number,
country, email, and password.
Follow the specified password policy while creating the password. Personal domain email addresses,
such as gmail.com or yahoo.com are not allowed. You must sign up with a company email address.
4. Click Submit.
A confirmation page appears and you receive an email from [email protected] after you
successfully complete the sign-up process.
6. Sign in to the portal using the credentials you specified during the sign-up process.
A default Personal workspace is created after you successfully create a My Nutanix account. You can
rename your workspaces. For more information on workspaces, see Workspace Management.
Note: The default Personal workspace name contains the domain, followed by the email address of the
user and the word "tenant".
Note: The owner of the My Nutanix workspace that has been used to start the free trial for NC2 must add
other users from the NC2 console with appropriate RBAC if those users need to manage clusters in the
same tenant. For more information on adding users and the roles that can be assigned, see NC2 User
Management.
Note: You are responsible for any hardware and cloud services costs incurred during the NC2 free trial.
Note: Ensure that you select the correct workspace from the Workspace dropdown list on the My
Nutanix dashboard. For more information on workspaces, see Workspace Management.
2. On the My Nutanix dashboard, scroll to Cloud Services, and under Nutanix Cloud Clusters (NC2),
click Get Started.
3. On the Nutanix Cloud Clusters (NC2) on Public Clouds page, under Try NC2, click Start your 30
day free trial.
4. You are redirected to the NC2 console. When prompted to accept the Nutanix Cloud Services Terms of
Service, click I Accept. The NC2 console opens in a new tab. You can now start using NC2.
Note: If you want to subscribe to NC2 instead of using a free trial, you can click the Select from our
available subscription options to get started option, and then complete the subscription on the
Nutanix Billing Center.
Creating an Organization
An organization in the NC2 console allows you to segregate your clusters based on your specific
requirements. For example, create an organization named Finance and then create a cluster in that
organization to run only your finance-related applications.
Procedure
Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the Workspace
dropdown list that shows the workspaces you are part of and that you have used while subscribing to
NC2.
3. In the Create a new organization dialog box, do the following in the indicated fields:
a. Customer. Select the customer account in which you want to create the organization.
b. Organization name. Enter a name for the organization.
c. Organization URL. The URL name is automatically generated. If needed, the name can be
modified.
4. Click Create.
After successful creation, the new organization is listed in the Organizations tab.
Updating an Organization
Administrators can update the basic information for an organization from the NC2 console.
Note: Changes applied to the organization entity affect the entirety of the organization and any accounts
listed underneath it.
Procedure
2. In the Organization page, select the ellipsis button of a corresponding organization and click Update.
a. Navigate to the Basic Info tab of the Organization entity's update page.
b. Edit any of the fields as required.
c. Click Save.
Note: You can add one AWS account to multiple organizations within the same customer entity. However,
you cannot add the same AWS account to two or more different Customer (tenant) entities. If you have
already added an AWS account to an organization and want to add the same AWS account to another
organization, follow the same process, but you do not need to create the CloudFormation template.
If a cluster is present, do not delete the CloudFormation stacks.
Procedure
Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the
Workspace dropdown list that shows the workspaces you are part of and that you have used while
subscribing to NC2.
3. Click the ellipsis next to the organization that you want to add the cloud account to and click Cloud
accounts.
6. In the Name field, type a name for your AWS cloud account.
Note: You can find your Account ID in My Account in the AWS cloud console. Ensure that you enter
the AWS cloud account ID without hyphens.
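The console expects the 12-digit account ID with no hyphens, even though the AWS console often displays it hyphenated. A minimal sketch that normalizes and validates the ID before you paste it (the function name is illustrative, not part of NC2):

```python
import re

def normalize_account_id(raw: str) -> str:
    """Strip hyphens/spaces from an AWS account ID and verify it is 12 digits."""
    digits = re.sub(r"[-\s]", "", raw)
    if not re.fullmatch(r"\d{12}", digits):
        raise ValueError(f"not a 12-digit AWS account ID: {raw!r}")
    return digits
```

For example, `normalize_account_id("1234-5678-9012")` returns `"123456789012"`.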
a. Sign in to the AWS account in which you want to create Nutanix clusters.
This account is the same AWS account that is linked to the Account ID you entered in step 7.
b. In the Quick create stack screen, note the template URL, stack name, and other parameters.
c. Select the I acknowledge that AWS CloudFormation might create IAM resources with
custom names check box.
d. Click Create stack.
e. Monitor the progress of the creation of the stack in the Events tab.
f. Wait until the Status changes to CREATE_COMPLETE.
You can view information about your CloudFormation stack, namely Nutanix-Clusters-High-Nc2-
Cloud-Stack-Prod, on the Stacks page of the CloudFormation console.
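The wait for CREATE_COMPLETE in steps e and f can be automated. The sketch below interprets the status string returned by, for example, `aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].StackStatus' --output text`; the helper and state sets are my own illustration, not part of NC2:

```python
# Stack states relevant while the NC2 CloudFormation stack is being created.
SUCCESS_STATES = {"CREATE_COMPLETE"}
FAILURE_STATES = {"CREATE_FAILED", "ROLLBACK_IN_PROGRESS",
                  "ROLLBACK_COMPLETE", "ROLLBACK_FAILED"}

def stack_succeeded(status: str) -> bool:
    """True once the stack reaches CREATE_COMPLETE; raises if creation failed."""
    if status in FAILURE_STATES:
        raise RuntimeError(f"stack creation failed: {status}")
    return status in SUCCESS_STATES
```

You could poll this in a loop, or simply run `aws cloudformation wait stack-create-complete --stack-name <name>` instead.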
» Select All supported regions if you want to create clusters in any of the supported AWS regions.
» Select Specify regions if you want to create clusters in specific AWS regions and select the
regions of your choice from the list of available AWS regions.
Note: Some regions are not auto-enabled by AWS. Ensure you first enable them in your AWS
account before using them with NC2. For more information, see Supported Regions and Bare-
metal Instances.
11. Select the add cloud account disclaimer checkbox for acknowledgment.
Note: A cloud account that has existing NC2 accounts cannot be deactivated. You must terminate all NC2
accounts using the cloud account resources first.
Procedure
1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.
Procedure
1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.
2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.
3. Find the cloud account you want to reconnect. Click the ellipsis icon against the cloud account and click
Reconnect.
4. If the underlying issue(s) were addressed and the NC2 console can communicate with the cloud
account infrastructure, the account status will change to R.
Note: Administrators should ensure they have sufficient resource limits in the regions they decide to add
before adding those regions through the NC2 console.
Procedure
1. Navigate to the Customer or Organization dashboard in the NC2 console where the cloud account is
registered.
2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.
3. Find the cloud account where you want to add a new cloud region. Click the ellipsis icon against the
cloud account and click Add regions. A new window appears.
• All supported regions: Select this option if you would like to add all other supported regions
besides those you have already specified.
• Specify regions: Select this option if you would like to add just a few additional supported regions
to your cloud account. Click inside the regions field and select as many regions as you want from the
drop-down menu.
5. Once you have made your selection, click Save. You will receive updates in your notification center
regarding the status.
Procedure
1. Navigate to the Customer or Organization dashboard in the NC2 console, where the cloud account is
registered.
2. Click the ellipsis icon against the desired organization or customer and then click Cloud Accounts.
3. Find the cloud account for which you want to update the configurations. Click the ellipsis icon against
the cloud account and click Update.
• Update Stack: The Update Stack tab provides your CloudFormation Stack template URL and
Stack parameters. These details can be used to update IAM (Identity and Access Management)
roles.
For example, to use new product features, you may need to use the CloudFormation Stack template
URL to expand your IAM permissions after an NC2 product update.
Note: To recreate your CloudFormation stack, you must delete the existing stack in your AWS
Console, which you can access directly from the Recreate Stack sub-tab.
Creating a Cluster
Create a cluster in AWS by using NC2. Your Nutanix cluster runs on EC2 bare-metal instances in AWS.
For more information on the AWS components that are either installed when the option to create a new
VPC is selected during NC2 on AWS deployment or you need to install manually when you choose to use
an existing VPC, see AWS Components Installed.
Note: Each node in a Nutanix cluster has two EBS volumes attached (AHV EBS and CVM EBS). Both are
encrypted gp3 volumes. The AHV EBS volume is 100 GB and the CVM EBS volume is 150 GB.
AWS charges you for EBS volumes regardless of the cluster state (running or hibernated). These
charges are incurred from the time the cluster is created until it is deleted. See the AWS Pricing
Calculator for information about how AWS bills you for EBS volumes.
AWS bills you an additional charge for the EBS volumes and S3 storage for the time the cluster is
hibernated. If a node becomes unhealthy and you add another node to the cluster to evacuate data
or VMs, AWS also charges you for the new node.
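As a rough illustration of the note above, the per-node EBS footprint is fixed at 100 GB (AHV) plus 150 GB (CVM). The sketch below computes the provisioned capacity for a cluster; the per-GB rate is a placeholder assumption, not an AWS quote, so check the AWS Pricing Calculator for real numbers:

```python
AHV_EBS_GB = 100   # per node, from the note above
CVM_EBS_GB = 150   # per node, from the note above

def cluster_ebs_gb(nodes: int) -> int:
    """Total provisioned EBS capacity (GB) for a cluster of the given size."""
    return nodes * (AHV_EBS_GB + CVM_EBS_GB)

def monthly_ebs_cost(nodes: int, usd_per_gb_month: float = 0.08) -> float:
    """Rough estimate; the rate is an assumed placeholder, not an AWS price."""
    return cluster_ebs_gb(nodes) * usd_per_gb_month
```

For example, a minimum three-node cluster provisions `cluster_ebs_gb(3)` = 750 GB of EBS.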
Note: On the My Nutanix dashboard, ensure that you select the correct workspace from the
Workspace dropdown list that shows the workspaces you are part of and that you have used while
subscribing to NC2.
» If you are creating a cluster for the first time, under You have no clusters, click Create Cluster.
» If you have created clusters before, click Create Cluster in the top-right corner of the Clusters
page.
» General Purpose: A cluster that utilizes general purpose Nutanix licenses. For more information
on NCI licensing, see Nutanix Licenses for NC2.
» Virtual Desktop Infrastructure (VDI): A cluster that utilizes Nutanix licenses for virtual desktops.
For more information on NCI and EUC licensing, see Nutanix Licenses for NC2.
4. In the Cloud Provider tab of the Create Cluster dialog box, do the following:
a. Organization. Select the organization in which you want to create the cluster.
b. Cluster Name. Type a name for the cluster.
c. Cloud Provider. Select AWS.
d. Cloud Account. Select the AWS cloud account in which you want to create the cluster.
e. Region. Select the AWS region in which you want to create the cluster.
f. (If you select VDI) Under Consumption Method, the User-based consumption method is
selected by default. In this case, the consumption and cluster pricing are based on the number
of users.
Note: The general purpose cluster uses a capacity-based method by default, where the
consumption and cluster pricing are based on the capacity provisioned in the cluster.
g. In Advanced Settings, under Scheduled Cluster Termination, NC2 can delete the cluster at a
scheduled time if you are creating a cluster for a limited time or for testing purposes. Specify
the following:
• Terminate on. Select the date and time when you want the cluster to be deleted.
• Time zone. Select a time zone from the available options.
Note: The cluster will be destroyed, and data will be deleted automatically at the specified time.
This is an irreversible action and data cannot be retrieved once the cluster is terminated.
5. In the Capacity tab, do the following on the Capacity and Redundancy page:
Under Cluster Capacity and Redundancy
• Host type: The instance type used during initial cluster creation is displayed.
• Number of Hosts. Click + or - depending on whether you want to add or remove nodes.
Note: A maximum of 28 nodes are supported in a cluster. NC2 supports 28-node cluster
deployment in AWS regions that have seven placement groups. Also, there must be at least
three nodes in a cluster.
• Add Host Type: The other compatible instance types are displayed depending on the instance
type used for the cluster. For example, if you have used an i3.metal node for the cluster, the
i3en.metal and i4i.metal instance types are displayed.
Note: You can create a heterogeneous cluster using a combination of i3.metal, i3en.metal,
and i4i.metal instance types or z1d.metal, m5d.metal, and m6id.metal instance types.
The Add Host Type option is disabled when no compatible node types are available in
the region where the cluster is being deployed.
• Under Redundancy: Select one of the following redundancy factors (RF) for your cluster.
• RF 1: The number of copies of data replicated across the cluster is 1. The number of nodes for
RF1 must be 1.
Note: RF1 can only be used for single-node clusters. Single-node clusters are not
recommended in production environments. You can configure the cluster with RF1 only for
single-node clusters.
• RF 2: The number of copies of data replicated across the cluster is 2. The minimum number of
nodes for RF2 must be 3.
• RF 3: The number of copies of data replicated across the cluster is 3. The minimum number of
nodes for RF3 must be 5.
• Host type. Select the type of bare-metal instance that you want your cluster to run on.
• Number of Hosts. Select the number of hosts that you want in your cluster.
Note: A maximum of 28 nodes are supported in a cluster. NC2 supports 28-node cluster
deployment in AWS regions that have seven placement groups.
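The sizing rules above (at least three nodes for RF2, five for RF3, one for RF1, and at most 28 nodes per cluster) can be sketched as a validation helper; the names are illustrative, not part of NC2:

```python
MIN_NODES_BY_RF = {1: 1, 2: 3, 3: 5}   # minimum cluster size per redundancy factor
MAX_NODES = 28                          # NC2 on AWS cluster limit

def validate_cluster_size(nodes: int, rf: int) -> None:
    """Raise ValueError if the node count is invalid for the chosen RF."""
    if rf not in MIN_NODES_BY_RF:
        raise ValueError(f"unsupported redundancy factor: {rf}")
    if rf == 1 and nodes != 1:
        raise ValueError("RF1 is only for single-node clusters")
    if nodes < MIN_NODES_BY_RF[rf]:
        raise ValueError(f"RF{rf} needs at least {MIN_NODES_BY_RF[rf]} nodes")
    if nodes > MAX_NODES:
        raise ValueError(f"a cluster supports at most {MAX_NODES} nodes")
```

For example, a three-node RF2 cluster passes, while a four-node RF3 cluster does not.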
a. Under Networking, select a Virtual Private Cloud (VPC) in which you want to create the cluster
from one of the following options:
• Select a VPC from the Virtual Private Cloud (VPC) drop-down list.
• Select a subnet (from the VPC that you selected in the previous step) from the
Management Subnet drop-down list that you want to use as the management subnet for
your cluster.
Note: This subnet must be a dedicated private subnet for communication between Nutanix
CVMs and management services such as the hypervisor.
Note: Ensure that you do not use 192.168.5.0/24 CIDR for the VPC being used to deploy
the NC2 on AWS cluster. All Nutanix nodes use that CIDR for communication between the
CVM and the installed hypervisor.
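A quick way to verify that a candidate VPC CIDR does not collide with the reserved 192.168.5.0/24 range is Python's standard ipaddress module (the helper name is my own):

```python
import ipaddress

# CIDR reserved by Nutanix nodes for CVM <-> hypervisor communication.
RESERVED = ipaddress.ip_network("192.168.5.0/24")

def vpc_cidr_is_safe(cidr: str) -> bool:
    """True if the proposed VPC CIDR does not overlap the reserved range."""
    return not ipaddress.ip_network(cidr).overlaps(RESERVED)
```

For example, `vpc_cidr_is_safe("10.0.0.0/16")` returns True, while `vpc_cidr_is_safe("192.168.0.0/16")` returns False because that block contains 192.168.5.0/24.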
• Availability Zone. Select the AZ in which you want to create the cluster.
• Use an existing Key Pair. Select an existing SSH key from the drop-down list.
• Create a New Key Pair. Type a Key name and click Generate to generate a new SSH key.
You can use this SSH key to sign in to a node in the cluster, without a password.
c. Under Access Policy, specify the following options to control the management and UVM
AWS security groups that are deployed when you create the cluster:
Note: This Public option is only available when you choose to either import a VPC or create
a new VPC in the Network tab.
Allowing Internet access could have security ramifications. Use of a load balancer
is optional and is not a recommended configuration. For securing network traffic
when using a load balancer, you can consider using secure listeners, configuring
security groups, and authenticating users through an identity provider. For more
information, see AWS Documentation.
You can also use a Bastion server (jump box) to gain SSH access to the CVMs
and AHV hosts of Nutanix clusters running on AWS. See Logging into a Cluster by
Using SSH.
• Restricted: If any IP addresses require access to CVMs and AHV hosts, specify a list of
such source IP addresses and ranges. NC2 creates security group rules accordingly.
• Disabled: Disable access to management services in the security group attached to the
cluster nodes.
Note: If you intend to use the Cluster Protect feature, ensure that the Cluster Management
Services can be accessed from the VPC and the Prism Central subnet. Ports 30900 and 30990 are
opened while creating a new NC2 cluster and are required for communication between AOS and
Multicloud Snapshot Technology (MST) to back up the VM and volume groups data.
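To confirm that ports 30900 and 30990 are reachable from the Prism Central subnet, a simple TCP probe helps. This is a generic sketch; the hosts you probe depend on your deployment:

```python
import socket

# Ports the note above says must be open for AOS <-> MST communication.
CLUSTER_PROTECT_PORTS = (30900, 30990)

def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Run it from a VM in the Prism Central subnet against the cluster management address, for each port in CLUSTER_PROTECT_PORTS.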
• NCI (Nutanix Cloud Infrastructure): Select this license type and appropriate add-ons to
use NCI licensing.
Note: You must manually register the cluster to Prism Central and apply the NCI licenses in
Prism Central.
• AOS: Select this license type and appropriate add-ons to reserve and use AOS (legacy)
licenses. For more information on how to reserve AOS (legacy) licenses, see Reserving
License Capacity.
• EUC (End User Computing): Select this option if you want to use EUC licenses for a
specified number of users.
• VDI: Select this option if you want to use VDI licenses for a specified number of users. For
more information on how to reserve VDI licenses, see Reserving License Capacity.
• AOS Version. Select the AOS version that you want to use for the cluster.
Note: The cluster must be running the minimum versions of AOS 6.0.1.7 for NCI and EUC
licenses, and AOS 6.1.1 for the NUS license.
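Checking a dotted AOS version against the minimums above reduces to a numeric tuple comparison; the helper below is illustrative, not an NC2 API:

```python
def version_tuple(v: str) -> tuple:
    """'6.0.1.7' -> (6, 0, 1, 7); dotted numeric versions compare correctly as tuples."""
    return tuple(int(part) for part in v.split("."))

def meets_minimum(installed: str, minimum: str) -> bool:
    """True if the installed version satisfies the required minimum."""
    return version_tuple(installed) >= version_tuple(minimum)
```

For example, `meets_minimum("6.1.1", "6.0.1.7")` is True, while `meets_minimum("6.0.1", "6.0.1.7")` is False.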
• Software Tier. In the Software Tier drop-down list, select the license type based on your
cluster type and the license option you selected.
Note: If you have selected VDI and User-based licensing, then the Ultimate software edition
is automatically selected, as only the VDI Ultimate license tier is supported on NC2.
This option is used for metering and billing purposes. Usage is metered every hour and charged
based on your subscription plan. Any AOS (legacy) and VDI reserved licenses will be picked up
and applied to your NC2 cluster to cover its usage before billing overages to your subscription plan.
c. Under Add-on Products:
• If the NCI (Nutanix Cloud Infrastructure) or EUC (End User Computing) license option is
selected: you can optionally select Use NUS (Nutanix Unified Storage) on this cluster and
specify the storage capacity that you intend to use on this cluster.
Note: You need to manually apply the NCI and the NUS licenses to your cluster.
• If the AOS or VDI license option is selected, you can optionally select the following add-on
products:
• Advanced Replication
• Data-at-Rest Encryption
• Use Files on this cluster: Specify the capacity of files you intend to use in the Unified
Storage Capacity field.
Note: The Advanced Replication and Data-at-Rest Encryption add-ons are selected by default
for AOS and VDI Ultimate; you need to select these add-ons for AOS Pro manually.
• I want to protect the cluster: Select this option if you want to protect the cluster using the Cluster
Protect feature.
Note: You must register this cluster to a new or an existing Prism Central instance that runs in
the same availability zone. If you are going to use this cluster as a source or target for Disaster
Recovery, then you cannot also use the Cluster Protect feature to protect your cluster.
To protect the cluster using the Cluster Protect feature, you must perform the steps listed in Cluster
Protect Configuration.
• I will protect the cluster myself/ I do not need protection: Select this option if you do not want
to use the Cluster Protect feature to protect your cluster.
Note: You can select this option if you need to use this cluster as a source or target for a Disaster
Recovery setup. Nutanix recommends enabling the automatic backup of VM and Volume Groups
data.
Note: The Cluster Protect feature is available only with AOS Ultimate or NCI Ultimate license tier and
needs AOS 6.7 or higher and Prism Central 2023.3 or higher. The Cluster Protect feature is available
only for new cluster deployments. Any clusters created before AOS 6.7 cannot be protected using this
feature.
Note: The Nutanix cluster is deployed in AWS in approximately 45 to 60 minutes. If there are any issues
with provisioning the Nutanix cluster, see the Notification Center in the NC2 console.
11. After the cluster is created, click the name of the cluster to view the cluster details.
What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for
your user VMs. See User VM Network Management and Security on page 102 for more information.
• Gateway endpoints: These endpoints provide connectivity to Amazon S3 without using an internet
gateway or a NAT device for your VPC. A gateway endpoint targets specific IP routes in the AWS VPC
route table. Gateway endpoints do not use AWS PrivateLink, unlike interface endpoints. There is no
additional charge for using gateway endpoints.
For more information on how to create a new gateway endpoint, see Creating a Gateway Endpoint.
You can create a new or use an existing gateway endpoint. When using an existing gateway endpoint,
you only need to modify the route tables associated with the gateway endpoint. For more information,
see Associating Route Tables With the Gateway Endpoint.
• Interface endpoints: These endpoints are used for connectivity to services over AWS PrivateLink.
An interface endpoint is a collection of one or more elastic network interfaces (ENIs) with a private IP
address that serves as an entry point for traffic destined to a supported service. Interface endpoints
allow the use of security groups to restrict access to the endpoint.
For more information, see AWS Documentation.
Note: Ensure that you create your gateway endpoint in the same AWS Region as your S3 buckets. Also,
add the gateway endpoint in the routing table of the resources that need to access S3. The outbound rules
for the security group for instances that access Amazon S3 through the gateway endpoint must allow traffic
to Amazon S3.
Procedure
Note: Ensure that you do not select the Endpoints services option.
6. Under Services, search with the S3 keyword, and then select the service with the name:
com.amazonaws.<region>.s3 and type as Gateway.
7. Under VPCs, select the VPC where you want to create the endpoint.
Note: The VPC must be the same where your cluster is created. All NC2 clusters in that VPC will be
able to access the S3 endpoint. You must create a different endpoint for each VPC where an NC2
cluster is running.
8. Under Route tables, select the route tables corresponding to your NC2 cluster’s private subnet in the
VPC.
This must be the route table associated with the cluster management subnet.
Note: You must add all route tables that are associated with the management subnet of all your
clusters.
11. After successfully creating the endpoint, verify that the route table pointed to the S3 endpoint has the
gateway endpoint in its routes.
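The console steps above can also be scripted. Assuming you use boto3, the sketch below builds the keyword arguments for the EC2 client's `create_vpc_endpoint` call; the region, VPC, and route-table IDs are placeholders you must replace:

```python
def s3_gateway_endpoint_params(region: str, vpc_id: str, route_table_ids: list) -> dict:
    """Arguments for ec2_client.create_vpc_endpoint(**params); a sketch, not run here."""
    return {
        "VpcEndpointType": "Gateway",
        "ServiceName": f"com.amazonaws.{region}.s3",  # matches the service name in step 6
        "VpcId": vpc_id,
        "RouteTableIds": list(route_table_ids),       # management-subnet route tables
    }
```

In practice you would pass the result to `boto3.client("ec2", region_name=region).create_vpc_endpoint(**params)` with your real IDs, which requires AWS credentials.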
Procedure
3. Select the gateway endpoint that you want to use for AWS S3.
7. Under Route tables, select the route tables corresponding to your NC2 cluster’s private subnet in the
VPC.
This must be the route table associated with the cluster management subnet.
Note: You must add all route tables that are associated with the management subnet of all your clusters.
Note: While deploying Prism Central, you need to specify the CIDR of the subnet created for your NC2
cluster. You can find this CIDR from your AWS console listed under IP Address Management > Network
Prefix Length.
For more information about registering your cluster with Prism Central, see Registering Cluster with Prism
Central.
After you deploy Prism Central, perform the following additional networking and security configurations:
Procedure
1. Configure the name servers to host a network service for providing responses to queries against a
directory service, such as a DNS server. For more information, see Configuring Name Servers for Prism
Central.
Note: Ensure that the name server IP address is the same as the one you entered during the deployment of
Prism Central.
2. Configure the NTP servers to synchronize the system clock. For more information, see Configuring NTP
Servers for Prism Central.
You can use:
• 0.pool.ntp.org
• 1.pool.ntp.org
• 2.pool.ntp.org
• 3.pool.ntp.org
3. Add an authentication directory. For more information, see Adding An Authentication Directory (Prism
Central).
4. Configure role permissions. For more information, see Assigning Role Permissions.
5. Configure SSL certificate management. For more information, see Importing an SSL Certificate.
6. Deploy a load balancer to allow Internet access. For more information, see Deploying a Load Balancer
to Allow Internet Access.
8. Enable the inbound access to the Prism Central UI to configure the Site-to-Site VPN setup. For more
information, see Prism Central UI Access for Site-to-Site VPN Setup.
9. Register Prism Central with the Prism Element cluster. For more information, see Registering or
Unregistering Cluster with Prism Central.
What to do next
For more information about how to sign into the Prism Element web console, see Logging into a Cluster by
Using the Prism Element Web Console.
For more information about how to sign into the Prism Central web console, see Logging Into Prism
Central.
Procedure
• Username: admin
• Password: Nutanix/4u
The default password is Nutanix/4u. You are prompted to change the default password if you are
logging on for the first time.
For more information, see Logging Into the Web Console.
What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for
your user VMs. See User VM Network Management and Security on page 102 for more information.
Note: When you configure a Linux bastion host, ensure that you do the following:
• Open the EC2 console in the same region as the Nutanix cluster.
• When you are configuring an instance, ensure that you do the following:
• Under Network, change the default VPC to the same VPC being used by the Nutanix
cluster running on AWS.
• Under Subnet, select the subnet containing Nutanix Cluster xxxxxxxxx Public.
• Enable the Auto-assign Public IP option.
• You must restrict access to Management services (access to CVMs and AHV hosts) while
configuring the cluster. To do this, launch the NC2 console, click on the ellipsis for the cluster,
and then click Update Configuration. Select the Access Policy tab, and then select
Restricted under Management Services (Core Nutanix services running on this cluster).
Procedure
Note: You can either upload (secure copy (scp)) the key.pem file from your local machine to the host or
create a new .pem file on the host containing the contents of key.pem (for example, by using vim key.pem),
and then run the chmod 400 key.pem command.
Replace ip-address-of-the-cvm with the IP address of the CVM (determined in step 5).
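The key handling above can be sketched as follows; the key material and the CVM address are placeholders from the guide, and the actual ssh step requires network reachability to the cluster:

```shell
#!/bin/sh
# Work in a scratch directory; in practice key.pem is the key downloaded from NC2.
tmp=$(mktemp -d)
printf 'dummy-key-material\n' > "$tmp/key.pem"   # stand-in for the real private key
chmod 400 "$tmp/key.pem"                         # SSH refuses world-readable keys
ls -l "$tmp/key.pem" | cut -c1-10                # prints -r--------
# Then connect to the CVM (placeholder address, as in the guide):
# ssh -i key.pem nutanix@<ip-address-of-the-cvm>
```

The `chmod 400` step matters: ssh rejects a private key whose permissions allow access by other users.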
What to do next
After you create a cluster in AWS, set up the network and security infrastructure in AWS and the cluster for
your user VMs. See User VM Network Management and Security on page 102 for more information.
Note: You cannot switch back from NCI licensing to AOS licensing.
During the free trial period of NC2, you can only use the AOS licensing option. NC2 enables your cluster
with all AOS features during the free trial. When you switch to a paid NC2 subscription, you have the
choice to continue with the AOS licensing or switch to the NCI licensing.
Nutanix also provides flexible subscription options that help you select a suitable subscription type and
payment method for NC2.
You can use the legacy portfolio licenses and pay using a subscription plan, such as Pay As You Go
(PAYG) or Cloud Commit, for overages above the legacy license capacity used.
For more information on the pricing that is used to charge for overages above legacy AOS license capacity,
see NC2 pricing options.
For the new NCI licensing, NC2 does not charge for overages above the NCI license capacity used. For
more details on the new NCI licenses, see Nutanix Cloud Platform Software Options.
You can choose to be invoiced either directly by Nutanix or through your cloud marketplace account, if you
choose to use your cloud marketplace.
NC2 supports Advanced Replication and Security add-ons for NCI Pro and Nutanix Unified Storage (NUS)
Pro, and you have to manually apply these licenses to Prism Central managing your NC2 cluster. NC2
supports Advanced Replication, Data-at-Rest Encryption, and Files add-ons for AOS (legacy) Pro, and you
have to reserve capacity from these licenses, after which they are automatically picked up and applied to
your NC2 cluster.
The following table lists the combination of license types based on the software configuration and the
subscription plans available for these license types.
Note: Your NC2 cluster is enabled with legacy AOS licenses during the free trial. Once you get a paid
subscription for NC2, you can switch from AOS to NCI licenses. You must deploy Prism Central and
configure your NC2 cluster with that Prism Central in order to use NCI licenses.
Note: You can use the same Prism Central with both AOS and NCI-licensed clusters.
Applying cloud platform licenses, excluding NUS, requires that the cluster is running the minimum versions
of the following software:
• AOS 6.0.1.7
• Nutanix Cluster Check (NCC) 4.3.0
• Prism Central pc.2021.9
Applying NUS licenses requires that the cluster is running the minimum versions of the following software:
• AOS 6.1.1
• NCC 4.5.0
• pc.2022.4
Procedure
1. After the cluster is successfully deployed, register the cluster to a Prism Central instance.
Note: You can register this cluster to an existing Prism Central instance or deploy a new Prism Central
on this cluster.
For more information, see Registering Cluster with Prism Central and Installing a new Prism Central.
2. If you are using a free trial for NC2, you can only select AOS as the option during the free trial period.
If you are ready to switch your cluster type from AOS to NCI, subscribe to NC2 by following the
instructions listed in NC2 Subscription Workflow.
Note: You must perform this step for every NC2 cluster that uses the new portfolio licenses, for both
general purpose and VDI clusters.
Perform the following steps to change the license type from AOS to NCI:
4. If you already have the following licenses that you are ready to use, you can manually apply these
licenses by following the procedures described in Applying and Managing Cloud Platform Licenses.
Note: License reservation is required for AOS (legacy) licenses and the associated Advanced Replication
and Data-at-Rest Encryption add-ons. License reservation is not required for NCI licenses and the
associated Advanced Replication and Data-at-Rest Encryption add-ons, as you need to manually apply the
NCI licenses.
You do not need to delete the license reservation when terminating an NC2 cluster if you intend to
use the same license reservation quantity for a cluster you might create in the future.
Procedure
1. Sign in to the Nutanix Support portal at https://portal.nutanix.com and then click the Licenses link on
the portal home page. You are redirected to the Licensing portal.
2. Under Licenses on the left pane, click Active Licenses and then click the Available tab on the All
Active Licenses page.
3. Select the licenses that you want to reserve for NC2 and then select Update reservation for Nutanix
Cloud Clusters (NC2) from the Actions list.
Note: This option becomes available only after you select at least one license for reservation.
5. Enter the number of licenses that you want to reserve in the Reserved for AWS and Reserved for
Azure columns for the license. The available licenses appear in the Total Available to Reserve
column.
Procedure
1. Terminate your cluster from the NC2 console. For more information, see Terminating a Cluster.
2. Update the license reservation for the NC2 cluster under Reserved for AWS or Reserved for Azure
columns as 0 on the Licensing portal. For more information, see Modifying License Reservations.
3. Your license capacity is now available for use with any other Nutanix cluster, including on-prem clusters.
Managing Licenses
Follow these steps to manage licenses and change license type or add add-on products to your running
NC2 cluster.
Procedure
2. In the Clusters page, click the cluster name for which you want to update the add-on product selection.
4. Under Software Configuration, you can change your license tier from Pro to Ultimate or vice versa
from the Software Tier list.
5. Under Add-on Products, based on the cluster type (General Purpose or VDI cluster) and the license
tier, the available add-on products are displayed. Select or remove the add-on product based on your
requirements.
6. Click Save.
Note: Ensure that you have reserved enough license capacity for NC2 if you plan to use Nutanix
licenses for NC2 usage. For more information on the license reservation process, see Reserving
License Capacity.
• How would you like to pay for overage above any reserved license capacity?
You have a choice of paying directly to Nutanix or using your cloud marketplace account to pay for NC2
software usage.
Based on your answers to the above questions, you are presented with the appropriate subscription steps.
The following sections describe the most common subscription workflows.
• Nutanix Direct Subscription: Use this workflow when you want to use your Nutanix licenses and a
Nutanix Direct subscription to pay for NC2 on AWS and NC2 on Azure consumption. You pay directly to
Nutanix and have the ability to choose between two plans - Pay As You Go or Cloud Commit.
For more information, see Nutanix Direct Subscription.
• Cloud Marketplace Subscription: Use this workflow when you want to use Nutanix licenses and a Cloud
marketplace subscription.
For more information, see Subscribe to NC2 From AWS Marketplace.
Procedure
• On the My Nutanix dashboard, scroll down to Administration > Billing Center and click Launch.
In the Billing Center, under Nutanix Cloud Clusters, click Subscribe Now.
• On the NC2 console, click the Nutanix billing center link in the banner displayed on the top of the
NC2 console.
You are directed to the Nutanix Billing Center.
• Select Yes, use my reserved license capacity to cover NC2 usage if you want to use Nutanix
licenses for NC2. You must reserve the exact amount of Nutanix license capacity from the Nutanix
License portal.
If you select this option, the reserved license capacity is used to cover the NC2 usage first, and
once the licenses are consumed, any overage is charged to the subscription plan you select in the
next step.
You can click Reserve your license capacity from the License Portal now to reserve licenses
for the NC2 usage.
• Select the Don’t use licenses. Invoice all NC2 usage to my subscription plan option if you do not
want to use any licenses for NC2. All NC2 usage is charged to the subscription plan that you
select in the next step.
5. Next, the How would you like to pay for overage above any reserved license capacity? option is
presented.
• Pay directly to Nutanix: The NC2 software usage on all supported clouds (AWS and Azure) is
paid to a single subscription plan.
• Pay via Cloud Marketplace: The cloud marketplace subscription option is only available for NC2
on Azure.
Select Pay directly to Nutanix and then click Next.
7. On the next screen, the payment plan is presented to you based on the choices made in the previous
step.
• Pay As You Go (For NC2 on AWS and Azure): Provides a plan in which you are billed at the end
of each month for the NC2 usage for that month without any term commitments.
• Cloud Commit (For NC2 on AWS and Azure): Provides a payment plan in which you pay upfront
for a year and receive a discount over the PAYG rate. In the Upfront Commitment box, commit
8. On the Company Details page, type the details about your organization and then click Next.
Nutanix Cloud Services considers the address that you provide in the Address 1 and Address 2
fields as the Bill To Address and uses this location to determine your applicable taxes.
If the address where you consume the Nutanix services is different than your Bill To Address, under
the Sold to Address section, clear the Same information as provided above checkbox and then
provide the address of the location where you use the Cloud services. However, only the Bill To
Address is considered to determine your applicable taxes.
9. On the Payment Method page, select one of the following payment methods, and then click Next.
10. On the Review & Confirm page, review all the details, and click Edit next to a section if you want to
edit any of the details.
11. (Optional) If you have received a promotional code from Nutanix, type the code in the Promo code
field and click Apply.
What to do next
You can now begin using NC2.
You can do one of the following:
Note: Any overages above the license capacity purchased through AWS Marketplace will also be billed
through AWS Marketplace, and the same discounted rate used for the initial license purchase through AWS
Marketplace will be used to calculate the billable amount for overages. The overages will be billed and
invoiced monthly by AWS.
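As a rough sketch, the overage billing described in the note above can be computed as follows. The capacity, usage, and rate figures here are hypothetical placeholders, not Nutanix or AWS Marketplace pricing:

```python
# Illustrative overage calculation (all numbers are hypothetical).
reserved_core_hours = 10_000   # license capacity purchased through AWS Marketplace
consumed_core_hours = 12_500   # actual NC2 usage for the month
discounted_rate = 0.038        # USD per core-hour from the private offer

# Overage above the purchased capacity is billed at the same discounted
# rate used for the initial license purchase, invoiced monthly by AWS.
overage_hours = max(0, consumed_core_hours - reserved_core_hours)
overage_charge = overage_hours * discounted_rate

print(overage_hours, round(overage_charge, 2))
```

The key point the note makes is that the discounted private-offer rate, not a list rate, applies to the overage.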
You must manually apply NCI and EUC licenses to Prism Central to manage your NC2 clusters.
For more information, see Applying NCI, EUC, and NUS Licenses.
Perform the following steps to subscribe to NC2 from the AWS Marketplace:
Procedure
1. Contact your Nutanix Account Manager with your NC2 sizing requirements, such as the number of
licenses required and the term for usage.
Your Nutanix Account Manager works with a Nutanix reseller, if applicable, to create customized
pricing and convert that into a private offer in AWS Marketplace. Once the offer is ready for you to
accept through AWS Marketplace, you will receive an email from the Nutanix reseller with the private
offer details, including the pricing that is specific to you.
Note: You need to provide your AWS billing account details to the Nutanix Account Manager. You can
find your billing account ID in the AWS Management Console.
2. Sign in to the AWS Marketplace console and click the Private Offer URL in the email you receive from
the Nutanix reseller.
Alternatively, in the AWS Marketplace console, navigate to the Private offers page > Available
offers > select the Offer ID for the offer of interest, and click View offer.
You are redirected to the Nutanix Cloud Clusters (NC2) listing page, where you need to configure
your software contract.
4. Under How long do you want your contract to run?, review the tenure of your contract.
5. Under Dates, review the Service start date, Service end date, and Offer expiration date. You
must accept the offer before the offer expiration date.
Note: You can contact your Nutanix reseller when you need to renew the contract.
7. Under Contract Options, review the number of units you want to purchase for the required Nutanix
licenses.
8. Under Additional Usage Fees, review the pay-as-you-go monthly charges for additional usage.
You will be charged this rate for any NC2 usage on AWS above the license capacity you purchase.
12. After successful payment, click Set up your account to set up your billing subscription with NC2.
13. You are redirected to the Nutanix Billing Center to complete your NC2 Billing configuration.
Note: If you do not already have an existing My Nutanix account, you must sign up for a new My
Nutanix account and verify the email address used to sign up for My Nutanix. After verifying your email
address, you will be automatically redirected to My Nutanix Billing Center. For more information, see
Creating My Nutanix Account.
15. Select the correct workspace from the Workspace list on the My Nutanix dashboard.
The workspace should be the same workspace you used when creating NC2 clusters. For more
information on workspaces, see Workspace Management.
16. Click Add Addresses to add your billing address and the address where the NC2 subscription will be
used.
• Switch from the Pay directly to Nutanix - Pay As You Go payment plan to the Pay directly to Nutanix
- Cloud Commit payment plan.
Note: You cannot change the Cloud Commit plan to Pay As You Go plan.
Procedure
3. On the My Nutanix dashboard, go to Administration > Billing Center and click Launch.
6. In the Cancel Plan dialog, click Yes, Cancel to cancel the subscription plan or click Nevermind to
close the Cancel Plan dialog.
7. In the Share Your Feedback dialog, you can specify your reasons to cancel the plan, and click Send.
What to do next
Your plan is deactivated at the end of the current billing schedule. The Cancel Plan dialog displays the
date on which your plan is scheduled to be deactivated.
Note: You can revoke the cancellation of your plan at most two times before the plan is deactivated.
Note: Only the primary billing contact can modify any billing or subscription details.
• If you have applied the Nutanix software licenses, you can change the licenses allocated to NC2.
• View details about the unbilled amount for the current month.
• View details of usage, such as rate, quantity, and the amount charged for each entity (CPU hours,
public IP address hours, disk size, and memory hours) for each cluster.
For more information on how to manage billing, see Nutanix Cloud Services Administration Guide.
• Details about the rate, quantity, and amount charged per unit for a selected billing cycle. You can check
the details for the current and last two billing cycles.
• Details about the usage of clusters by units of measure for a selected billing cycle.
Perform the following procedure to display the billing and usage details of NC2:
1. Sign in to your My Nutanix account.
• Spend: Displays a graph detailing your estimated daily spending for a selected billing cycle. You
can check details for the current and last two billing cycles. You can apply filters to the graph for
individual units of measure. A summary table with detailed information about the current billing cycle
is also displayed.
• Usage: Displays an estimate of your total usage for the billing cycle that you select. You can filter the
usage by clusters and units of measure. Individual units of measure are a breakdown of total usage
on the latest day of the billing cycle that you select. You can apply filters to see more details, such
as usage information of each cluster and find out whether a usage is processed through licensing or
subscription.
Select the billing period on the top-right corner of the usage graph to see the total usage for the
selected billing cycle in the form of a graph.
Under Usage broken down by individual units of measure, click Clusters, and then select
a cluster ID and choose a unit of measure to see the total usage of each cluster for a selected
billing cycle in a graphical view. Hover over the bars in the graph to see the number of licenses and
subscriptions you used.
Click Units and select a unit of measure to see the total usage of all the clusters by that unit of
measure.
A breakdown of the total usage of the same billing cycle you selected is displayed in a table after the
graph. You can view the usage graph for three billing cycles.
• Name: Enter a unique name for your API key to help you identify the key.
• Scope: Select the Usage Analytics scope category under Billing from the Scope drop-down
list.
e. Click Create. The Created API dialog is displayed.
Note: You cannot recover the generated API key and key ID after you close this dialog.
For more details on API Key management, see the API Key Management section in the Licensing
Guide.
Note: This step uses Python to generate a JWT token. You can use other programming languages, such
as JavaScript and Go.
b. Replace the API Key and Key ID in the following Python script and then run it to generate a JWT
token. Also, you can specify expiry time in seconds for the JWT token to remain valid. In the
requesterip attribute, enter the requester IP.
from datetime import datetime
from datetime import timedelta
import base64
import hmac
import hashlib
import jwt

# Replace these placeholder values with your API key, key ID, and audience URL.
api_key = "<your-API-key>"
key_id = "<your-key-ID>"
aud_url = "<audience-URL>"

def generate_jwt():
    curr_time = datetime.utcnow()
    payload = {
        "aud": aud_url,
        "iat": curr_time,
        # Specify how long (in seconds) the JWT token remains valid.
        "exp": curr_time + timedelta(seconds=120),
        "iss": key_id,
        "metadata": {
            "reason": "fetch usages",
            "requesterip": "enter the requester IP",
            "date-time": curr_time.strftime("%m/%d/%Y, %H:%M:%S"),
            "user-agent": "datamart"
        }
    }
    # Derive the HS512 signing key from the API key and key ID.
    signature = base64.b64encode(hmac.new(bytes(api_key, 'UTF-8'),
                                          bytes(key_id, 'UTF-8'),
                                          digestmod=hashlib.sha512).digest())
    # Requires the PyJWT library (pip install PyJWT).
    token = jwt.encode(payload, signature, algorithm='HS512',
                       headers={"kid": key_id})
    print("Token (Validate): {}".format(token))

generate_jwt()
c. A JWT token is generated. Copy the JWT token to your system for further use. The JWT token can
be used as an Authorization header when validating the API call. The JWT token remains valid for
the duration that you have specified.
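A minimal, stdlib-only sketch of working with the generated token: it builds a sample token (payload only; the real token from the script above carries an HS512 signature), checks the exp claim before use, and attaches the token to an Authorization header. The Bearer scheme and the helper names are assumptions for illustration, not a documented Nutanix API contract:

```python
import base64
import json
import time

def b64url_encode(data: bytes) -> str:
    # JWT segments use URL-safe base64 without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(segment: str) -> bytes:
    # Restore the padding stripped from a JWT segment before decoding.
    return base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4))

# Build a sample token with a 120-second validity (hypothetical claims).
now = int(time.time())
claims = {"iss": "sample-key-id", "iat": now, "exp": now + 120}
sample_token = ".".join([
    b64url_encode(json.dumps({"alg": "HS512", "typ": "JWT"}).encode()),
    b64url_encode(json.dumps(claims).encode()),
    "signature-placeholder",
])

def token_is_expired(token: str) -> bool:
    # Decode the payload (second segment) and compare exp with current time.
    payload = json.loads(b64url_decode(token.split(".")[1]))
    return int(time.time()) >= payload["exp"]

# Attach the token as an Authorization header for the usage API call.
headers = {"Authorization": "Bearer " + sample_token}
print(token_is_expired(sample_token))
```

Checking exp locally before each call avoids sending a token the server will reject once the validity window you specified has elapsed.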
• You create UVM networks by specifying a CIDR value that matches the CIDR value of the AWS subnet.
• NC2 supports only AHV managed networks.
• UVMs use only the DHCP servers provided by the cluster.
• You do not need to specify the VLAN ID when you are creating a network.
• AWS Gateway is used as the default gateway for the UVM networks and cannot be changed.
Nutanix clusters consume the AWS subnets from Prism Element. You must add the AWS subnets you
created for UVMs as networks by using the Prism Element web console. Before you create networks for
UVMs in Prism Element, create the AWS subnets manually by using the AWS console, an AWS
CloudFormation template, or any other tool of your choice.
Nutanix recommends the following:
Note: In NC2 on AWS with AOS 6.6.x, while creating a subnet from the Settings > Network Configuration
> Subnets tab, the list of (AWS) Cloud Subnets does not appear. As a workaround, you can add the Cloud
Subnets using Network Prefix Length and Gateway IP Address based on Cloud Subnet CIDR.
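The workaround above derives the Network Prefix Length and Gateway IP Address from the cloud subnet CIDR. A small sketch of that derivation, assuming the standard AWS convention that the VPC router occupies the first usable address (network address + 1) of each subnet; the helper name is illustrative:

```python
import ipaddress

def subnet_prefix_and_gateway(cidr: str):
    """Derive the prefix length and gateway IP from a cloud subnet CIDR.

    AWS reserves the first usable address in each subnet (network + 1)
    for the VPC router, which serves as the subnet's default gateway.
    """
    net = ipaddress.ip_network(cidr)
    gateway = net.network_address + 1
    return net.prefixlen, str(gateway)

print(subnet_prefix_and_gateway("10.70.0.0/24"))
```

For a subnet CIDR of 10.70.0.0/24, you would enter 24 as the Network Prefix Length and 10.70.0.1 as the Gateway IP Address.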
Procedure
2. You can navigate to the Create Subnet dialog box in any of the following ways:
• Network Prefix Length: Associated VPC CIDR of the cloud subnet that you have selected.
• Gateway IP Address: Gateway IP address of the cloud subnet that you have selected.
Note: IP Address Management is enabled by default and indicates that the network is an AHV
managed network. AHV networking stack manages the IP addressing of the UVMs in the network.
• Network IP Prefix: The associated VPC CIDR of the cloud subnet that you have selected is
populated.
• Start Address: Enter the starting IP address of the range.
• End Address: Enter the ending IP address of the range.
• Click Submit to close the window and return to the Create Subnet dialog box.
f. Under DHCP Settings, provide the following details:
• DHCP Settings: Select this checkbox to define a domain. When this checkbox is selected, the
fields to specify DNS servers and domains are displayed. Clearing this checkbox hides those
fields.
• Domain Name Servers (comma separated): Enter a comma-delimited list of DNS servers. If
you leave this field blank, the cluster uses the IP address of the AWS VPC DNS server.
• Domain Search (comma separated): Enter a comma-delimited list of domains.
• Domain Name: Enter the domain name.
• TFTP Server Name: Enter the hostname or IP address of the TFTP server from which virtual
machines can download a boot file. It is required in a Pre-boot eXecution Environment (PXE).
• Boot File Name: Enter the name of the boot file to download from the TFTP server.
4. Click Save to configure the network connection and close the Create Subnet dialog box.
Procedure
2. Click the entities menu in the main menu, expand Network & Security, and then select Subnets. The
Subnets window appears.
Note: Ensure that you do not use the Create Subnet option displayed adjacent to the Network Config
option on the Subnets window.
4. On the Create Subnet dialog box, provide the required details in the indicated fields:
• IP Address Management: When you select the cloud subnet, the following details are populated
under IP Address Management:
• Network Prefix Length: Associated VPC CIDR of the cloud subnet that you have selected. This
maps to the CIDR block on the Cloud subnet.
• Gateway IP Address: Gateway IP address of the cloud subnet that you have selected.
Note:
IP Address Management is enabled by default and indicates that the network is an AHV
managed network. AHV networking stack manages the IP addressing of the UVMs in the
network.
• Network IP Prefix: The associated VPC CIDR of the cloud subnet that you have selected is
populated.
• Start Address: Enter the starting IP address of the range.
• End Address: Enter the ending IP address of the range.
• Click Submit to close the window and return to the Create Subnet dialog box.
• DHCP Settings: Select this checkbox to define a domain. When this checkbox is selected, the
fields to specify DNS servers and domains are displayed. Provide the following details:
• Domain Name Servers (comma separated): Enter a comma-delimited list of DNS servers. If
you leave this field blank, the cluster uses the IP address of the AWS VPC DNS server.
• Domain Search (comma separated): Enter a comma-delimited list of domains.
• Domain Name: Enter the domain name.
• TFTP Server Name: Enter the hostname or IP address of the TFTP server from which virtual
machines can download a boot file. It is required in a Pre-boot eXecution Environment (PXE).
• Boot File Name: Enter the name of the boot file to download from the TFTP server.
5. Click Save to configure the network connection and close the Create Subnet dialog box.
Procedure
• Click the gear icon in the main menu and select Network Configuration in the Settings page. The
Network Configuration window appears.
• Go to the VMs dashboard and click the Network Config button.
3. On the Network Configuration window, select the UVM network you want to update and click the
pencil icon on the right.
The Update Network dialog box appears, which contains the same fields as the Create Network
dialog box (see Creating a UVM Network using Prism Element on page 103).
5. Click Save to update the network configuration and return to the Network Configuration window.
6. To delete a UVM network, in the Network Configuration window, select the UVM network you want to
delete and click the X icon (on the right).
A window prompt appears to verify the action; click OK. The network is removed from the list.
Note: This operation does not delete the AWS subnet associated with the UVM network.
Procedure
2. Click the entities menu in the main menu, expand Network & Security, and then select Subnets. The
Subnets window appears.
3. On the Subnets window, click Network Config. The Network Configuration dialog box appears. On
the Network Configuration window, select the UVM network you want to update and click the pencil
icon on the right.
The Update Subnet dialog box appears, which contains the same fields as the Create Subnet dialog
box. See Creating a UVM Network using Prism Central on page 105.
6. To delete a UVM network, in the Network Configuration window, select the UVM network you want to
delete and click the X icon on the right.
A window prompt appears to verify the action; click OK. The network is removed from the list.
Note: This operation does not delete the AWS subnet associated with the UVM network.
See the Command Reference guide for detailed information about how to block an IP address on a
managed network.
The cluster does not use the IP addresses blocked by using AHV IPAM for any UVM vNIC assignments.
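A simplified model of the behavior described above: blocked addresses are treated like already-assigned ones when AHV IPAM picks an address for a UVM vNIC. The function and sample addresses are illustrative, not Nutanix internals:

```python
import ipaddress

def next_free_ip(pool_start, pool_end, assigned, blocked):
    """Pick the next unassigned, unblocked address from a managed-network pool.

    Addresses blocked via AHV IPAM are skipped for UVM vNIC assignments,
    exactly like addresses that are already in use.
    """
    start = ipaddress.ip_address(pool_start)
    end = ipaddress.ip_address(pool_end)
    unusable = ({ipaddress.ip_address(a) for a in assigned}
                | {ipaddress.ip_address(b) for b in blocked})
    addr = start
    while addr <= end:
        if addr not in unusable:
            return str(addr)
        addr += 1
    return None  # pool exhausted

print(next_free_ip("10.70.0.10", "10.70.0.20",
                   assigned=["10.70.0.10"], blocked=["10.70.0.11"]))
```

Here 10.70.0.10 is in use and 10.70.0.11 is blocked, so the next vNIC would receive 10.70.0.12.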
Note: ENIs can have up to 49 secondary IP addresses, and NC2 shares ENIs for vNIC IP
addresses until the ENI IP address capacity is reached.
Bare-metal instances support up to 15 ENIs. One ENI is dedicated to AHV and CVM connectivity, and the
remaining 14 ENIs are dynamically created as UVMs are powered on or migrated to the AHV node. Note
that an ENI belongs to a single AWS subnet, so UVMs from more than 14 subnets on a given AHV
node are not supported.
To learn more about the number of AWS ENIs on bare metal instances, see AWS Documentation.
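The ENI limits above imply a per-node upper bound on UVM subnets and vNIC IP addresses, which can be sketched as follows (illustrative arithmetic; actual limits depend on the instance type and your AWS account settings):

```python
# Per-node UVM network capacity on an AWS bare-metal host (illustrative).
TOTAL_ENIS = 15              # bare-metal instances support up to 15 ENIs
MGMT_ENIS = 1                # one ENI is dedicated to AHV/CVM connectivity
SECONDARY_IPS_PER_ENI = 49   # secondary IPs available per ENI for UVM vNICs

uvm_enis = TOTAL_ENIS - MGMT_ENIS        # ENIs usable for UVM traffic
max_uvm_subnets_per_node = uvm_enis      # each ENI belongs to a single subnet
max_uvm_ips_per_node = uvm_enis * SECONDARY_IPS_PER_ENI

print(max_uvm_subnets_per_node, max_uvm_ips_per_node)
```

This is why UVMs from more than 14 subnets cannot coexist on one AHV node: each subnet consumes a dedicated ENI.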
Procedure
3. In the Create VM dialog box, scroll down to Network Adaptors (NIC) and click Add New NIC.
4. In the Network Name drop-down list, select the UVM network to which you want to add the vNIC.
5. Select (click the radio button for) Connected or Disconnected to connect or disconnect the vNIC to
the network.
6. The Network Address / Prefix is a read-only field that displays the IP address and prefix of the
network.
7. In the IP address field, enter an IP address for the NIC if you want to manually assign an IP address to
the vNIC.
This is an optional field. Clusters in AWS support only managed networks. Therefore, an IP address is
automatically assigned to the vNIC if you leave this field blank.
Note: See the AWS documentation for instructions about how to perform these tasks.
Procedure
2. Create a NAT gateway, associate the gateway with the public subnet, and assign a public Elastic IP
address to the NAT gateway.
3. Create a route table and add a route to that route table with the target as the NAT gateway (created in
step 2).
4. Associate the route table you created in step 3 with the private subnet you created for UVMs.
6. Create a UVM network as described in Creating a UVM Network using Prism Element on page 103.
7. Go to the UVM in the Prism Element web console and add a vNIC to the UVM by using the AWS private
subnet as described in Adding a Virtual Network Interface (vNIC) to a User VM on page 110.
Your UVM can now access the internet.
Note: Additional AWS charges might apply for the use of a network load balancer. Check with your AWS
representative before you create a network load balancer.
Procedure
Note: Make sure the port you want to access is open in an inbound policy of the security group
associated with bare-metal instances of the cluster.
Note: Additional AWS charges might apply if you use the network load balancer. Check with your AWS
representative before you create a network load balancer.
Perform the following procedure to set up the network load balancer in AWS.
Procedure
Note: If you choose a private subnet in the VPC, the Prism Element or Prism Central cannot be
accessed from the Internet.
Note: Make sure the port you want to access is open in an inbound policy of the security group
associated with bare-metal instances of the cluster.
The IP address you choose for the target group must be one of the CVM IP addresses
that you can see on the NC2 portal.
Note: Nutanix recommends you manually blacklist the virtual IP address configured on Prism
Central to avoid IP address conflicts.
What to do next
Note down the DNS name of the load balancer. To find the DNS name, open the load balancer on your
AWS console and navigate to Description > Basic Configuration. To get the IP address of the load
balancer, navigate to Network & Security > Network Interfaces, search for the name of the load
balancer, and copy the Primary Private IPv4 address. You need the load balancer IP address when
modifying the inbound rules under the UVM security group.
Procedure
2. Filter and select the cluster node on which the Prism Central is deployed, and then click the Security
tab.
4. For the selected UVM security group, in the Inbound rules tab, click Add rule, and then enter the TCP
port as 9440 and the custom source IP as the load balancer IP.
Note: These default security groups are created for each cluster. Amending security group rules in one
cluster does not affect the security group rules in another cluster. When you amend inbound and outbound
rules within the default UVM security group, the policies are applied to all UVMs that are part of the cluster.
You can also create custom security groups to more granularly control traffic to your NC2 environment. You
can:
• create a security group that applies to the entire VPC, if you want the same security group rules applied
to all clusters in that VPC.
• create a security group that applies to a specific cluster if you want certain security group rules applied
only to that particular cluster.
• create a security group that applies to a subset of UVMs in a specific cluster, if you want certain security
group rules to apply only to those subsets of UVMs.
You must configure the Prism Central VM security group and all the UVM security groups in a way that allows
communication between Prism Central VM and UVMs. In a single cluster deployment, the Prism Central
VM and UVM communication is open by default. However, if your Prism Central is hosted on a different
NC2 cluster, then you must allow communication between the Prism Central VM on the cluster hosting
Prism Central and the management subnets of the remaining NC2 clusters.
You do not need to configure security groups for communication between the CVM of the cluster hosting
Prism Central and Prism Central VM.
You cannot deploy Prism Central in the Management subnet. You must deploy Prism Central in a separate
subnet.
Suppose your Prism Central is hosted on a different NC2 cluster (say, NC2-Cluster2). In that case, you
must modify the security groups associated with the management subnet on NC2-Cluster1 to include
inbound and outbound security group rules for communication between the Prism Central subnet on NC2-
Cluster2 and Management subnet on NC2-Cluster1. This might extend to management subnets across
multiple clusters managed by the same Prism Central.
Note: Ensure that all AWS subnets used for NC2, except the Management subnet, use the same route
table. For more information on AWS route tables, see AWS documentation.
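The route-table requirement in the note can be checked mechanically. A sketch using a hypothetical subnet-to-route-table mapping rather than live AWS data; the function and IDs are illustrative:

```python
def route_table_consistent(subnet_route_tables, management_subnet):
    """Check that all NC2 subnets except the Management subnet share one route table.

    subnet_route_tables maps subnet name -> route table ID (hypothetical data,
    e.g. as collected from the AWS console or an inventory export).
    """
    tables = {rt for name, rt in subnet_route_tables.items()
              if name != management_subnet}
    return len(tables) <= 1

subnets = {
    "management": "rtb-mgmt",      # excluded from the check
    "uvm-a": "rtb-uvm",
    "uvm-b": "rtb-uvm",
    "prism-central": "rtb-uvm",
}
print(route_table_consistent(subnets, "management"))
```

If the function returns False, at least one non-management subnet uses a different route table and should be re-associated before cluster deployment.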
For more information on the ports and endpoints the NC2 cluster needs, see Ports and Endpoint
Requirements.
For more details on the default Internal management, User management, and UVM security groups, see
Default Security Groups. For more information on creating custom security groups, see Custom Security
Groups.
Cloud Clusters (NC2) | Network Security using AWS Security Groups | 116
Perform the following steps to control inbound and outbound traffic:
1. Determine if you want to use the default UVM security group to control inbound and outbound traffic
for all UVMs in the cluster or if you want more granular control over UVM security rules with different
security groups for different UVMs.
2. Edit the default UVM security group to add inbound and outbound rules if you want those rules to apply
to all UVMs on your cluster.
3. You may also create additional custom security groups for more granular control of traffic flow in your
NC2 environment:
1. Create a security group in AWS.
2. Add appropriate tags to the security group. For more details on the tags needed with custom security
groups, see Custom Security Groups.
3. Add rules to enable or restrict inbound and outbound traffic.
The Internal management, User management, and UVM security groups have the recommended
default rules set up by NC2 at cluster creation. All management ENIs created, even after initial cluster
deployment, have the default Internal management (internal_management) and User management
(user_management) security groups attached.
Note: Nutanix recommends that you do not modify Internal management and User management security
groups or change any security group attachments.
All elastic network interfaces (ENIs) for CVMs and the EC2 bare-metal hosts are present on the private
Management subnet.
All UVMs on a cluster are associated with the default UVM security group unless you create additional
UVM security groups. The default UVM security group controls all traffic that enters the ENIs belonging to
the UVM subnets. Additional custom security groups can be created to control traffic at the VPC, individual
cluster, or UVM subnet levels.
To allow communication from external sources to the UVMs, you must modify the default UVM security
group to add new inbound rules for the source IP addresses and the load balancer IP addresses.
Note: Each cluster in the same VPC has its own default security group. When you amend inbound and outbound
rules within the default UVM security group, the policies are applied to all UVMs that are part of the cluster.
Note: NC2 supports creating custom security groups when the cluster runs AOS 6.7 or higher.
A custom security group at the VPC level is attached to all ENIs in the VPC. A custom security group at
the cluster level is attached to all ENIs of the cluster. Custom security groups at the UVM subnet level are
attached to all ENIs of all specified UVM subnets.
You can use custom security groups to apply security group rules across all clusters in a VPC or a specific
cluster or a subset of UVM Subnets in a specific cluster. A custom security group per UVM subnet can
be beneficial when controlling traffic for specific UVMs or restricting traffic between UVMs from different
subnets. To support custom security groups at the UVM subnet level, NC2 assigns tags with key-value
pairs that can be used to identify the custom security groups. For more information about default security
groups for internal management and UVMs, see Default Security Groups.
Note: To be able to increase the custom security groups quota beyond the default limit, you must add the
GetServiceQuota permission to the Nutanix-Clusters-High-Nc2-Cloud-Stack-Prod IAM role. To change
the permissions and policies attached to the IAM role, sign into the AWS Management Console, open the
IAM console at https://console.aws.amazon.com/iam/, and choose Roles > Permissions. For more
information, see AWS documentation.
Figure 69: GetServiceQuota permission
The default AWS service quota allows you to create a maximum of five custom security groups per ENI.
Out of the five security groups per ENI quota, one is used for the default UVM security group. You can add
only one custom security group at the VPC level and one custom security group at the cluster level. You
can add the remaining custom security groups at the UVM subnet level.
For example, if you create one custom security group at the VPC level and one at the cluster level, you can
create two security groups at the UVM subnet level, assuming you have the default AWS Service quota
limit of 5 security groups per ENI. Similarly, if you create one security group at the cluster level and no
security group at the VPC level, you can create three security groups at the UVM subnet level.
Note: If you need more security groups, you can contact AWS support to increase the number of security
groups per ENI in your VPC.
The following table lists the AWS tags for custom security groups and the level at which these security
groups can be applied. These three tags have a hierarchical order that defines the order in which the
security groups with these tags are honored. A higher hierarchical tag is a prerequisite for a lower
hierarchical tag, and therefore the higher hierarchical tag must be present in any security group that
carries the lower hierarchical tag. For example, if you use the networks tag (the lowest hierarchical tag)
for a security group, both the cluster-uuid (middle hierarchical) tag and the external (highest hierarchical)
tag must also be present in that security group. Similarly, if you add the cluster-uuid tag, the external tag
must be present in that security group.
For example, if you want to create a security group to apply rules to all clusters in a certain VPC, you must
attach the following tag to the security group. The tag value can be left blank:
Table 13: Tag example for VPC-level security group
The following figure shows an example of tags applied for the custom security group at the VPC level.
If you want to create a security group to apply rules to a cluster with UUID 1234, then you must apply both
of these tags to the security group:
The following figure shows an example of tags applied for the custom security group at the cluster level.
Figure 71: Example for cluster-level security group
If you want to create a security group to apply rules to a UVM subnet 10.70.0.0/24 in a cluster with UUID
1234, then you must apply all three of these tags to the security group:
The following figure shows an example of tags applied for the custom security group at the subnet level.
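The tag hierarchy can be validated with a short helper. This is a sketch: the simplified tag keys (external, cluster-uuid, networks) follow the prose above and may differ from the exact AWS tag key names used in your environment:

```python
def validate_sg_tags(tag_keys):
    """Check the hierarchical tag prerequisites for NC2 custom security groups.

    Hierarchy (highest to lowest): external -> cluster-uuid -> networks.
    A lower hierarchical tag requires every higher tag to also be present.
    """
    keys = set(tag_keys)
    if "networks" in keys and not {"cluster-uuid", "external"} <= keys:
        return False  # subnet-level tag without its prerequisites
    if "cluster-uuid" in keys and "external" not in keys:
        return False  # cluster-level tag without the VPC-level tag
    return True

print(validate_sg_tags(["external"]))                      # VPC-level group
print(validate_sg_tags(["external", "cluster-uuid"]))      # cluster-level group
print(validate_sg_tags(["networks"]))                      # missing prerequisites
```

A VPC-level group needs only external; a cluster-level group needs external and cluster-uuid; a subnet-level group needs all three tags.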
Ports and Endpoint Requirements
This section lists the ports and endpoint requirements for the following:
• Outbound communication
• Inbound Communication
• Communication to UVMs
For more information on the general firewall support requirements, see the Port and Protocols guide.
Note: Many of the destinations listed here use DNS failover and load balancing. For this reason, the IP
address returned when resolving a specific domain may change rapidly. Nutanix cannot provide specific IP
addresses in place of domain names.
Table 17: Cluster Outbound to EC2
Figure 73: Inbound Rules in User Management Security Group
Description: Prism Central to Prism Element communication
Protocol: TCP
Port Number: 9300 and 9301
Source (User Management Security Group): default: allow

Note: You must manually open these ports in the default UVM security group.
CLUSTER MANAGEMENT
Modify, update, manually replace, display AWS events for, hibernate, resume, and delete NC2 clusters
running on AWS by using the NC2 console.
• i3.metal, i3en.metal, i4i.metal: Any combination of these instance types can be mixed, subject to
the bare-metal availability in the region where the cluster is being deployed.
• z1d.metal, m5d.metal, m6id.metal: Any combination of these instance types can be mixed, subject
to the bare-metal availability in the region where the cluster is being deployed.
For more details, see Creating a Heterogeneous Cluster.
Note: The tasks to add or remove nodes are executed sequentially while updating the capacity of a cluster.
Note: You must update the cluster capacity by using the NC2 console only. Support to update the cluster
capacity by using the Prism Element web console is not available.
When expanding an NCI cluster beyond what the NCI license covers, you need to purchase
and manually apply additional license capacity. Contact your Nutanix account representative to
purchase additional license capacity.
Procedure
2. In the Clusters page, click the name of the cluster for which you want to update the capacity.
• Host type. The instance type used during initial cluster creation is displayed.
• Number of Hosts. Click + or - depending on whether you want to add or remove nodes from the
cluster.
Note: A maximum of 28 nodes is supported in a cluster. NC2 supports 28-node cluster deployment
in AWS regions that have seven placement groups. Also, a cluster must have at least three nodes
for RF2 and five nodes for RF3.
Nutanix recommends that the number of hosts match the RF number, or a multiple of the RF
number, selected for the base cluster.
• Add Host Type: Depending on the instance type used for the cluster, the other compatible instance
types are displayed. For example, if you used i3.metal nodes for the cluster, then the i3en.metal
and i4i.metal instance types are displayed.
Note: You can create a heterogeneous cluster using a combination of i3.metal, i3en.metal, and
i4i.metal instance types or z1d.metal, m5d.metal, and m6id.metal instance types.
The Add Host Type option is disabled when no compatible node types are available in the region
where the cluster is deployed.
Note: UVMs that have been created and powered ON in the original cluster running a specific node
or a combination of compatible nodes, as listed below, cannot be live migrated across different node
types when other nodes are added to the cluster. After successful cluster expansion, all UVMs must
be powered OFF and powered ON to enable live migration.
• If z1d.metal is present in the heterogeneous cluster either as the initial node type of the
cluster or as the new node type added to an existing cluster.
• If i4i.metal is the initial node type of the cluster and any other compatible node is added.
• If m6id.metal is the initial node type of the cluster and any other compatible node is added.
• If i3en.metal is the initial node type of the cluster and the i3.metal node is added.
• RF 1. Data is not replicated across the cluster. The minimum cluster size is 1.
• RF 2. Two copies of data are replicated across the cluster. The minimum cluster size is 3.
• RF 3. Three copies of data are replicated across the cluster. The minimum cluster size is 5.
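The replication factor rules above can be sketched as a small check. This is an illustrative helper only, not part of any Nutanix tooling; the function name is hypothetical:

```shell
# Illustrative helper (not a Nutanix tool): print the minimum cluster
# size for a given replication factor, per the rules above.
min_cluster_size() {
  case "$1" in
    1) echo 1 ;;   # RF1: data is not replicated
    2) echo 3 ;;   # RF2: two copies of data, minimum 3 nodes
    3) echo 5 ;;   # RF3: three copies of data, minimum 5 nodes
    *) echo "unsupported RF: $1" >&2; return 1 ;;
  esac
}

min_cluster_size 2   # prints 3
```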
6. Under Service Quotas, the service quotas for AWS resources under your AWS quota are displayed.
Click Check quotas to verify the cluster creation or expansion limits.
7. Click Save. The Increase capacity? or Reduce capacity? dialog appears based on your choice to
expand or shrink the cluster capacity in the previous steps.
8. Click Yes, Increase Capacity or Yes, Reduce Capacity to confirm your action.
Note: The cluster expansion to the target capacity might fail if not enough AWS nodes are available in the
current region. The NC2 console automatically retries provisioning the nodes. If the provisioning error
persists, check with your AWS account representative to ensure enough nodes are available from AWS
in your target AWS region and Availability Zone.
Ensure that all VMs on the nodes you want to remove are turned off before performing the
node removal task.
You can cancel any pending operations to expand the cluster capacity and try to expand the
cluster capacity with a different instance type. See Creating a Heterogeneous Cluster for more
details.
What to do next
For more information when you see an alert in the Alerts dashboard of the Prism Element web console or
if the Data Resiliency Status dashboard displays a Critical status, see Maintaining Availability: Node and
Rack Failure.
Note: If a host becomes unhealthy and you add another host to the cluster to evacuate data or VMs, AWS
charges you additionally for the new host.
Procedure
3. In the Hosts page, click the ellipsis of the corresponding host you want to replace, and click Replace
Host.
4. In the Replace Host dialog box, specify why you want to replace the host and click Confirm.
What to do next
For more information when you see an alert in the Alerts dashboard of the Prism Element web console or
if the Data Resiliency Status dashboard displays a Critical status, see Maintaining Availability: Node and
Rack Failure.
• NC2 on AWS supports a combination of i3.metal, i3en.metal, and i4i.metal instance types or
z1d.metal, m5d.metal, and m6id.metal instance types. The AWS region must have these instance
types supported by NC2 on AWS. For more information, see Supported Regions and Bare-metal
Instances.
Note: You can only create homogeneous clusters with g4dn.metal; it cannot be used to create a
heterogeneous cluster.
• Nutanix recommends that the minimum number of additional nodes be equal to or greater
than your cluster's redundancy factor (RF), and that the cluster be expanded in multiples of the RF. A
warning is displayed if the number of nodes is not evenly divisible by the RF number.
• UVMs that have been created and powered ON in the original cluster running a specific node or a
combination of compatible nodes, as listed below, cannot be live migrated across different node types
when other nodes are added to the cluster. After successful cluster expansion, all UVMs must be
powered OFF and powered ON to enable live migration.
• If z1d.metal is present in the heterogeneous cluster either as the initial node type of the cluster or
as the new node type added to an existing cluster.
• If i4i.metal is the initial node type of the cluster and any other compatible node is added.
• If m6id.metal is the initial node type of the cluster and any other compatible node is added.
• If i3en.metal is the initial node type of the cluster and the i3.metal node is added.
• You can expand or shrink the cluster with any number of i3.metal, i3en.metal, and i4i.metal
instance types or z1d.metal, m5d.metal, and m6id.metal instance types as long as the cluster size
remains within the 28-node maximum.
Note: You must update the cluster capacity using the NC2 console. You cannot update the cluster capacity
using the Prism Element web console.
For more information on how to add two different node types when expanding a cluster, see Updating the
Cluster Capacity.
The default encryption, Amazon S3 managed keys (SSE-S3), is enabled for the S3 bucket.
You can use a gateway endpoint for connectivity to Amazon S3 without using an internet gateway or a NAT
device for your VPC. For more information, see AWS VPC Endpoints for S3.
The hibernate and resume feature is generally available with AOS 6.5.1. All previously hibernated
clusters running AOS 6.0.1 or earlier versions must be resumed once and then upgraded to AOS 6.5.1 or
a later version before they can be hibernated again. To use the GA version of this feature, upgrade to
AOS 6.5.1 or later.
After you hibernate your cluster, you will not be billed for any Nutanix software usage or the AWS bare-
metal instance for the duration the cluster is in the hibernated state. However, you may be charged
by AWS for the data stored in Amazon S3 buckets for the duration the cluster is hibernated. For more
information about the Amazon S3 billing, see the AWS documentation.
NC2 does not consume any of your reserved license capacities while a cloud cluster is in the hibernated
state. Once a cloud cluster is resumed, an appropriate license will be automatically applied to the cluster
from your reserved license pool, provided that enough reserved capacity is available to cover your cluster
capacity. To learn more about license reservations for cloud clusters, visit Reserving License Capacity on
page 74.
You can hibernate and resume single-node clusters and clusters with three or more nodes.
Note: You cannot hibernate the clusters that are protected by the Cluster Protect feature. You must stop
protecting the cluster before triggering hibernation.
You cannot hibernate a cluster if any of the following conditions are met:
For more architectural details on the hibernate/resume operation, visit the Tech Note for NC2 on AWS.
Note: Encryption is enabled by default on all S3 buckets used for hibernation.
Procedure
2. Click on the cluster that you want to hibernate. The cluster summary page will open.
4. In the Hibernate cluster "Cluster Name" dialog box, review the hibernation guidelines and limitations,
and then type the name of the cluster in the text box.
Note: Your data is retained in S3 buckets for six days after a successful resume operation.
When a hibernated cluster is resumed, it returns to the same licensing state it had before entering
hibernation. The IP addresses of hosts and CVMs remain the same as pre-hibernate.
Procedure
2. Select the hibernated cluster that you want to resume, and click Resume Cluster.
• Do not attempt failover, failback, or VM restore operations, and do not create new DR configurations,
during hibernate or resume. Any such running operations might fail if you start hibernating a cluster.
• Disable SyncRep schedules from Prism Central for a cluster that is used as a source or target for
SyncRep before hibernating that cluster. Failure to do so might result in data loss.
• Ensure that no ongoing synchronous or asynchronous replications are happening when you initiate the
cluster hibernation.
• Disable existing near-sync/minutely snapshots and do not configure new minutely snapshots during
the hibernate or resume operation. You may have to wait until the data of the minutely snapshots
gets garbage collected before trying to hibernate again. The waiting period could be approximately 70
minutes.
• Remove remote schedules of protection policies and suspend remote schedules of protection domains
targeting a cluster until the cluster is hibernated.
Terminating a Cluster
You can terminate an NC2 cluster if you do not want to use the cluster anymore.
Note: Terminate clusters only from the NC2 console, not from your public cloud console. If you try to
terminate the cluster or some nodes in the cluster from your cloud console, NC2 will continue to attempt
to re-provision the nodes in the cluster.
You do not need to delete the license reservation when terminating an NC2 cluster if you intend to
use the same license reservation quantity for a cluster you might create in the future.
Note: Ensure that the cluster on which Prism Central is deployed is not deleted if Prism Central has multiple
Prism Elements registered with it.
Procedure
2. Go to the Clusters page, click the ellipsis in the row of the cluster you want to terminate, and click
Terminate.
3. In the Terminate tab, select the confirmation message to terminate the cluster.
Note: Multicast traffic is disabled by default in NC2. You can enable multicast traffic for each cluster so that
clusters running in AWS do not drop the multicast traffic egressing from AHV.
UVMs use Internet Group Management Protocol (IGMP) for subscribing to multicast groups.
When a subnet is added to the AWS Transit Gateway Multicast domain, AWS snoops all the
IGMP traffic on the ENI within the added subnets and maintains a multicast state to route the
multicast traffic to intended users. When multicast traffic is enabled for UVMs, clusters running
in AWS do not drop the multicast traffic egressing from AHV. A set of hosts that send and
receive the same multicast traffic is called a multicast group. The multicast traffic is routed to the
subscribed UVMs for a given multicast group based on the multicast membership table.
For more information on multicast concepts, see Multicast on transit gateways - Amazon VPC, and on how
to manage multicast domains and groups, see Managing multicast domains - Amazon VPC and Managing
multicast groups - Amazon VPC.
For multicast traffic to work in NC2, IGMP snooping must be enabled on AHV so that AHV can send
multicast traffic to only subscribed UVMs. If IGMP snooping is disabled, AHV will send multicast traffic to
all UVMs, which might be undesirable. This unwanted traffic results in consuming more computing power,
slowing down normal functions, and making the network vulnerable to security risks. With IGMP snooping
enabled, networks use less bandwidth and operate faster.
Note: A default virtual switch is created automatically when multicast is enabled. You can enable or disable
IGMP snooping only for the UVMs attached to the default virtual switch. You cannot enable or disable IGMP
snooping at the subnet level. All UVMs associated with the default virtual switch will have IGMP snooping
enabled or disabled. Multicast traffic is supported only for UVM subnets and not for CVM (management
cluster) subnets. For instructions, see Enabling or Disabling IGMP Snooping.
When a UVM with multicast traffic enabled is migrated to another NC2 node in the same cluster, multicast
traffic can be forwarded to that UVM even after migration.
The following figure shows a typical topology where both the multicast sender and receiver are in the same
VPC. Various scenarios with different multicast senders and receivers are described below.
Figure 82: Multicast traffic with the multicast sender and receiver in the same VPC
In this example, the AWS transit gateway is configured on AWS Subnet X. The UVMs in blue are in
Subnet X, and the UVMs in green are in Subnet Y. An EC2 instance can be any AWS-native (non bare-metal) instance.
Table 20: Multicast traffic routing for multicast senders and receivers
Configured multicast sender: EC2-native
Configured multicast receivers: UVM1, UVM2, UVM4
IGMP snooping status: Enabled
Multicast traffic status: Traffic from the sender (EC2-native) is received by the configured receivers UVM1, UVM2, and UVM4.

Note: When IGMP snooping is enabled, traffic from the multicast sender is received only by the multicast receivers.

Configured multicast sender: EC2-native
Configured multicast receivers: UVM1, UVM2, UVM4
IGMP snooping status: Disabled
Multicast traffic status: Traffic from EC2-native is received by:
• The configured receiver EC2-native instance on Subnet Y.
• UVM9, because it shares the subnet with the sender UVM8 on NC2-Host3.
The following figure shows an example topology where both the multicast sender and receiver are in
different VPCs.
The transit gateway is configured on Subnet X in VPC 1. The transit gateway allows connecting different
VPCs (for example, Subnet X in VPC 1 to Subnet Y in VPC 2). The following table shows how multicast
traffic will be routed for certain senders and receivers based on the IGMP snooping status.
Table 21: Multicast traffic routing for multicast senders and receivers
Configured multicast sender: EC2-native2 / UVM3
Configured multicast receiver(s): UVM1, EC2-native1
AOS IGMP snooping status: Enabled
Multicast traffic status: Traffic from the sender is received by the configured receivers UVM1 and EC2-native1.

Note: When IGMP snooping is enabled, traffic from the multicast sender is received only by the multicast receivers.
1. Run the following command on the CVM to enable IGMP snooping using aCLI.
net.update_virtual_switch virtual-switch-name enable_igmp_snooping=true
enable_igmp_querier=[true | false] igmp_query_vlan_list=VLAN IDs
igmp_snooping_timeout=timeout
The default timeout is 300 seconds. The AWS Transit Gateway acts as a multicast querier, and you
can optionally add more multicast queriers. Set the enable_igmp_querier variable to true or false
to enable or disable the AOS IGMP querier.
If you want to enable IGMP queries to only specific subnets, then you must specify the list of VLANs
for igmp_query_vlan_list. You can get the subnet to VLAN mapping using the net.list aCLI
command.
For instructions, see Enabling or Disabling IGMP Snooping.
IGMP snooping allows the host to track which UVMs need the multicast traffic and send the multicast
traffic to only those UVMs.
Note: While creating an AWS transit gateway, ensure that you select the Multicast support option. You
can enable the transit gateway for multicast traffic only when you create the transit gateway; you cannot
modify an existing transit gateway to enable multicast traffic.
5. Create an association between subnets in the transit gateway VPC attachment and the multicast
domain.
For more information, see Associating VPC attachments and subnets with a multicast domain.
6. Change the default IGMP version for all IGMP group members by running the following command on
each UVM that is intended to be a multicast receiver on the cluster:
sudo sysctl net.ipv4.conf.eth0.force_igmp_version=2
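The sysctl change above takes effect immediately but does not survive a reboot. A persistent variant, assuming eth0 is the UVM's primary interface and the drop-in file name is arbitrary, might be:

```shell
# Assumes eth0 is the UVM's primary interface (adjust if different).
# Persist the IGMPv2 setting across reboots via a sysctl drop-in file.
echo 'net.ipv4.conf.eth0.force_igmp_version = 2' | \
  sudo tee /etc/sysctl.d/99-force-igmpv2.conf

# Reload all sysctl configuration files so the setting applies now.
sudo sysctl --system
```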
• Configure the inbound security group rule to allow traffic from the sender by specifying the sender’s
IP address.
• Configure the outbound security rule that allows traffic to the multicast group IP address.
Also, allow IGMP queries from the Transit Gateway; add the source IP address as 0.0.0.0/32, and the
protocol should be IGMP. For more information, see Multicast routing - Amazon VPC.
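The security group rules above could be sketched with the AWS CLI as follows. This is a hedged example: the security group ID, sender IP, and multicast address range are placeholders, and IGMP is IP protocol number 2:

```shell
# Placeholders: replace sg-0123456789abcdef0, 10.0.1.25/32, and
# 224.0.0.0/4 with your security group ID, the sender's IP address,
# and your multicast group address range.

# Inbound: allow traffic from the multicast sender's IP address.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol all --cidr 10.0.1.25/32

# Inbound: allow IGMP queries (IP protocol 2) from the Transit Gateway,
# which sources them from 0.0.0.0/32.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=2,IpRanges=[{CidrIp=0.0.0.0/32}]'

# Outbound: allow traffic to the multicast group address range.
aws ec2 authorize-security-group-egress \
  --group-id sg-0123456789abcdef0 \
  --ip-permissions 'IpProtocol=-1,IpRanges=[{CidrIp=224.0.0.0/4}]'
```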
Procedure
2. Select the ellipsis button of a corresponding cluster and click Notification Center.
View AOS-specific alerts from the Prism web console.
4. To acknowledge a notification, in the row of a notification, click the corresponding ellipsis, and select
Acknowledge.
Procedure
Note: Ensure that you select the correct workspace from the Workspace dropdown list on the My
Nutanix dashboard. For more information on workspaces, see Workspace Management.
2. In the Clusters page, click the name of the cluster whose licensing details you want to display.
• Clusters_agents_upgrader
• Cluster_agent
• Host_agent
• Hostsetup
• Infra_gateway
• cloudnet
You can collect the logs either by using the Prism Element web console or Nutanix Cluster Check (NCC)
command line.
See Collecting Logs from the Web Console with Logbay for instructions about how to collect the logs by
using the Prism Element web console.
See Logbay Log Collection (Command Line) for instructions about how to collect the logs by using the
NCC command line.
You can collect logs using Logbay for a certain time frame and share the resulting log bundle with
Nutanix Support to investigate the reported issue. You can upload logs collected by Logbay to the
Nutanix SFTP or FTP server.
See Uploading Logbay Logs for more information on how to upload the collected logs.
In the event of a failure that impacts multiple clusters, you can first recover a cluster that will be used
to recover Prism Central (if the failure also impacted Prism Central), and then recover the remaining
failed clusters and their associated VMs and volume groups from the backups in the S3 buckets. If the
failure is not AZ-wide, and the Prism Central of the impacted cluster is hosted on another cluster that is
not impacted, then you can restore the impacted cluster from that existing Prism Central.
Note: With Cluster Protect, all the VMs in a cluster are auto-protected using a single category value and
hence are recovered by a single Recovery Plan. A single Recovery Plan can recover up to 300 entities.
You can register multiple clusters with Prism Central in the same AWS AZ and enable Cluster Protect to
back up those clusters on that Prism Central.
Note: Currently, up to five NC2 clusters registered with one Prism Central in the same AWS AZ can be
protected by Cluster Protect.
You need to follow the protection and recovery procedures individually for each cluster that needs to
be protected and recovered. Prism Central can be recovered on any AWS cluster that it was previously
registered with. All UVM and volume group data is protected automatically to an Amazon S3 bucket
with a 1-hour Recovery Point Objective (RPO). Only the two most recent snapshots per protected entity
are retained in the S3 bucket.
When the cluster recovery process is initiated, the impacted clusters are marked as failed, and new
recovery clusters with the same configurations are created through the NC2 console. If you previously
opted to use the NC2 console to create VPCs, subnets, and associated security groups, then NC2
automatically creates those resources again during the recovery process. Otherwise, you must first
manually recreate those resources in your AWS console.
Cluster Protect can protect the following services and recover the associated metadata:
• Leap
• Flow Network Security
• Prism Pro (AIOps)
• VM management
• Cluster management
• Identity and Access Management (IAMv1)
• Categories
• Networking
The following services continue to run, but they are not protected, so data associated with
them is not recovered.
• Nutanix Files
• Self-Service
• LCM
• Nutanix Kubernetes Engine
• Objects
• Catalog
• Images
• VM templates
• Reporting Template
Note: You can use the same subnet or different subnets for Prism Central and MST.
• Clusters to be protected by Cluster Protect must be registered with the same Prism Central instance.
Note: Prism Central that manages protected clusters can also be protected by Prism Central Disaster
Recovery.
• Two new AWS S3 buckets must be manually created with the bucket names prefixed with nutanix-
clusters.
• Nutanix Guest Tools (NGT) must be installed on all UVMs.
• You must re-run the CloudFormation script if you have already added your AWS account in the NC2
console, so that the IAM role that has the required permissions to access only the S3 buckets with the
nutanix-clusters prefix comes into effect.
Note: If you already have run the CloudFormation template, you must run it again to use Cluster Protect
on newly deployed NC2 clusters.
Note: Ports 30900 and 30990 are opened by default while creating a new NC2 cluster and are required for
communication between AOS and MST to back up the VM and Volume Groups data.
• The Cluster Protect feature and Protection Policies cannot be used at the same time in the same cluster
to protect the data. If a user-created protection or DR policy already protects a VM or Volume Group,
it cannot also be protected with the Cluster Protect feature. If you need to use DR configurations for a
cluster, you must use those protection policies instead of Cluster Protect to protect your data. A new DR
policy creation fails if the cluster is already protected using the Cluster Protect feature.
• You cannot hibernate or terminate the clusters that are protected by the Cluster Protect feature. You
must disable Cluster Protect before triggering hibernation or termination.
• All clusters being protected must be in the same Availability Zone. Prism Central must be deployed
within the same Availability Zone as the clusters it is protecting.
• The Cluster Protect feature is available only for new cluster deployments. Any clusters created before
AOS 6.7 cannot be protected using this feature.
• A recovered VDI cluster might consume more storage space than the initial storage space consumed
by the protected VDI cluster. This issue might arise because the logic that efficiently creates VDI clones
is inactive during cluster recovery. This issue might also occur if there are multiple clones on the source
that are created from the same image. As a workaround, you can add additional nodes to your cluster if
your cluster runs out of space during the recovery process.
For more information, see https://portal.nutanix.com/kb/14558.
• An ENI IP address might overlap with an IP address previously used by a VM on the failed cluster;
when that VM is restored on the recovered cluster, it might be restored without a NIC attached.
Procedure
a. Create clusters in a new VPC or an existing VPC using the NC2 console.
Note: While deploying a cluster, ensure that you select the option to protect the cluster.
Note: You can protect your NC2 clusters even without protecting the Prism Central instance that is
managing these NC2 clusters; however, Nutanix recommends protecting your Prism Central instance
as well.
For more information, see Protecting UVM and Volume Groups Data.
Creating S3 Buckets
You must set up two new Amazon S3 buckets: one to back up the UVM and volume group data, and
another to back up the Prism Central data. These S3 buckets must be empty and used exclusively for
UVM, volume group, and Prism Central backups.
For instructions on how to create an S3 bucket, see the AWS documentation. While creating the S3
buckets, follow the NC2-specific recommendations:
Note: NC2 creates an IAM role with the required permissions to access S3 buckets with the
nutanix-clusters prefix. This IAM role is added to the CloudFormation template. You must run
the CloudFormation template while adding your AWS cloud account. If you already have run the
CloudFormation template, you must run it again to be able to use Cluster Protect on newly deployed NC2
clusters. For more information, see https://portal.nutanix.com/kb/15256.
If the S3 buckets do not have the nutanix-clusters prefix, the commands to protect Prism
Central and clusters fail.
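As an illustrative sketch, the two buckets could be created with the AWS CLI. The bucket names and region below are placeholders; only the nutanix-clusters prefix is the required part:

```shell
# Placeholder names and region; only the "nutanix-clusters" prefix is
# required. For us-east-1, omit --create-bucket-configuration entirely.
REGION=us-west-2
for BUCKET in nutanix-clusters-example-uvm-backup \
              nutanix-clusters-example-pc-backup; do
  aws s3api create-bucket \
    --bucket "$BUCKET" \
    --region "$REGION" \
    --create-bucket-configuration LocationConstraint="$REGION"
done
```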
• To deploy a new Prism Central: Perform the instructions described in Installing Prism Central (1-Click
Internet) to install Prism Central.
When deploying Prism Central, follow these recommendations:
• The Prism Central subnet must be a private subnet, and should only be used for Prism Central.
The Prism Central subnet must not be used for UVMs.
• When creating a DHCP pool in Prism Element, ensure that at least 3 IP addresses are reserved
to be used by MST.
• While deploying Prism Central, do not change the Microservices Platform (MSP) settings
because these are required to enable MST. You must choose Private network (defaults) in the
MSP configuration when prompted.
Note: You must not use managed networks for CMSP clusters with Cluster Protect enabled.
CMSP cluster is deployed in the VXLAN/kPrivateNetwork mode only.
• Modify the User management security group of the cluster hosting Prism Central to allow traffic
from the Internal Management subnet of the cluster hosting Prism Central to the Prism Central
Note: Ports 30900 and 30990 are opened by default while creating a new NC2 cluster and are
required for communication between AOS and Multicloud Snapshot Technology (MST) to back up
the VM and Volume Groups data.
• To register a cluster with Prism Central: After you deploy Prism Central on one of the NC2 clusters
in the VPC, you must register your remaining NC2 clusters in that VPC to Prism Central that you
deployed.
To register a cluster with Prism Central, follow the steps described in Registering a Cluster with
Prism Central.
Note: Any NC2 clusters that are not configured with the Prism Central that is hosting the Multicloud
Snapshot Technology will not be protected by Prism Central.
2. Configure the Prism Central protection and UVMs data protection. For more information, see Protecting
Prism Central Configuration and Protecting UVM and Volume Groups Data.
Note: In addition to protecting Prism Central to the S3 bucket, if your Prism Central instance is registered
with multiple NC2 clusters, then you must also protect Prism Central to one or more of the NC2 clusters
it is registered with. In this case, you must prioritize recovery of Prism Central configuration from another
NC2 cluster where Prism Central configuration was backed up if that NC2 cluster has not also been lost to a
failure event. For more information, see Protecting Prism Central.
The Prism Central configuration gets backed up to the S3 bucket once every hour and is available in the
pcdr/ folder in the S3 bucket.
UUID: 8xxxxxf5-3xxx-3xxx-bxxc-dxxxxxxxxxx6
NAME: https://nutanix-clusters-xxxx-pcdr-3node.s3.us-west-2.amazonaws.com
TIME-ELAPSED-SINCE-LAST-SYNC: 30m59s
BACKUP-PAUSED: false
BACKUP-PAUSED-REASON:
TYPE: kS3
The CLI shows sync in progress until the Prism Central data is synced to S3 for the first time. After
that, the CLI shows a non-zero time elapsed since the last sync. This confirms that the Prism Central
backup has been completed.
Note: When creating a DHCP pool in Prism Element, ensure that at least 3 IP addresses are reserved to
be used with the MST and another 3 for the Prism Central VM to be deployed (that are added as Virtual IPs
during Prism Central deployment). The static IPs reserved for the MST must be outside the DHCP range of
the MST subnet. Also, 4 IPs from the DHCP range of the MST subnet will be used by the MST VMs.
Procedure
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
What to do next
Back up all UVM and Volume Groups data from NC2 clusters. For more information, see Protecting UVM
and Volume Groups Data.
Note: You must run this command separately for each NC2 cluster you want to protect by specifying the
UUID for each NC2 cluster. This command also creates a recovery point for the protected entities.
Procedure
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
Note:
If the clustermgmt-cli command fails, it might be because the clustermgmt-nc2 service was
not installed properly. You can run the following command to verify whether the clustermgmt-
nc2 service is installed:
nutanix@pcvm$ allssh "docker ps | grep nc2"
An empty response in the output of this command indicates that the clustermgmt-
nc2 service did not get installed properly. To overcome this issue, you must restart the
pc_platform_bootstrap service to install the clustermgmt-nc2 service. To do this, run the
following commands on the Prism Central VM using CLI:
nutanix@pcvm$ allssh "genesis stop pc_platform_bootstrap"
nutanix@pcvm$ allssh "cluster start"
You can rerun the following command to verify that the clustermgmt-nc2 service is installed:
nutanix@pcvm$ allssh "docker ps | grep nc2"
After you verify that the clustermgmt-nc2 service is successfully installed, you must rerun the
clustermgmt-cli command:
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b S3_bucket_name -r aws_region -i IP1,IP2,IP3 -s private_subnet
The Protection Summary page includes an overview of the protection status of all clusters. It also
provides details about the VMs that are lagging behind their RPO. You can see the cluster being
protected and the target being the AWS S3 bucket.
The Recovery Points of the VM show when the VM was last backed up to S3. Only the two most recent
snapshots per protected entity are retained in the S3 bucket.
1. Run the following command to check the Prism Central protection status by listing protection targets:
nutanix@pcvm$ pcdr-cli list-protection-targets
Because Prism Central can be protected both to S3 and to its registered clusters, you need to know the
protection target (the S3 bucket or one of the NC2 clusters) where the Prism Central configuration is backed up.
This command lists information about the Prism Central protection targets with their UUIDs. These
UUIDs are different from cluster UUIDs and are required when running the unprotect-cluster
command.
2. Run the following command on the Prism Central VM to disable Prism Central protection:
nutanix@pcvm$ pcdr-cli unprotect -u protection_target_uuid
Use the protection target UUID that you derived using the list-protection-targets command in
Step 1.
If Cluster Protect is enabled for any cluster managed by this Prism Central, a warning is issued asking
for your confirmation to proceed with unprotecting Prism Central. You can unprotect Prism Central even
if Cluster Protect is enabled for any cluster. Nutanix recommends keeping Prism Central protected for
seamless recovery of NC2 clusters.
Note: If the failure is not AZ-wide and Prism Central that is managing one of the failed clusters is hosted
on another cluster, and that cluster is not impacted, then you can restore the failed cluster from that
running Prism Central.
3. Run the following command to disable cluster protection for any NC2 cluster:
nutanix@pcvm$ clustermgmt-cli unprotect-cluster -u cluster_uuid
Replace cluster_uuid with the UUID of the NC2 cluster for which you want to disable Cluster Protect.
You can find the UUID listed as Cluster ID under General in the cluster Summary page in the NC2
console.
Procedure
2. On the Clusters page, click the name of the cluster you want to set to the Failed state.
3. Ensure that the cluster Summary page shows the Cluster Protect field under General settings as
Enabled.
8. Go to the cluster Summary page to validate that the Cluster Recovery workflow is displayed.
What to do next
After you set the cluster to the Failed state, redeploy the cluster. See Recreating a Cluster for more
information.
You must determine on your own when the failure event, such as an AWS AZ failure, impacting your cluster is over so that you can start the cluster recovery process. Nutanix does not indicate when an AWS AZ has recovered enough for your recovery cluster to be deployed.
Recreating a Cluster
When a protected cluster fails, and you set the cluster state to Failed, you need to redeploy the cluster.
Follow these steps to redeploy the cluster:
Procedure
3. On the cluster Summary page, under Cluster Recovery, click Start Cluster Recovery.
• Under General:
Note: The recovery cluster name must be different from the failed cluster name. This is enforced by the NC2 console during recovery cluster creation.
• Cloud Account, Region, and Availability Zone: These configurations from the failed cluster
that you are recreating are displayed. Your recovery cluster will use the same configuration.
• Under Network Configuration:
• When manually created VPC and subnets were used to deploy the failed cluster, the previously
used resources are displayed. You should recreate the same VPC and subnets that you had
previously created in your AWS console.
• When VPC and subnets created by the NC2 console were used to deploy the failed cluster, the
NC2 console will automatically recreate the same VPCs and subnets during the cluster recovery
process.
6. Review the cluster summary on the Summary page and then click Recreate Cluster.
What to do next
After recreating the cluster, you must recover Prism Central (if it was running on a cluster that suffered a failure event) and the user VM and volume group data. See Recovering Prism Central and User Data.
Procedure
2. On the redeployed cluster, run the following CLI command to recover Prism Central from the S3 bucket
where Prism Central data was backed up:
nutanix@pcvm$ pcdr-cli recover -b S3_bucket -r AWS_region -n PC-Subnet
Replace the variables with their appropriate values as follows:
3. Track the Prism Central recovery status in the Tasks section on the recreated Prism Element console.
Note: The Prism Central recovery might take approximately four hours. Also, the recovered Prism
Central and original Prism Central are of the same version.
What to do next
After you recover Prism Central, register any newly created NC2 clusters with the recovered Prism Central.
If the clusters that were registered with Prism Central prior to the recovery of Prism Central did not suffer
any failure, they will be auto-registered with the recovered Prism Central.
Note: After the cluster recovery is complete, the failed Prism Element remains registered with recovered
Prism Central. To remove this Prism Element, unregister the Prism Element from Prism Central. For detailed
instructions, see the KB article 000004944.
Note: The configuration data for the recovery Prism Central must be recovered from the Prism Central S3
bucket before recovering the UVM data on the recovery clusters. For more information, see Recovering
Prism Central.
Procedure
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
• S3_bucket: the S3 bucket where you want to protect the user VMs data.
• AWS_region: the AWS region where the S3 bucket is created.
• IP1,IP2,IP3: the static IPs reserved for MST.
Note: These IPs can be different than the IPs used earlier while deploying MST prior to cluster
failure.
• PC-Subnet: the AWS private subnet configured for the recovery Prism Central.
• NC2 clusters are recreated. For more information, see Recreating a Cluster.
• Prism Central is redeployed.
Note: The configuration data for the recovery Prism Central must be recovered from the Prism
Central S3 bucket before recovering the UVM data on the recovery clusters. For more information, see
Recovering Prism Central and MST.
• Multicloud Snapshot Technology is redeployed. For more information, see Recovering Prism Central
and MST.
Note: The UVM subnet names on the failed and recovered clusters must be the same for the correct
mapping of subnets in the recovery plan. If the names do not match correctly, the cluster recovery might
proceed, but the VMs are recovered without the UVM subnet attached. You can manually attach the subnet
post-recovery. If there are multiple UVM subnets, then all UVM subnets must be recreated with the same
names for the correct mapping of subnets between failed and recovered clusters.
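The name-matching rule in the note above can be illustrated with a simplified sketch (not the actual recovery implementation): subnets are matched between the failed and recovered clusters by name, and an unmatched subnet leaves the recovered VM without that subnet attached:

```python
def map_subnets(old_subnets, new_subnets):
    """Map old-cluster UVM subnets to recovery-cluster subnets by name."""
    available = set(new_subnets)
    # A subnet maps to itself if a same-named subnet exists on the recovery
    # cluster; otherwise the VM is recovered without that subnet attached.
    return {name: (name if name in available else None) for name in old_subnets}

mapping = map_subnets(["uvm-net-1", "uvm-net-2"], ["uvm-net-1"])
assert mapping == {"uvm-net-1": "uvm-net-1", "uvm-net-2": None}
```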
Procedure
1. Sign in to the Prism Central VM using the credentials provided while installing Prism Central.
a. Run the following command to list all the subnets associated with the protected Prism Elements:
nutanix@pcvm$ clustermgmt-cli list-recovery-info -u UUID_OldPE
Replace UUID_OldPE with the UUID of the old NC2 cluster.
A list of subnets is displayed.
b. Recreate these subnets on the recovery Prism Elements in the same way they were created in the
first place.
For more information, see Creating a UVM Network.
4. Run the following command to create a Recovery Plan to restore UVM data from the S3 buckets.
nutanix@pcvm$ clustermgmt-cli create-recovery-plan -o UUID_OldPE -n UUID_NewPE
Replace the variables with their appropriate values as follows:
Note: You must perform this step for each NC2 cluster you want to recover.
a. Sign in to Prism Central using the credentials provided while installing Prism Central.
b. Go to Data Protection > Recovery Plans.
You can identify the appropriate recovery plan to use by looking at the recovery plan name. It is in
the format: s3-recovery-plan-UUID_OldPE
Once the failover is complete, your UVM and Volume Groups data is recovered on the recovery
Prism Element.
Once the recovery plan is finished, your VMs are recovered.
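The naming convention above (s3-recovery-plan-UUID_OldPE) makes it possible to pick out the right plan programmatically; a minimal sketch, with hypothetical plan names:

```python
def find_recovery_plan(plan_names, old_pe_uuid):
    """Pick the plan following the s3-recovery-plan-<old PE UUID> convention."""
    target = "s3-recovery-plan-" + old_pe_uuid
    for name in plan_names:
        if name == target:
            return name
    return None

plans = ["daily-backup-plan", "s3-recovery-plan-0a1b2c3d"]
assert find_recovery_plan(plans, "0a1b2c3d") == "s3-recovery-plan-0a1b2c3d"
```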
7. Run the following command on all NC2 clusters that are recovered after the cluster failure to remove the
category values and protection policies associated with the old clusters that no longer exist.
nutanix@pcvm$ clustermgmt-cli finalize-recovery -u UUID_OldPE
Replace UUID_OldPE with the UUID of the old NC2 cluster.
Run the following command on the Prism Central VM to reprotect the cluster:
nutanix@pcvm$ clustermgmt-cli protect-cluster -u UUID_NewPE
Replace UUID_NewPE with the UUID of the new NC2 cluster.
For example:
nutanix@pcvm$ pcdr-cli unprotect -u 8xxxxxx5-3xx4-3xx1-bxxc-dbxxxxxxx0b6
Create a recovery plan, which can be executed from the Prism Central UI to recover a cluster.
Run on: Prism Central VM
Command: nutanix@pcvm$ clustermgmt-cli create-recovery-plan [flags]
For example:
nutanix@pcvm$ clustermgmt-cli create-recovery-plan -o 0xxxxxx6-cxxc-dxxx-8xxf-dxxxxxxxxx99 -n 0xxxxxxe-dxxd-fxxx-fxxe-cxxxxxxxxxe5
Flags:
• -n, --new_cluster_uuid string: UUID of the new recovery NC2 cluster.
• -o, --old_cluster_uuid string: UUID of the old, failed NC2 cluster.
• --output string: Supported output formats: ['default', 'json'] (default "default").
• -h, --help: Help for the create-recovery-plan command.

Deploy MST, which can be used to protect NC2 clusters.
Run on: Prism Central VM
Command: nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine [flags]
For example:
nutanix@pcvm$ clustermgmt-cli deploy-cloudSnapEngine -b nutanix-clusters-xxxxx-xxxx-xxxxx -r us-west-2 -i 10.0.xxx.11,10.0.xxx.12,10.0.xxx.13
Flags:
• -b, --bucket string: Name of the S3 bucket that will be used to store the backup of NC2 clusters.
• --recover: Deploys MST using old configuration data, if available on Prism Central.
• -r, --region string: Name of the AWS region where the provided S3 bucket exists.
• -i, --static_ips strings: Comma-separated list of 3 static IPs that are part of the same subnet specified by the subnet_name flag.
• -s, --subnet_name string: Name of the subnet which can be used for MST VMs.
• -h, --help: Help for the deploy-cloudSnapEngine command.

Delete MST.
Run on: Prism Central VM
For example:
nutanix@pcvm$ clustermgmt-cli delete-cloudSnapEngine

Mark completion of recovery of a cluster.
Run on: Prism Central VM
Command: nutanix@pcvm$ clustermgmt-cli finalize-recovery [flags]
For example:
nutanix@pcvm$ clustermgmt-cli finalize-recovery -u 0xxxxxxx-cxxc-dxxx-8xxx-dxxxxxxxxxx9
Flags:
• -u, --cluster_uuid string: UUID of the old NC2 cluster.
• --output string: Supported output formats: ['default', 'json'] (default "default").
• -h, --help: Help for the finalize-recovery command.

Get a list of recovery information, such as subnets that were available on the original (failed) NC2 cluster.
Run on: Prism Central VM
Command: nutanix@pcvm$ clustermgmt-cli list-recovery-info [flags]
For example:
nutanix@pcvm$ clustermgmt-cli list-recovery-info -u 00xxxxxb-0xxd-8xxx-6xx4-3xxxxxxxxx7d
Flags:
• -u, --cluster_uuid string: UUID of the NC2 cluster.
• --verbose: With the verbose flag, a detailed JSON output is returned. If the verbose flag is not specified, only the important fields, such as subnet name, IP Pool ranges, and CIDR, are returned.
• --output string: Supported output formats: ['default', 'json'] (default "default").
• -h, --help: Help for the list-recovery-info command.

Protect clusters against AZ failures by backing up the clusters in AWS S3.
Run on: Prism Central VM
Command: nutanix@pcvm$ clustermgmt-cli protect-cluster [flags]
For example:
nutanix@pcvm$ clustermgmt-cli protect-cluster -u 00xxxxxe-dxxx-fxxx-fxxe-cxxxxxxxxxe5
Flags:
• -u, --cluster_uuid string: NC2 cluster UUID.
• -l, --local_snapshot_count int: Local snapshot retention count. The default count is 2.
• -r, --rpo int: Protection RPO in minutes. The default RPO is 60 minutes.
• --output string: Supported output formats: ['default', 'json'] (default "default").
• -h, --help: Help for the protect-cluster command.

Unprotect a cluster.
Run on: Prism Central VM
Command: nutanix@pcvm$ clustermgmt-cli unprotect-cluster [flags]
For example:
nutanix@pcvm$ clustermgmt-cli unprotect-cluster -u 000xxxx6-cxxx-dxx0-8xxx-dxxxxxxxx999
Flags:
• -u, --cluster_uuid string: UUID of the NC2 cluster.
• --output string: Supported output formats: ['default', 'json'] (default "default").
• -h, --help: Help for the unprotect-cluster command.
• NC2 Console: Use the NC2 console to create, hibernate, resume, update, and terminate an NC2 cluster
running on AWS.
• Prism Element Web Console: Use the Prism Element web console to manage routine Nutanix tasks
in a single console. For example, creating a user VM. Unlike Prism Central, Prism Element is used to
manage a specific Nutanix cluster.
For more information on how to sign into the Prism Element web console, see Logging into a Cluster by
Using the Prism Element Web Console.
For more information on how to manage Nutanix tasks, see Prism Web Console Guide.
• Prism Central Web Console: Use to manage multiple Nutanix clusters.
For more information on how to sign into the Prism Central web console, see Logging Into Prism
Central.
For more information on how to manage multiple NC2 clusters, see the Prism Central Infrastructure Guide.
NC2 Console
The NC2 console displays information about clusters, organization, and customers.
The following section explains the tasks you can perform and view from this console.
Main Menu
The following options are displayed in the main menu at the top of the NC2 console:
Navigation Menu
The navigation menu has three tabs: Clusters, Organizations, and Customers. The selected tab is
displayed in the top-left corner. For more information, see Navigation Menu on page 179.
• Circle icon displays ongoing actions in the system that take a while to complete.
For example, actions like creating a cluster or changing cluster capacity.
The circle icon also displays the progress of each ongoing task; a success message appears if the task completes, and an error message appears if the task fails.
• Gear icon displays the source details of each task performed.
For example, account, organization, or customer.
Notifications
• Bell icon displays notifications if some event in the system occurs or if there is a need to act and resolve
an existing issue.
Warning: You can choose to Dismiss notifications from the Notification Center. However, dismissed notifications no longer appear for you or any other user.
• Gear icon displays source details and a tick mark to acknowledge notifications.
• Drop-down arrow to the right of each notification displays more information about the notification.
Note: If you want to receive notifications about a cluster that is not created by you, you must be an
organization administrator and subscribe to notifications of respective clusters in the Notification Center. The
cluster creator is subscribed to notifications by default.
User Menu
The Profile user name option from the drop-down list provides the following options:
• General: Edit your First name, Last name, Email, and Change password from this screen. This
screen also displays various roles assigned.
• Preferences: Displays enable or disable slider options based on your preference.
• Storage providers: Displays the storage options with various storage providers.
• Advanced: Displays various assertion fields and values.
• Notification Center: Displays the list of Tasks, Notifications, and Subscriptions.
Navigation Menu
The navigation menu has three tabs at the top: Clusters, Organizations, and Customers, and two tabs at the bottom: Documentation and Support.
Clusters
• Audit Trail: Displays the activity log of all actions performed by the user on a specific cluster.
• Users: Displays the screens for user management like User Invitations, Permissions,
Authentication Providers.
• Notification Center: Displays the complete list of all the tasks and notifications.
• Update Configuration: Displays the screens to update the settings of clusters.
• Update Capacity: Displays a screen to update the resource allocation of clusters.
• Hibernate: Opens a dialog box for Cluster Hibernation or a Resume option appears if the cluster
is already hibernated.
• Terminate: Displays a screen to delete the cluster.
Organizations
• Audit Trail: Displays the activity log of all actions performed on a specific organization.
• Users: Displays the screens for user management like User Invitations, Permissions,
Authentication Providers.
• Sessions: Displays the basic details of the organization and information about terminating the
cluster.
• Notification Center: Displays the complete list of all Tasks and Notifications.
• Cloud accounts: Displays the status of the Cloud Account if it is active (A-Green) or inactive (I-
Red).
The ellipsis icon against each cloud account displays the following options:
• Add regions: Select this option to update the regions in which the cloud account can deploy
clusters.
• Update: Select this option to create a new stack or update an existing stack.
• Deactivate: Select this option to deactivate the cloud account.
• Update: Displays the options to update settings of organizations.
Customers
• Audit Trail: Displays the activity log of all actions performed on a specific cluster.
• Users: Displays the screens for user management like User Invitations, Permissions,
Authentication Providers.
• Notification Center: Displays the complete list of all tasks and notifications.
• Cloud accounts: Displays the status of the Cloud Account if it is active (A-Green) or inactive (I-
Red).
• Update: Displays the options to update settings of customers.
Documentation
Directs you to the documentation section of NC2.
Support
Directs you to the Nutanix Support portal.
Audit Trail
Administrators can monitor user activity using the Audit Trail. Audit Trail provides administrators with an
audit log to track and search through account actions. Account activity can be audited at all levels of the
NC2 console hierarchy.
You can access the Audit Trail page for an Organization or Customer entity from the menu button to the
right of the desired entity.
The following figure illustrates the Audit Trail at the organization level.
Under the Audit Trail section header, you can search the audit trail by first name, last name, and email
address. You can also click the column titles to sort the Audit Trail by ascending or descending order.
If you want to search for audit events within a certain period, click the date range in the upper right corner
of the section. Set your desired period by clicking on the starting and ending dates in the calendar view.
You can filter your results using the filter icon in the top right corner by specific account action.
You can download the details of your Audit Trail in CSV format by clicking the Download CSV link in the
upper right corner. The CSV will provide all Audit Trail details for the period specified to the left of the
download link.
Notification Center
Admins can easily stay up to date regarding their NC2 resources with the Notification Center. Real-time
notifications are displayed in a Notification Center widget at the top of the NC2 console. The Notification
Center displays two different types of information: tasks and notifications. The information displayed in the
Notification Center can be for organizations or customer entities.
Note: Customer Administrators can see notifications for all organizations and accounts associated with
the tenant by navigating to the Customer or Organization dashboard from the initial NC2 console view and
clicking Notification Center.
Tasks
Tasks (bullet list icon) show the status of various changes made within the platform. For example, creating
an account, changing capacity settings, and so on trigger a task notification informing the admin that an
event has started, is in progress, or has been completed.
Notifications
Notifications (bell icon) differ from tasks; notifications alert administrators when specific events happen, for example, resource limits or cloud provider communication issues. There are three types of notifications: info, warning, and error.
Dismiss Tasks and Notifications
You can dismiss tasks or notifications from the Notification Center widget by selecting the task or
notification icon and clicking the dismiss (x) button inside the event.
Dismissing an event only dismisses the task or notification for your console view; other subscribed admins
still see the event.
Acknowledge Notifications
You can click the check mark icon to acknowledge and dismiss a notification for all users subscribed to that
resource. Acknowledging a notification removes it from the widget, but the notification is still available on
the Notification Center page.
Note: Acknowledging a notification will dismiss it for all administrators subscribed to the same resource.
User Roles
The NC2 console uses a hierarchical approach to organizing administration and access to accounts.
The NC2 console has the following entities:
• Customer: This entity is the highest business entity in the NC2 platform. You create multiple
organizations under a customer and then create clusters within an organization. When you sign up for
NC2, a Customer entity is created for you. You can then create an Organization, add a cloud (Azure
or AWS) account to that organization, and create clusters in that organization. You cannot create a new
Customer entity in your NC2 platform.
• Organization: This entity allows you to set up unique environments for different departments within
your company. You can create multiple clusters within an organization. You can separate your clusters
based on your specific requirements. For example, create an organization Finance and then create a
cluster in the Finance organization to run only your finance-related applications.
Users can be added from the Cluster, Organization, and Customer entities. However, the user roles
that are available while adding users vary based on whether the users are invited from the Cluster,
Organization, and Customer entities. Administrators can grant permissions based on their own level of
access. For example, while a customer administrator can assign any role to any cluster or organization
under that customer entity, an organization administrator can only grant roles for that organization and the
clusters within that organization.
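The scope rule described above (administrators can grant roles only within their own level of access) can be sketched as follows; the scope names and ranks here are illustrative, not an actual NC2 API:

```python
# Hypothetical ranking of the NC2 console hierarchy, highest scope last.
SCOPE_RANK = {"cluster": 0, "organization": 1, "customer": 2}

def can_grant(granter_scope, target_scope):
    """An administrator can grant roles only at or below their own scope."""
    return SCOPE_RANK[granter_scope] >= SCOPE_RANK[target_scope]

# A customer administrator can grant any role; an organization
# administrator cannot grant customer-level roles.
assert can_grant("customer", "organization")
assert not can_grant("organization", "customer")
```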
The following user roles are available in NC2.
Role Description
Customer Administrator Highest level of access. Customer administrators can create
and manage multiple organizations and clusters. Customer
administrators can also modify permissions for any of the user
roles.
Customer Auditor Customer Auditor users have read only access to functionality at
the customer, organizations, and account levels.
Cluster Administrator Cluster Administrator can access and manage any clusters
assigned to them by the Organization or Customer administrators.
Cluster Admin can also open, close, or extend a support tunnel for
the Nutanix Support team.
Cluster Super Admin Cluster Super Admin can open, close, or extend a support tunnel
for the Nutanix Support team.
Cluster Auditor Cluster Auditor users have read only access to the clusters under
the organization.
Cluster User Cluster User can access a specific cluster assigned to them by the
Cluster, Organization or Customer Administrator.
See the Local User Management section of the Nutanix Cloud Services Administration Guide for more
information about the following:
Note: The user roles described in the Local User Management section of the Nutanix Cloud Services
Administration Guide guide are not applicable to NC2. For the user roles in NC2, see the user roles
described in this section.
See the Nutanix Cloud Services Administration Guide for more information about authentication
mechanisms, such as multi-factor authentication and SAML authentication.
Procedure
3. Click the ellipsis icon against the desired customer entity, and click Users.
The Authentication tab displays the identity authentication providers that are currently enabled for your
account, and the relevant tabs for the enabled authentication providers are displayed. The NC2 account
administrator must have first unlocked the Enforce settings slider.
Perform the following steps to invite users based on the authentication provider.
• Application Id
• Auth provider metadata: URL or XML
• Metadata URL or Metadata XML
• Integration Name
• Custom Label
• Authentication token expiration
• Signed response
• Signed assertion
d. Click Add.
To add SAML 2 Permission:
a. Click the SAML 2 Permission tab. The SAML 2 Permissions dialog appears.
b. Click Add Permission. The Create A SAML2 Permission dialog appears.
• For provider: Select the SAML2 Provider you are designating permissions for.
• Allow Access:
• Always: Once the user is authenticated, they have access to the role you specify – no
conditions required.
• When all conditions are satisfied: The user must meet all conditions specified by the
administrator to be granted access to the role specified.
• When any condition is satisfied: The user can meet any conditions specified by the
administrator to be granted access to the role specified.
• Conditions: Specify your assertion claims and their values which correspond with the roles you
wish to grant.
• Grant roles: Select the desired roles you wish to grant to your users. You can add multiple role
sets using the Add button.
d. Click Save.
e. To update the SAML 2 permissions of the users in your account, click the SAML 2 Permissions tab.
The SAML 2 Permissions page displays the list of all users in your account.
f. Click the ellipsis icon against the user you want to edit the SAML 2 permissions for, and then click
Update. The Update a rule dialog appears.
9. To invite users with Secure Anonymous: You can create many users without email invitation or
activation. Mass user creation can be used to deliver training and certification tests to end users who
Procedure
3. Click the ellipsis icon against the organization entity, and then click Users.
• Full access to this organization and its accounts: Grants NC2 support engineers the same level
of access as a Customer Administrator.
• Full access without ability to start sessions and manage users: NC2 support engineers may not
start sessions to your workload VMs.
• No Access: NC2 support engineers have no access to your customer and organization(s).
6. If you choose to give full access, then you can choose to give full access to specific NC2 specialists.
Click Add Personnel and then enter the email address of the NC2 specialist.
To revoke access, click the trashcan symbol listed to the right of the Nutanix staff member you would
like to remove from the Authorized Nutanix Personnel list. Click Save to apply your changes.
Note: Ensure that you select the correct workspace from the Workspace list on the My Nutanix
dashboard. For more information on workspaces, see Workspace Management.
b. In the My Nutanix dashboard, go to the API Key Management tile and click Launch.
If you have previously created API keys, a list of keys is displayed.
c. Click Create API Keys to create a new key.
The Create API Key dialog appears.
• Name: Enter a unique name for your API key to help you identify the key.
• Scope: Select the NC2 scope category under Cloud from the Scope list.
• Admin: Create or delete a cluster and all permissions that are assigned to the User role.
• User: Manage clusters, hibernate and resume a cluster, update cluster capacity, and all
permissions that are assigned to the Viewer role.
• Viewer: View account, organization, cluster, and tasks on the NC2 console.
e. Click Create.
The Created API dialog is displayed.
Note: You cannot recover the generated API key and key ID after you close this dialog.
For more details on API Key management, see the API Key Management section in the Licensing
Guide.
Note: This step uses Python to generate a JWT token. You can use other programming languages, such
as JavaScript and Go.
b. Replace the API Key and Key ID in the following Python script and then run it to generate a JWT
token. Also, you can specify expiry time in seconds for the JWT token to remain valid. In the
requesterip attribute, enter the requester IP.
from datetime import datetime
from datetime import timedelta
import base64
import hmac
import hashlib
import jwt

# Replace the placeholder values below with your API key, key ID, and the
# audience URL before running the script.
api_key = "your-api-key"
key_id = "your-key-id"
aud_url = "your-audience-url"

def generate_jwt():
    curr_time = datetime.utcnow()
    payload = {
        "aud": aud_url,
        "iat": curr_time,
        "exp": curr_time + timedelta(seconds=120),
        "iss": key_id,
        "metadata": {
            "reason": "fetch usages",
            "requesterip": "enter the requester IP",
            "date-time": curr_time.strftime("%m/%d/%Y, %H:%M:%S"),
            "user-agent": "datamart"
        }
    }
    # Derive the signing secret from the API key and key ID using HMAC-SHA512.
    signature = base64.b64encode(hmac.new(bytes(api_key, 'UTF-8'),
        bytes(key_id, 'UTF-8'), digestmod=hashlib.sha512).digest())
    token = jwt.encode(payload, signature, algorithm='HS512',
        headers={"kid": key_id})
    print("Token (Validate): {}".format(token))

generate_jwt()
c. A JWT token is generated. Copy the JWT token on your system for further use. The JWT token can
be used as an Authorization header when validating the API call. The JWT token remains valid for
the duration that you have specified.
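To make the token format concrete, here is a stdlib-only sketch of how an HS512 JWT is assembled from three base64url-encoded, dot-separated segments (header.payload.signature). This illustrates the general JWT structure, not the Nutanix tooling; the secret, key ID, and claims below are placeholders:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWT segments use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_hs512_jwt(secret: bytes, claims: dict, kid: str) -> str:
    header = {"alg": "HS512", "typ": "JWT", "kid": kid}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
    # The signature covers the encoded header and payload.
    signature = hmac.new(secret, signing_input.encode(), hashlib.sha512).digest()
    return signing_input + "." + b64url(signature)

# Placeholder secret, key ID, and claims; real values come from the steps above.
token = make_hs512_jwt(b"example-secret",
                       {"iss": "example-key-id", "exp": int(time.time()) + 120},
                       "example-key-id")
assert token.count(".") == 2  # header.payload.signature
```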
Costs
Costs for deploying an NC2 infrastructure include the following:
1. AWS EC2 bare-metal instances: AWS sets the cost for EC2 bare-metal instances. Engage with AWS or
see their documentation about how your EC2 bare-metal instances are billed. For more information, see
the following links:
• EC2 Pricing
• AWS Pricing Calculator
2. NC2 on AWS: Nutanix sets the costs for running Nutanix clusters in AWS. Engage with your Nutanix
sales representatives to understand the costs associated with running Nutanix clusters on AWS.
Sizing
You can use the Nutanix Sizer tool to create the optimal Nutanix solution for your needs. See
the Sizer User Guide for more information.
Capacity Optimizations
The Nutanix enterprise cloud offers capacity optimization features that improve storage utilization and
performance. The two key features are compression and deduplication.
Compression
Nutanix systems currently offer the following two types of compression policies:
Inline
The system compresses data synchronously as it is written to optimize capacity and to maintain high
performance for sequential I/O operations. Inline compression only compresses sequential I/O to avoid
degrading performance for random write I/O.
Post-Process
For random workloads, data writes to the SSD tier uncompressed for high performance. Compression
occurs after cold data migrates to lower-performance storage tiers. Post-process compression acts only
when data and compute resources are available, so it does not affect normal I/O operations.
Nutanix recommends that you carefully consider the advantages and disadvantages of compression for
your specific applications. For further information on compression, see the Nutanix Data Efficiency tech
note.
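As a rough, generic illustration of why compression benefits vary by workload (using Python's zlib, not the Nutanix compression engine): repetitive data, typical of many sequential workloads, compresses far better than incompressible random data:

```python
import os
import zlib

repetitive = b"ABCD" * 1024      # highly compressible 4 KiB buffer
random_data = os.urandom(4096)   # effectively incompressible 4 KiB buffer

# Repetitive data shrinks dramatically; random data barely shrinks at all.
assert len(zlib.compress(repetitive)) < len(repetitive)
assert len(zlib.compress(random_data)) >= int(len(random_data) * 0.9)
```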
• Key: nutanix:clusters:cluster-uuid
• Value: UUID of the cluster created in AWS
You must add and activate the nutanix:clusters:cluster-uuid tag as a cost allocation tag in AWS, so that
Cost Governance can successfully display the cost analytics of Nutanix clusters in AWS.
For more information about setting up and using Cost Governance, see the NCM Cost Governance
documentation.
Procedure
3. In AWS, add and activate the NC2 tag nutanix:clusters:cluster-uuid as a user-defined tag.
See Activating User-Defined Cost Allocation Tags section in the AWS documentation.
The tag activates after 24 hours.
Note: Add and activate the tag by using the payer account of your organization in AWS.
4. Sign in to the Cost Governance console to see the cost analytics of your Nutanix clusters in AWS.
Procedure
3. Select AWS and your AWS account in the cloud and account selection menu.
Note: Nutanix Cloud Clusters (NC2) supports File Analytics versions 2.2.0 and later.
See the Files Analytics documentation on the Nutanix Support portal for more information about File
Analytics.
In the Prism Element web console, go to the Files page and click File Analytics.
If you are accessing the VM from inside the VPC, you can access the VM by using the File Analytics IP
address. If you want to access the file analytics VM from outside the VPC, you must configure a load
balancer that has a public IP address.
Note: NC2 recommends that you enable File Analytics for a desired file server before you add a load
balancer to the File Analytics VM.
Health Check
Nutanix provides robust mechanisms to monitor the health of your clusters by using Nutanix Cluster Check
and health monitoring through the Prism Element web console.
You can use the NC2 console to check the status of the cluster and view notifications and logs that the
NC2 console provides.
For more information on how to assess and monitor the health of your cluster, see Health Monitoring.
Routine Maintenance
This section describes routine maintenance activities such as monitoring certificates, updating software,
and managing licenses and system credentials.
Monitoring Certificates
You must monitor your certificates for expiration. Nutanix does not provide a process for monitoring
certificate expiration, but AWS provides an AWS CloudFormation template that can help you set up alarms.
See acm-certificate-expiration-check for more information. Follow the AWS best practices for certificate
renewals.
• Licensed Clusters. Displays a table of licensed clusters including the cluster name, cluster UUID,
license tier, and license metric. NC2 clusters with AOS and NCI licensing appear under Licensed
Clusters.
• Cloud Clusters. Displays a table of licensed Nutanix Cloud Clusters including the cluster name, cluster
UUID, billing mode, and status. NC2 clusters with AOS licensing appear under Cloud Clusters. NCI-
licensed clusters do not appear under Cloud Clusters.
To purchase and manage the software licenses for your Nutanix clusters, see the License Manager Guide.
Emergency Maintenance
The NC2 software can automatically perform emergency maintenance if you configure redundancy factor 2
(RF2) or RF3 on your cluster to protect against rack failures, and synchronous or asynchronous replication
to protect against AZ failures. When a node fails, NC2 detects the failure and replaces the failed node
with a new node.
Hosts in a cluster are deployed by using a partition placement group with seven partitions. A placement
group is created for each host type, and the hosts are balanced within the placement group. The placement
group, along with the partition number, is translated into the rack ID of the node. This enables AOS Storage
to place metadata and data replicas in different fault domains.
A redundancy factor 2 (RF2) configuration of the cluster protects data against a single-rack failure, and an
RF3 configuration protects against a two-rack failure. Additionally, to protect against multiple correlated
failures within a data center or an entire AZ failure, Nutanix recommends that you set up synchronous
replication to a second cluster in a different AZ in the same Region, or asynchronous replication to an AZ
in a different Region.
See Data Protection and Recovery with Prism Element for more information.
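The rack-ID translation and rack-aware replica placement described above can be sketched as follows. This is a simplified illustration under stated assumptions (hypothetical host types and a naive first-fit selection), not the actual AOS placement algorithm:

```python
PARTITIONS = 7  # partition placement groups used here have seven partitions

def rack_id(host_type: str, partition: int) -> str:
    """Translate a host's placement group (one per host type) and its
    partition number into a rack ID, as AOS Storage sees it."""
    return f"{host_type}-p{partition}"

def place_replicas(hosts, rf):
    """Pick `rf` hosts whose rack IDs are all distinct, so that each
    replica lands in a different fault domain.

    `hosts` is a list of (host_type, partition) tuples. This is a naive
    first-fit sketch; real placement also balances load across hosts.
    """
    chosen, racks = [], set()
    for host in hosts:
        rid = rack_id(*host)
        if rid in racks:
            continue  # skip hosts that share a rack with a chosen replica
        chosen.append(host)
        racks.add(rid)
        if len(chosen) == rf:
            return chosen
    raise ValueError("not enough distinct racks for the requested redundancy factor")
```

With RF2, the two replicas land in two different partitions (racks), so a single-rack failure leaves one intact copy; with RF3, three distinct racks tolerate a two-rack failure.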
Note: NC2 detects a node failure in a few minutes and brings a replaced node online in approximately one
hour; this duration varies depending on the time taken for data replication, the customer’s specific setup, and
so on.
• LCM-based AHV upgrades in NC2 clusters time out because bare-metal EC2 instances take a long
time to boot. See KB Article 000013370.
• The cluster resume workflow hangs when S3 connectivity is lost on one of the CVMs. See KB Article
000013499.
Procedure
2. Select one to five stars to rate the page. A single star means poor, and five stars mean excellent.
Support
You can access the technical support services in a variety of ways to troubleshoot issues with your Nutanix
cluster. See the Nutanix Support Portal Help for more information.
Nutanix offers a support tier called Production Support for NC2.
See Product Support Programs under Cloud Services Support for more information about the Production
Support tier and SLAs.
AWS Support
Nutanix recommends that you sign up for an AWS Support Plan subscription for technical support of AWS
entities such as Amazon EC2 instances, VPCs, and more. See AWS Support Plan Offerings for more
information.