- Recovery plans do not support overlapping subnets in a network-mapping configuration. Do not create virtual networks with the same name or overlapping IP address ranges.
  - How to retain static IP addresses at the DR site: should the advanced setting be used or not?
- Let’s check the manual method for replication too
- Pause and resume sync
- Decoupling of the Entities
The decoupled stretch state enables the system to identify stale entities and automatically delete them. After an unplanned failover, the decoupling operation prevents the stale entities on the primary cluster from consuming additional system resources (by powering off the guest VMs or removing VM attachments to volume groups) and cleans up those entities.
Note: The decoupled stretch state is set only when both the primary and the recovery cluster support decoupling of the entities. If some clusters in your deployment are upgraded to an AOS version that supports the decoupled state while other clusters run unsupported AOS versions, the entities do not go into the decoupled state.
The following happens after an unplanned failover:
• The primary cluster goes into the decoupled state pre-emptively when there is network isolation.
• After decoupling, when the primary cluster becomes functional, the entities are automatically powered off and deleted, and reverse synchronization happens from the recovery cluster to the primary cluster.
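The eligibility rule above (both ends of the stretch must support decoupling) can be sketched as a simple check. This is an illustrative assumption of the logic, not Nutanix code; the minimum AOS version shown is a placeholder, not a real threshold.

```python
# Sketch of the decoupled-state eligibility rule: the entities enter the
# decoupled state only when BOTH the primary and the recovery cluster run an
# AOS version that supports decoupling. MIN_SUPPORTED_AOS is an assumed
# placeholder version, not an official Nutanix value.

MIN_SUPPORTED_AOS = (6, 6)  # hypothetical minimum version supporting decoupling

def supports_decoupling(aos_version):
    """True if this cluster's AOS version (major, minor) supports decoupling."""
    return aos_version >= MIN_SUPPORTED_AOS

def decoupled_state_possible(primary_aos, recovery_aos):
    # Mixed deployments (one side on an unsupported version) never decouple.
    return supports_decoupling(primary_aos) and supports_decoupling(recovery_aos)

print(decoupled_state_possible((6, 7), (6, 6)))  # both support -> True
print(decoupled_state_possible((6, 7), (6, 5)))  # mixed versions -> False
```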
- Convert from NearSync to Sync or Async replication
- Autonomous scheduling
To protect and recover your entities, you can use the autonomous scheduling option of
nearsync replication schedules. Autonomous scheduling generates a continuous stream of
local recovery points for 15-minute time slots. You can select the time for recovery point
generation using a slider on the Prism Central web console. The slider length indicates the
time slot for the stream and is set by default to 15 minutes.
For example, you create a protection policy with a four-minute schedule to protect an entity.
The protection policy starts generating the recovery points, and the recovery points are
listed in the Recovery Points tab under a VM list. A 15-minute stream of local recovery points
is generated, and you can click Pick a Time to open a slider widget that allows you to select
a time with a precision of one minute. Then, click Clone.
Cloning generates a clone of that VM at that exact time. The clone is listed in the Recovery
Points tab with an auto-generated numerical name and the exact timestamp to help you
identify the recovery points you created.
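The slider behavior described above (a 15-minute stream, selectable with one-minute precision) can be illustrated with a short sketch. This is not the Prism Central API; the function name and default slot length are assumptions taken from the text.

```python
# Illustrative sketch of the "Pick a Time" slider: given the start of the
# 15-minute stream of local recovery points, list the minute-precision times
# a user could select. Not Nutanix code; names are hypothetical.

from datetime import datetime, timedelta

def candidate_recovery_times(slot_start, slot_minutes=15):
    """Return one candidate recovery time per minute within the time slot."""
    return [slot_start + timedelta(minutes=m) for m in range(slot_minutes)]

times = candidate_recovery_times(datetime(2024, 1, 1, 12, 0))
print(len(times))                    # 15 one-minute choices in the slot
print(times[4].strftime("%H:%M"))    # prints 12:04
```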
- Retention check (how many recovery points are kept and the required storage calculations) with linear and rollup retention.
snapshot reserve = (frequency of snapshots × change rate per frequency)
                 + (change rate per frequency × # of snapshots in a full Curator scan × 0.1)

Example:
snapshot reserve = (10 × 35,980 MB) + (35,980 MB × 1 × 0.1)
                 = 359,800 MB + 3,598 MB
                 = 363,398 MB ≈ 363 GB
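The snapshot-reserve arithmetic above can be wrapped in a small helper for reuse with other inputs. Treat this as a sizing sketch mirroring the worked example (10 snapshots, 35,980 MB change rate, 1 snapshot per full Curator scan), not an official sizing tool.

```python
# Sizing sketch of the snapshot-reserve formula from the notes above.
# Inputs mirror the worked example; this is illustrative, not an official tool.

def snapshot_reserve_mb(num_snapshots, change_rate_mb, snaps_per_curator_scan):
    """Reserve (MB) = base snapshot space + 10% Curator-scan overhead."""
    base = num_snapshots * change_rate_mb
    overhead = change_rate_mb * snaps_per_curator_scan * 0.1
    return base + overhead

reserve = snapshot_reserve_mb(10, 35_980, 1)
print(reserve)                 # 363398.0 MB
print(round(reserve / 1000))   # ~363 GB, matching the worked example
```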
- Automatic in protection policy but manual in recovery plan
- Application-consistent recovery points.
Do not take Nutanix-enabled application-consistent recovery points while using any third-party backup provider-enabled VSS snapshots (for example, Veeam).
- Need to check the pre-freeze and post-thaw scripts for Microsoft SQL with Wally; will they conflict with the backup?
Applications supporting application-consistent recovery points without scripts: only the applications listed in the Compatibility and Interoperability Matrix support application-consistent recovery points without pre-freeze and post-thaw scripts.
- Stretched Layer 2 to be checked
- DNS Re-configuration
- Manual replication for test
- Networking Requirements (page 196) for retaining static IP addresses.
- Nutanix VM mobility drivers are installed in the protected guest VMs. The drivers are required to access the guest VMs after failover.
- You cannot migrate vDisks of a VM that is protected by a protection policy. For more information, see Disaster Recovery Considerations in Live vDisk Migration Across Storage Containers: https://portal.nutanix.com/page/documents/details?targetId=AHV-Admin-Guide:ahv-vdisk-migration-c.html
- Recommendations for DR configuration between on-prem AZs: check and list them in the design.
- Operations guide for add, remove, pause, failback, failover, etc. (let’s check the operations guide).
- Self-service restores
- Manual clone, revert
- Install NGT
- How to back up Prism Central for recovery in case of failure
Note:
- If you unpair the AZs while the guest VMs in the Nutanix clusters are still in synchronization,
the Nutanix cluster becomes unstable. Therefore, disable synchronous replication and clear
stale stretch parameters on both the primary and recovery clusters (Prism Element) before
unpairing the AZs.
- The Nutanix cluster must also have sufficient memory to support a hot add of memory to all Prism Central nodes when you enable Nutanix Disaster Recovery from Prism Central. A small Prism Central instance (4 vCPUs, 16 GB memory) requires a hot add of 4 GB, and a large Prism Central instance (8 vCPUs, 32 GB memory) requires a hot add of 8 GB. If you enable Nutanix Flow, each Prism Central instance requires an extra hot add of 1 GB.
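The memory headroom above is simple arithmetic, so a small sketch can show the total hot-add requirement for a scale-out deployment. The figures (4 GB small, 8 GB large, +1 GB per instance for Flow) come from the note; the function and size names are assumptions for illustration.

```python
# Sketch of the Prism Central hot-add memory headroom described above.
# Figures are taken from the note; names are illustrative assumptions.

HOT_ADD_GB = {"small": 4, "large": 8}  # hot add per PC instance, per the note
FLOW_EXTRA_GB = 1                      # extra per instance when Flow is enabled

def required_hot_add_gb(pc_size, num_instances, flow_enabled):
    """Total free cluster memory (GB) needed to hot-add to all PC nodes."""
    per_instance = HOT_ADD_GB[pc_size] + (FLOW_EXTRA_GB if flow_enabled else 0)
    return per_instance * num_instances

# A three-node (scale-out) large Prism Central with Flow enabled:
print(required_hot_add_gb("large", 3, True))   # 27 GB
print(required_hot_add_gb("small", 1, False))  # 4 GB
```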