
VPLEX SolVe Generator

Solution for Validating your engagement

Topic
VPLEX Customer Procedures
Selections
Procedures: Manage
Management Procedures: Shutdown
Shutdown Procedures: VS2 Shutdown Procedures
Select a cluster shutdown/restart for each release: 6.0 and later
VPLEX VS2 hardware version: Both clusters
What type of VS2 hardware: Metro

Generated: September 30, 2021 10:27 AM GMT

REPORT PROBLEMS

If you find any errors in this procedure or have comments regarding this application, send email to
[email protected]

Copyright © 2021 Dell Inc. or its subsidiaries. All Rights Reserved.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION (“EMC”)
MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE
INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-
INFRINGEMENT AND ANY WARRANTY ARISING BY STATUTE, OPERATION OF LAW, COURSE OF
DEALING OR PERFORMANCE OR USAGE OF TRADE. IN NO EVENT SHALL EMC BE LIABLE FOR
ANY DAMAGES WHATSOEVER INCLUDING DIRECT, INDIRECT, INCIDENTAL, CONSEQUENTIAL,
LOSS OF BUSINESS PROFITS OR SPECIAL DAMAGES, EVEN IF EMC HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGES.

EMC believes the information in this publication is accurate as of its publication date. The information is
subject to change without notice. Use, copying, and distribution of any EMC software described in this
publication requires an applicable software license.

Dell, EMC, Dell EMC and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other
trademarks may be the property of their respective owners.

Publication Date: September, 2021

version: 2.9.0.73

Contents
Preliminary Activity Tasks .......................................................................................................4
Read, understand, and perform these tasks.................................................................................................4

Shut down/restart both clusters in a VS2 Metro......................................................................5


Order to shut down hosts, clusters, and other components..........................................................................5
Before you begin...........................................................................................................................................5
Additional documentation ..........................................................................................5
About connecting to VPLEX management servers..................................................................................6
About login to the management server and VPlexcli ...............................................................................6
Phase 1: Shut down both clusters in a VS2 Metro..................................................................7
Task 1: Connect to the management server and login to the VPlexcli on cluster-1......................8
Task 2: Connect to the management server and login to the VPlexcli on cluster-2......................8
Task 3: Change the transfer size for all distributed-devices to 128K. ...........................................8
Task 4: Verify current data migration status..................................................................................9
Task 5: Stop the I/O on the hosts that are using VPLEX volumes on cluster-1 and cluster-2 ......9
Task 6: (If applicable) Login to the management server and VPlexcli on cluster-2 ....................10
Task 7: Check status of rebuilds, and wait for rebuilds to complete ...........................................10
Task 8: Verify VPLEX health.......................................................................................................10
Task 9: Collect diagnostics .........................................................................................................12
Task 10: (If applicable) Disable RecoverPoint consistency groups using VPLEX volumes..........13
Task 11: (If applicable) Power down RecoverPoint cluster...........................................................14
Task 12: Determine if battery conditioning is in progress on cluster-1 and cluster-2....................14
Task 13: Disable battery-conditioning on cluster-1 and cluster-2, if enabled................................15
Task 14: Disable call-home on cluster-1 and cluster-2, if enabled................................................16
Task 15: Determine the winner cluster for distributed consistency groups and distributed devices ......16
Task 16: Set the winner cluster for all distributed synchronous consistency groups ....................17
Task 17: Set the winner cluster for all distributed devices outside consistency groups................18
Task 18: (If applicable) Determine RecoverPoint enabled distributed consistency groups that
have a different detach-rule ...................................................................................................................19
Task 19: (If applicable) Disable VPLEX Witness ..........................................................................19
Task 20: Shut down the VPLEX firmware on the cluster that is not the winning cluster ...............20
Task 21: (If applicable) Manually resume any suspended RecoverPoint enabled consistency
groups on the winning cluster ................................................................................................................22
Task 22: Shut down the VPLEX firmware on the remaining cluster..............................................22
Task 23: Shut down the VPLEX directors and optional COM switches on cluster-1 ....................23
Task 24: Shut down the VPLEX directors and optional COM switches on cluster-2 ....................25

Task 25: Shut down the management server on cluster-1 and cluster-2......................................27
Task 26: Shut down power to the VPLEX cabinet of cluster-1 and cluster-2................................27
Task 27: (If applicable) Exit the SSH sessions .............................................................................28
Task 28: (If applicable) Restore your laptop settings ....................................................................29
Task 29: (If applicable) Restore the default cabling arrangement.................................................29
Phase 2: Perform maintenance activities on cluster-1 and cluster-2 ....................................29
Phase 3: Restart cluster-1 and cluster-2...............................................................................30
Order to restart hosts, clusters, and other components..............................................................................30
Task 30: Bring up the VPLEX components on cluster-1 and cluster-2 .........................................31
Task 31: Connect to the management server on cluster-1 ...........................................................33
Task 32: Dual-engine or quad-engine clusters only: Verify COM switch health ...........................33
Task 33: (If applicable) Change VPLEX Witness Server IP address and/or management server
IP address of cluster-1 and/or cluster-2.................................................................................................35
Task 34: Verify the VPN connectivity ............................................................................................35
Task 35: (If applicable) Enable battery-conditioning on the SPS of cluster-1 and cluster-2 .........36
Task 36: (If applicable) Power up RecoverPoint cluster ...............................................................36
Task 37: (If applicable) Enable RecoverPoint consistency groups using VPLEX volumes...........36
Task 38: Verify the health of the clusters ......................................................................................37
Task 39: (If applicable) Resume volumes at cluster-1 and cluster-2 ............................................37
Task 40: (If applicable) Enable VPLEX Witness ...........................................................................37
Task 41: Check rebuild status and wait for rebuilds to complete ..................................................38
Task 42: (If applicable) Restore the original rule-sets for consistency groups..............................38
Task 43: (If applicable) Restore the original rule-sets for distributed devices...............................39
Task 44: (If applicable) Enable call-home .....................................................................................39
Task 45: Collect diagnostics .........................................................................................................40
Task 46: Exit the PuTTY sessions ................................................................................................40
Task 47: (If applicable) Restore the default cabling arrangement.................................................40
Task 48: Restore your service laptop settings ..............................................................................41
Task 49: Remount VPLEX volumes on hosts connected to cluster-1 and cluster-2 .....................41

Preliminary Activity Tasks
This section may contain tasks that you must complete before performing this procedure.

Read, understand, and perform these tasks


1. Table 1 lists tasks, cautions, warnings, notes, and/or knowledgebase (KB) solutions that you need to
be aware of before performing this activity. Read, understand, and when necessary perform any
tasks contained in this table and any tasks contained in any associated knowledgebase solution.

Table 1 List of cautions, warnings, notes, and/or KB solutions related to this activity

000171121: To provide feedback on the content of generated procedures

2. This is a link to the top trending service topics. These topics may or may not be related to this activity. This is merely a proactive attempt to make you aware of any KB articles that may be associated with this product.

Note: There may not be any top trending service topics for this product at any given time.

VPLEX Top Service Topics


Shut down/restart both clusters in a VS2 Metro


This procedure describes the tasks to shut down and restart both clusters in a VS2 VPLEX Metro.

Order to shut down hosts, clusters, and other components

CAUTION: During the cluster shutdown procedure, before executing the shutdown command, DO NOT
DISABLE the WAN COM on any of the VPLEX directors (by disabling one or more directors' WAN COM
ports, or by disabling the external WAN COM links via the WAN COM switches). Disabling the WAN COM
before executing the 'cluster shutdown' command triggers the VPLEX failure recovery process for
volumes, which can result in the 'cluster shutdown' command hanging. Disabling the WAN COM before
the cluster shutdown has not been tested and is not supported.

CAUTION: If you are shutting down ALL the components in the SAN, shut down the components in the
following order:

1. Hosts connected to the VPLEX cluster.


This enables an orderly shutdown of all applications using VPLEX virtual storage.
2. RecoverPoint (if present in the configuration)
3. Components in the cluster’s cabinet, as described in this document
4. Storage arrays from which the cluster is getting the I/O disks and the meta-volume disks
5. Front-end and back-end Fibre Channel switches

Before you begin


Before you begin, confirm that you have the following information:
• IP address of the VPLEX management server for cluster-1 and cluster-2
• IP addresses of the hosts that are connected to cluster-1 and cluster-2
• (If applicable) IP addresses and login information for the RecoverPoint cluster(s) attached to cluster-1
or cluster-2
• All VPLEX login usernames and passwords.
Default usernames and passwords for the VPLEX management servers, VPlexcli, and Fibre
Channel COM switches are published in the EMC VPLEX Security Configuration Guide.

Note: The customer might have changed some usernames or passwords. You must ensure that
you know any changed passwords or that the customer is available when you need the changed
passwords.

Additional documentation
The following VPLEX documents are available on EMC Support:
• EMC VPLEX CLI Guide
• EMC VPLEX Administration Guide

• EMC VPLEX Security Configuration Guide
The following RecoverPoint documents are available on EMC Support:
• RecoverPoint Deployment Manager 1.1 Product Guide
• VPLEX Technical Note
This document references procedures published in the generator. The following lists procedures and the
steps to access them in the generator:
• Change the management server address:
• Changing the Cluster Witness Server’s public IP address
• Configure 3-way VPN between Cluster Witness Server and VPLEX cluster

About connecting to VPLEX management servers


There are two options to connect to the management server:
• Use a remote host with access to management server’s public IP.
• Use a service laptop to connect to the management server’s service port.
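If you use a remote host, any standard SSH client works. As a minimal sketch, assuming OpenSSH, where <management-server-public-ip> is a placeholder for the management server's public IP address:

ssh service@<management-server-public-ip>

When prompted, enter the password for the service account (see the EMC VPLEX Security Configuration Guide for defaults).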
Use the following procedure to connect to the management server through its service port:

1. [ ] The first step depends on whether the cabinet type is EMC or non-EMC:
• EMC cabinet:

a. At the front of the cabinet, remove the filler panel at the U23 position.
b. Slide the laptop tray out, and place your laptop on the tray.
c. Depress the tab that secures the red service cable to the tray, to loosen the cable tie.
d. Connect the cable to the Ethernet port on your laptop. (The other end is already connected to
the service port on the VPLEX management server as shown in Figure 1.)
• Non-EMC cabinet:

a. Locate the red service cable, hanging near the right rear of the cabinet.
b. Connect the free end of the cable to the Ethernet port on your laptop. (The other end is already
connected to the service port on the VPLEX management server as shown in Figure 1.)

Figure 1 Service port on VPLEX management server (the 1 Gb/s public Ethernet port connects to the customer IP network through a customer-provided Ethernet cable)

About login to the management server and VPlexcli


Use PuTTY (version 0.60 or later) or a similar SSH client, connect to the public IP address of the
management server, and login as user service.

Note: Refer to the EMC VPLEX Security Configuration Guide (on EMC Support) for default passwords.

IMPORTANT: Verify that your SSH client or PuTTY is set to use SSH protocol version 2.

In PuTTY, for example, the protocol version is set under Connection > SSH.

1. [ ] Open a session to the management server, and login with username service.
2. [ ] At the prompt, type vplexcli:
service@ManagementServer:~> vplexcli
Trying 0.0.0.0...
Connected to 0.
Escape character is '^]'.

A prompt for user name appears.


3. [ ] Login to the VPLEX CLI with username service and password.
Enter User Name: service
Password:
creating
logfile:/var/log/VPlex/cli/session.log_service_localhost_T18377_20100922142015

VPlexcli:/>

Phase 1: Shut down both clusters in a VS2 Metro

CAUTION: If any step you perform creates an error message or fails to give you the expected result,
consult the troubleshooting information available in the generator, or contact the EMC Support Center. Do not
proceed until the issue has been resolved.

CAUTION: This document assumes that all the existing SAN components and the access to them from
VPLEX components have not changed as a part of the maintenance activity. If changes have been made,
please contact EMC Customer Support to plan this activity.

Task 1: Connect to the management server and login to the VPlexcli on cluster-1
1. [ ] Using PuTTY (version 0.60 or later) or a similar SSH client, connect to the public IP address of
the management server on cluster-1.
2. [ ] Login as user service.
3. [ ] At the system prompt, type vplexcli and login as user service.
Refer to About connecting to VPLEX management servers on Page 6 for the options and steps to
connect to the management server.
Refer to About login to the management server and VPlexcli on Page 6 for the steps to login to the
CLI.

Task 2: Connect to the management server and login to the VPlexcli on cluster-2
1. [ ] Using PuTTY (version 0.60 or later) or a similar SSH client, connect to the public IP address of
the management server on cluster-2.
2. [ ] Login as user service.
3. [ ] At the system prompt, type vplexcli and login as user service.
• Refer to About connecting to VPLEX management servers on Page 6 for the options and steps to
connect to the management server.
• Refer to About login to the management server and VPlexcli on Page 6 for the steps to login to the
CLI.

For the rest of this procedure:


Commands typed in the CLI session to cluster-1 are tagged with this icon: VPlexcli-1
Commands typed in the CLI session to cluster-2 are tagged with this icon: VPlexcli-2
Commands typed in the LINUX shell session to cluster-1 are tagged with: Linux shell-1
Commands typed in the LINUX shell session to cluster-2 are tagged with: Linux shell-2

Task 3: Change the transfer size for all distributed-devices to 128K.


1. [ ] VPlexcli-1 Type the ls -al command from the /distributed-storage/distributed-devices CLI
context to display the value for “Transfer size” for all distributed devices.
For more information on transfer-size, refer to Administration Guide’s section on About rebuilds.
VPlexcli:/distributed-storage/distributed-devices> ls -al

Name                 Status   Operational  Health  Auto    Rule Set  Transfer
                              Status       State   Resume  Name      Size
-------------------  -------  -----------  ------  ------  --------  --------
DR1_C1-C2_1gb_dev1 running ok ok true - 2M
DR1_C1-C2_1gb_dev10 running ok ok true - 2M
DR1_C1-C2_1gb_dev11 running ok ok true - 2M
.
.
.
The transfer size must be 128K or less.

If the transfer size is greater than 128K, proceed to the next step.
If the transfer size is 128K or less, skip to Task 4:Verify current data migration status.
2. [ ] VPlexcli-1 This step varies depending on how many distributed devices have a transfer-size
greater than 128K:
• If all distributed devices have a transfer-size greater than 128K, type the following command to
change the transfer size for all devices:
VPlexcli:/distributed-storage/distributed-devices> set *::transfer-size 128K

Note: This command may take a few minutes to complete.

• If only some distributed devices have a transfer-size greater than 128K, type the following commands
to change the transfer-size for the specified distributed device (a worked example follows step 3):
cd <distributed_device_name>

set <distributed_device_name>::transfer-size 128K

3. [ ] Type the ls -al command to verify that the transfer size value for all distributed-devices is 128K
or less.
VPlexcli:/distributed-storage/distributed-devices> ls -al
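For illustration only, assuming the device DR1_C1-C2_1gb_dev1 from the sample output above still shows a 2M transfer size, the change and the verification could look like the following, using the same attribute syntax as the wildcard command in step 2:

VPlexcli:/distributed-storage/distributed-devices> set DR1_C1-C2_1gb_dev1::transfer-size 128K
VPlexcli:/distributed-storage/distributed-devices> ls -al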

Task 4: Verify current data migration status

CAUTION: Any data migration job initiated on a cluster or between clusters will pause when the clusters
are shut down and will resume when the clusters are restarted.

1. [ ] VPlexcli-1 Verify whether any data migration is ongoing (a quick CLI check is sketched after step 2). Refer to the appropriate procedure described in the VPLEX Administration Guide:
• ‘Monitor a migration’s progress’ for ‘One-time data migration’
• ‘Monitor a batch migration’s progress’ for ‘Batch migrations’
2. [ ] If any migrations are ongoing, do one of the following:
• If the data being migrated must be available on the target when the first cluster is shut down,
wait for the data migrations to complete before proceeding with this procedure.
• If the data being migrated does not need to be available, proceed to Task 5:Stop the I/O on the
hosts that are using VPLEX volumes on cluster-1 and cluster-2.
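As a quick CLI check (a sketch only; the Administration Guide procedures referenced in step 1 remain the authoritative method), ongoing migrations typically appear under the data-migrations contexts and can be listed with ll:

VPlexcli:/> ll /data-migrations/device-migrations
VPlexcli:/> ll /data-migrations/extent-migrations

Any job that has not completed should be handled as described in step 2 before continuing.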

Task 5: Stop the I/O on the hosts that are using VPLEX volumes on cluster-1 and cluster-2
Note: This task requires access to the hosts accessing the storage through both clusters, including cross-
connected hosts. Coordinate with the host administrators if you do not have access to the hosts.

1. [ ] Login to each host connected to the VPLEX directors in cluster-1 and cluster-2, and stop the I/O
applications.
2. [ ] Depending on the methods supported by the host OS using the VPLEX volumes, let the I/O
drain from the hosts by:
• Shutting down the hosts, and/or

• Unmounting the file systems
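For example, on a Linux host the file systems on VPLEX volumes might be drained and unmounted as follows. This is a sketch only; the mount point /mnt/vplex_data is hypothetical, and the exact steps depend on the host OS and the applications involved:

# confirm that no process is still using the file system, then flush and unmount
fuser -vm /mnt/vplex_data
sync
umount /mnt/vplex_data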

Task 6: (If applicable) Login to the management server and VPlexcli on cluster-2
The VPlexcli session to cluster-2 may have timed out.
Use PuTTY (version 0.60 or later) or a similar SSH client to connect to the public IP address of the
management server on cluster-2, and login as user service.
Refer to About login to the management server and VPlexcli on Page 6 for the steps to login.

Task 7: Check status of rebuilds, and wait for rebuilds to complete

Note: Rebuilds may take some time to complete while I/O is in progress. For more information on
rebuilds, please check the Administration Guide -> Data migration->About rebuilds section.

1. [ ] VPlexcli-1 Type the rebuild status command and verify that all rebuilds on distributed devices
are complete before shutting down the clusters.
VPlexcli:/> rebuild status

If rebuilds are complete the command will report the following output:

Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.

Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds

2. [ ] VPlexcli-2 Repeat Step 1. [ ] on cluster-2.

Task 8: Verify VPLEX health


1. [ ] VPlexcli-2 From the VPlexcli prompt on cluster-2, type the following command, and confirm that
the operational and health states appear as ok:
health-check

2. [ ] VPlexcli-1 Repeat Step 1. [ ] on cluster-1.


3. [ ] The next step varies depending on the number of engines in the clusters:
• If both clusters are single-engine configurations, skip to Task 9:Collect diagnostics.
• If either cluster is a dual-engine or quad-engine configuration, proceed to the following steps to
verify the health of the Fibre Channel COM switches:
a. At the VPlexcli prompt on the first (or only) multiple-engine cluster, type the following command:
exit

b. Linux shell-1 At the shell prompt, type the following command to connect to switch A:
telnet <switch_address>

where <switch_address> is identified in Table 1.

Table 1 Fibre Channel COM switch addresses

Switch Address in cluster-1 Address in cluster-2

A 128.221.252.34 128.221.252.66

B 128.221.253.34 128.221.253.66

c. Login with username service.

Note: Default usernames and passwords for the VPLEX management servers, VPlexcli, and Fibre
Channel COM switches are published in the EMC VPLEX Security Configuration Guide.

d. Type the following command in order to display the Fabric OS version:


version

Output example (partial):


FC-Switch-A:admin> version
Kernel: 2.6.14.2
Fabric OS: v6.3.2b
Made on: Wed Nov 10 23:50:28 2010
Flash: Wed Feb 1 19:40:37 2012
BootProm: 1.0.9

e. Switch interface Verify that all components are in a healthy state:


If the switch version is 7.4.2a or later, type this command:
mapsdb --show

Output example:
FC-Switch-A:service> mapsdb --show

1 Dashboard Information:
=======================

DB start time: Tue Sep 5 15:22:42 2017


Active policy: dflt_base_policy
Configured Notifications: None
Quarantined Ports : None

2 Switch Health Report:


=======================

Current Switch Policy Status: HEALTHY

3.1 Summary Report:


===================

Category |Today |Last 7 days |


--------------------------------------------------------------------------------
Fru Health |No Errors |No Errors |
Switch Resource |No Errors |No Errors |

3.2 Rules Affecting Health:


===========================

Category(Rule Count)|RepeatCount|Rule Name |Execution Time
|Object |Triggered Value(Units)|
--------------------------------------------------------------------------------
----------------------------------------

MAPS is not Licensed. MAPS extended features are available ONLY with License

If the switch version is earlier than 7.4.2a, type this command:


switchstatusshow

Output example:
Switch Health Report Report time: 01/18/2010 10:09:28 PM
Switch Name: FC-SWITCH-A
IP address: 128.221.252.34
SwitchState: HEALTHY
Duration: 123:10

Power supplies monitor HEALTHY


Temperatures monitor HEALTHY
Fans monitor HEALTHY
Flash monitor HEALTHY
Marginal ports monitor HEALTHY
Faulty ports monitor HEALTHY
Missing SFPs monitor HEALTHY
Fabric Watch is not licensed
Detailed port information is not included

f. Type the following command to terminate the switch session:


exit

g. Repeat Steps b through f on switch B.


h. From the Linux shell prompt, type the following command to connect to the VPlexcli:
vplexcli

i. Login as user service.


4. [ ] If both clusters are multiple-engine, repeat steps a through i for the other cluster.

Task 9: Collect diagnostics


1. [ ] VPlexcli-1 Type the following command to collect configuration information and log files from
all directors and the management server:
collect-diagnostics --minimum

The information is collected, compressed in a zip file, and placed in the directory /diag/collect-diagnostics-out on the management server.
2. [ ] After the log collection is complete, use FTP or SCP to transfer the logs from /diag/collect-diagnostics-out to another machine (see the example after step 3).
3. [ ] If RecoverPoint is deployed on either or both VPLEX clusters, proceed to Task 10:(If applicable)
Disable RecoverPoint consistency groups using VPLEX volumes.
If RecoverPoint is NOT deployed on either VPLEX cluster, skip to Task 12:Determine if battery
conditioning is in progress on cluster-1 and cluster-2.
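For example, the collected file can be pulled to another machine with SCP. This is a sketch only; the IP address, file name, and destination path are placeholders:

scp service@<management-server-ip>:/diag/collect-diagnostics-out/<collect-diagnostics-file>.zip /path/to/local/destination/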

Task 10: (If applicable) Disable RecoverPoint consistency groups using VPLEX volumes

CAUTION: This task disrupts replication on volumes that are part of the RecoverPoint consistency
groups that are being disabled. Ensure that you perform this task on the correct RecoverPoint cluster and
RecoverPoint consistency group.

CAUTION: EMC does not support deployment of RecoverPoint on both VPLEX clusters in a Metro
configuration.

1. [ ] VPlexcli-1 Type the ll /recoverpoint/rpa-clusters/ command to display the RP clusters attached to cluster-1 or cluster-2:
VPlexcli:/> ll /recoverpoint/rpa-clusters/

/recoverpoint/rpa-clusters:
RPA Host VPLEX Cluster RPA Site RPA ID RPA Version
----------- ------------- -------- ------ -----------
10.6.210.75 cluster-1 advil RPA 1 3.5(n.109)

2. [ ] Type the ll /recoverpoint/rpa-clusters/ip-address/volumes/ command, where ip-address is the RPA host address displayed in Step 1. [ ], to display the names of RecoverPoint consistency groups using VPLEX volumes. For example:
VPlexcli:/> ll /recoverpoint/rpa-clusters/10.6.210.75/volumes/

/recoverpoint/rpa-clusters/10.6.210.75/volumes:
Name                            RPA Site  RP Type     RP Role  RP Group  VPLEX Group    Capacity
------------------------------  --------  ----------  -------  --------  -------------  --------
RP_Repo_Vol2_vol advil Repository - - RP_RepJournal 10G
demo_prodjournal_1_vol advil Journal - cg1 RP_RepJournal 5G
demo_prodjournal_2_vol advil Journal - cg1 RP_RepJournal 5G
demo_prodjournal_3_vol advil Journal - cg1 RP_RepJournal 5G
.
.
.
3. [ ] Login to the RecoverPoint GUI for each RecoverPoint cluster attached to cluster-1 or cluster-2.
4. [ ] Determine which RecoverPoint consistency groups are impacted by the shutdown of the VPLEX
clusters:
• Inspect the Splitter Properties associated with the VPLEX clusters.
• Compare the serial number of the VPLEX clusters with the Splitter Name in the RecoverPoint GUI.
5. [ ] Record the names of the consistency groups.

IMPORTANT: You will need this information to complete Task 37:(If applicable) Enable
RecoverPoint consistency groups using VPLEX volumes.

6. [ ] Disable each RecoverPoint consistency group associated with the VPLEX splitter on cluster-1 or
cluster-2.

Task 11: (If applicable) Power down RecoverPoint cluster

CAUTION: This step disrupts replication on all volumes that are replicated by the RecoverPoint cluster
that is being powered down. Ensure that you perform this task on the correct RecoverPoint cluster.

1. [ ] Shut down each RecoverPoint cluster that is using a VPLEX virtual volume from cluster-1 or
cluster-2 as its repository volume.
Refer to the RecoverPoint documentation for the procedures to shut down a RecoverPoint cluster.
2. [ ] Record the names of each RecoverPoint cluster that is shut down.

IMPORTANT: You will need this information to complete Task 36:(If applicable) Power up
RecoverPoint.

Task 12: Determine if battery conditioning is in progress on cluster-1 and cluster-2


1. [ ] VPlexcli-1 Type the following command to ensure that no engine on cluster-1 or cluster-2 has
battery conditioning in progress:
VPlexcli:/> battery-conditioning summary

Standby Power Supply Units

Cluster cluster-1

Owner Unit Enabled Manual Cycle Requested In Progress


Previous Result Previous Cycle Next Cycle Schedule
---------- ----------------------- ------- ---------------------- ----------- -------
-------- ---------------------------- ---------------------------- --------
engine-1-1 stand-by-power-supply-a true false false PASS
Mon Mar 05 00:05:03 UTC 2012 Thu Apr 05 00:00:00 UTC 2012 thursday
engine-1-1 stand-by-power-supply-b true false false PASS
Mon Feb 06 12:05:07 UTC 2012 Thu Apr 05 12:00:00 UTC 2012 thursday
engine-1-2 stand-by-power-supply-a true false false PASS
Mon Mar 12 00:05:06 UTC 2012 Thu Apr 12 00:00:00 UTC 2012 thursday
engine-1-2 stand-by-power-supply-b true false false PASS
Mon Mar 12 12:05:15 UTC 2012 Thu Apr 12 12:00:00 UTC 2012 thursday

Cluster cluster-2

Owner Unit Enabled Manual Cycle Requested In Progress


Previous Result Previous Cycle Next Cycle Schedule
---------- ----------------------- ------- ---------------------- ----------- -------
-------- ---------------------------- ---------------------------- --------
engine-2-1 stand-by-power-supply-a true false false PASS
Mon Mar 05 00:05:02 UTC 2012 Mon Apr 02 00:00:00 UTC 2012 monday
engine-2-1 stand-by-power-supply-b true false false PASS
Mon Feb 06 12:05:16 UTC 2012 Mon Apr 02 12:00:00 UTC 2012 monday
engine-2-2 stand-by-power-supply-a true false false PASS
Mon Mar 12 00:04:58 UTC 2012 Mon Apr 09 00:00:00 UTC 2012 monday
engine-2-2 stand-by-power-supply-b true false false PASS
Mon Mar 12 12:05:07 UTC 2012 Mon Apr 09 12:00:00 UTC 2012 monday

No units currently have conditioning cycles in progress.


Units engine-1-1|stand-by-power-supply-a, engine-2-1|stand-by-power-supply-a are
next to be cycled on Mon May 07 00:00:00 UTC 2012.

2. [ ] In the output, confirm that:
• No SPS unit on cluster-1 or cluster-2 is currently undergoing battery conditioning.
If a unit is conditioning, wait for it to complete.
It takes a maximum of 5 minutes for conditioning to complete.
• No SPS units on cluster-1 or cluster-2 will begin a battery conditioning cycle before
conditioning is disabled in the next Task.
If a battery conditioning cycle is scheduled to begin before conditioning is disabled in the next
step, wait for battery conditioning to complete.
Use the date command to display the current time and date.
3. [ ] Make a note of all SPSs in cluster-1 and cluster-2 that have battery conditioning enabled.

IMPORTANT: You will need this information in Task 35:(If applicable) Enable battery-conditioning
on the SPS of cluster-1 and cluster-2.

Task 13: Disable battery-conditioning on cluster-1 and cluster-2, if enabled


1. [ ] VPlexcli-1 Type the following command to disable battery conditioning on all SPS units in
cluster-1 and cluster-2:
VPlexcli:/> battery-conditioning disable -s /engines/*/stand-by-power-supplies/*

Battery conditioning disabled on backup battery units 'engine-1-1|stand-by-power-supply-a, engine-1-1|stand-by-power-supply-b, engine-1-2|stand-by-power-supply-a, engine-1-2|stand-by-power-supply-b, engine-2-1|stand-by-power-supply-a, engine-2-1|stand-by-power-supply-b, engine-2-2|stand-by-power-supply-a, engine-2-2|stand-by-power-supply-b'

2. [ ] VPlexcli-1 Type the battery-conditioning summary command to verify that battery conditioning
is disabled on all SPS units in the cluster.
VPlexcli:/> battery-conditioning summary

Standby Power Supply Units

Cluster cluster-1

Owner Unit Enabled Manual Cycle Requested In Progress


Previous Result Previous Cycle Next Cycle Schedule
---------- ----------------------- ------- ---------------------- ----------- -------
-------- ---------------------------- ---------------------------- --------
engine-1-1 stand-by-power-supply-a false false false PASS
Mon Mar 05 00:05:03 UTC 2012 Thu Apr 05 00:00:00 UTC 2012 thursday
engine-1-1 stand-by-power-supply-b false false false PASS
Mon Feb 06 12:05:07 UTC 2012 Thu Apr 05 12:00:00 UTC 2012 thursday
engine-1-2 stand-by-power-supply-a false false false PASS
Mon Mar 12 00:05:06 UTC 2012 Thu Apr 12 00:00:00 UTC 2012 thursday
engine-1-2 stand-by-power-supply-b false false false PASS
Mon Mar 12 12:05:15 UTC 2012 Thu Apr 12 12:00:00 UTC 2012 thursday

Cluster cluster-2

Owner Unit Enabled Manual Cycle Requested In Progress


Previous Result Previous Cycle Next Cycle Schedule
---------- ----------------------- ------- ---------------------- ----------- -------
-------- ---------------------------- ---------------------------- --------

engine-2-1 stand-by-power-supply-a false false false PASS
Mon Mar 05 00:05:02 UTC 2012 Mon Apr 02 00:00:00 UTC 2012 monday
engine-2-1 stand-by-power-supply-b false false false PASS
Mon Feb 06 12:05:16 UTC 2012 Mon Apr 02 12:00:00 UTC 2012 monday
engine-2-2 stand-by-power-supply-a false false false PASS
Mon Mar 12 00:04:58 UTC 2012 Mon Apr 09 00:00:00 UTC 2012 monday
engine-2-2 stand-by-power-supply-b false false false PASS
Mon Mar 12 12:05:07 UTC 2012 Mon Apr 09 12:00:00 UTC 2012 monday

No units currently have conditioning cycles in progress.


Units engine-1-1|stand-by-power-supply-a, engine-2-1|stand-by-power-supply-a are
next to be cycled on Mon May 07 00:00:00 UTC 2012.

Task 14: Disable call-home on cluster-1 and cluster-2, if enabled


1. [ ] VPlexcli-1 From the VPlexcli prompt, type the following commands to determine if call-home is
enabled:
cd /notifications/call-home

ls

Output example if call-home is enabled:


Attributes:
Name Value
------- -----
enabled true

2. [ ] Record whether call-home is enabled or disabled.

IMPORTANT: You will need this information for Task 44:(If applicable) Enable call-home.

3. [ ] VPlexcli-1 Type the following commands to disable call-home, list its status, and confirm that it
is disabled:
set enabled false --force

ls

Output example if call-home is disabled:


Attributes:
Name Value
------- -----
enabled false

4. [ ] VPlexcli-2 Repeat Steps 1. [ ] through 3. [ ] on cluster-2.

Task 15: Determine the winner cluster for distributed consistency groups and distributed
devices
Note: Include MetroPoint consistency groups while performing this task. Do not include RecoverPoint
consistency groups as they are addressed in Task 18:(If applicable) Determine RecoverPoint enabled
distributed consistency groups that have a different detach-rule.

1. [ ] VPlexcli-1 From the VPlexcli prompt on cluster-1 or cluster-2, type the following commands to
display the consistency groups:

For example,
VPlexcli:/> cd /clusters/cluster-1/consistency-groups
ll
2. [ ] VPlexcli-1 From the VPlexcli prompt on cluster-1 or cluster-2, type the following commands to
display the distributed devices:
cd /distributed-storage/distributed-devices
ll
3. [ ] Determine if one cluster is the winner for most of the consistency groups and distributed devices,
and record the cluster name, because this cluster is shut down last.

Note: The winner cluster is referenced later, in other tasks.

4. [ ] If the winner is equally divided among the clusters, select one cluster and record its name, as
one cluster must be chosen to be shut down last.
5. [ ] Identify the consistency groups having a detach-rule different from the cluster recorded in steps 3
or 4, and record the consistency group names in the following table:

Table 2

Consistency groups with cluster other than the cluster determined in step 3 or 4 as winner or no-
automatic-winner

Consistency Group Name Detach Rule

IMPORTANT: The information in Table 2 is required when you reset the rule-set name in Task 42:(If
applicable) Restore the original rule-sets for consistency groups.

Task 16: Set the winner cluster for all distributed synchronous consistency groups
Note: Include MetroPoint consistency groups while performing this task. Do not include RecoverPoint
consistency groups as they are addressed in Task 18:(If applicable) Determine RecoverPoint enabled
distributed consistency groups that have a different detach-rule.

1. [ ] VPlexcli-1 Make the cluster recorded in the previous task the winner for the consistency groups listed in Table 2, so that the last cluster to be shut down is the winner for all consistency groups.
2. [ ] Type the following commands, where consistency-group_name is the name of a consistency
group in Table 2 and delay is the current delay (a worked example follows step 5).
cd <consistency-group_name>
set-detach-rule winner <cluster noted down from Task 15:> --delay <delay>
cd ..

3. [ ] Repeat Step 2 for every consistency group listed in Table 2.
4. [ ] To verify the rule-set name change, type the following command:
ll /clusters/cluster-1/consistency-groups/

5. [ ] In the output, confirm that all the consistency groups show the correct winner cluster.
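For illustration only, assuming the cluster recorded in Task 15: is cluster-2 and Table 2 lists a hypothetical consistency group named cg_app1 with a 5s delay, the sequence from step 2 would look like this, following the command form shown above:

VPlexcli:/clusters/cluster-1/consistency-groups> cd cg_app1
VPlexcli:/clusters/cluster-1/consistency-groups/cg_app1> set-detach-rule winner cluster-2 --delay 5s
VPlexcli:/clusters/cluster-1/consistency-groups/cg_app1> cd ..
VPlexcli:/clusters/cluster-1/consistency-groups> ll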

Task 17: Set the winner cluster for all distributed devices outside consistency groups
1. [ ] VPlexcli-1 Type the following commands to display the distributed devices:
cd /distributed-storage/distributed-devices
ll
2. [ ] Record the name and rule-set of all distributed devices, where the rule-set configures any cluster
other than the winner cluster selected in Task 15:Determine the winner cluster for distributed
consistency groups and distributed devices.

Table 3

List of distributed devices having rule-set other than the winner cluster

Distributed Device Name Rule-set Name

IMPORTANT: The information is required when you reset the rule-set in Task 43:(If applicable) Restore
the original rule-sets for distributed devices.

3. [ ] VPlexcli-1 Change the rule-set for distributed devices (a worked example follows step 5):

Note: You can change the rule-set for all distributed devices, or for selected distributed devices.

To change the rule-set for all distributed devices, type the following command:
set *::rule-set-name <cluster noted down from Task 15:>-detaches
To change the rule-set for selected distributed devices, type the following command for each
distributed_device_name listed in Table 3:
cd <distributed_device_name>

set rule-set-name <cluster noted down from Task 15:>-detaches

cd ..

4. [ ] To verify the rule-set name change, type the following command:


ll /distributed-storage/distributed-devices
5. [ ] In the output, confirm that all distributed devices show the correct winner cluster.
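Similarly, for illustration only, assuming the cluster recorded in Task 15: is cluster-2 and Table 3 lists a hypothetical distributed device named DR1_app2_dev1, the per-device form from step 3 would look like:

VPlexcli:/distributed-storage/distributed-devices> cd DR1_app2_dev1
VPlexcli:/distributed-storage/distributed-devices/DR1_app2_dev1> set rule-set-name cluster-2-detaches
VPlexcli:/distributed-storage/distributed-devices/DR1_app2_dev1> cd ..
VPlexcli:/distributed-storage/distributed-devices> ll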

Task 18: (If applicable) Determine RecoverPoint enabled distributed consistency groups that
have a different detach-rule
Note: Skip this task for any RecoverPoint enabled consistency group that belongs to a MetroPoint
consistency group.

1. [ ] VPlexcli-1 From the VPlexcli prompt, type the following command to display the consistency
groups that are RecoverPoint enabled:
VPlexcli:/> ls -p /clusters/cluster-1/consistency-groups/$d where $d::recoverpoint-enabled \== true
/clusters/cluster-1/consistency-groups/Aleve_RPC1_Local_Journal_A:
Attributes:
Name Value
-------------------- ---------------------------------------------------------
active-clusters []
cache-mode synchronous
detach-rule winner cluster-1 after 5s
operational-status [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{
summary:: ok, details:: [] })]
passive-clusters []
recoverpoint-enabled true
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes Aleve_RPC1_local_Journal_A_0000_vol,
Aleve_RPC1_local_Journal_A_0001_vol,
Aleve_RPC1_local_Journal_A_0002_vol,
Aleve_RPC1_local_Journal_A_0003_vol,
Aleve_RPC1_local_Journal_A_0004_vol,
Aleve_RPC1_local_Journal_A_0005_vol,
Aleve_RPC1_local_Journal_A_0006_vol,
Aleve_RPC1_local_Journal_A_0007_vol,
Aleve_RPC1_local_Journal_A_0008_vol,
Aleve_RPC1_local_Journal_A_0009_vol, ... (45 total)
visibility [cluster-1, cluster-2]

Contexts:
advanced recoverpoint
.
.
.
2. [ ] List the names of all the RecoverPoint enabled distributed consistency groups that have a
detach-rule that is different from the winner cluster selected in Task 15.

IMPORTANT: The information is required to manually resume the consistency group on the winner
cluster in Task 21:

Task 19: (If applicable) Disable VPLEX Witness


1. [ ] VPlexcli-1 From the VPlexcli prompt, type the following commands to determine if VPLEX
Witness is enabled:
cd /cluster-witness
ls

Output example when VPLEX Witness is enabled:


VPlexcli:/cluster-witness> ls
Attributes:
Name Value
------------- -------------

admin-state enabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

2. [ ] Record whether VPLEX Witness is enabled or disabled.

IMPORTANT: You will need this information for Task 40:(If applicable) Enable VPLEX Witness.

3. [ ] VPlexcli-1 If VPLEX Witness is enabled, type the following command to disable it:
cluster-witness disable --force

4. [ ] Type the following command to verify that VPLEX Witness is disabled:


ls

Output example if VPLEX Witness is disabled:


VPlexcli:/cluster-witness> ls
Attributes:
Name Value
------------- -------------
admin-state disabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

Task 20: Shut down the VPLEX firmware on the cluster that is not the winning cluster

CAUTION: During the cluster shutdown procedure, before executing the shutdown command, DO NOT
DISABLE the WAN COM on any of the VPLEX directors (by disabling one or more directors' WAN COM
ports, or by disabling the external WAN COM links via the WAN COM switches). Disabling the WAN COM
before executing the 'cluster shutdown' command triggers the VPLEX failure recovery process for
volumes, which can result in the 'cluster shutdown' command hanging. Disabling the WAN COM before
the cluster shutdown has not been tested and is not supported.

CAUTION: Ensure that the shutdown command is run in the CLI window for the cluster that is intended
to be shut down.

1. [ ] VPlexcli-1 Assuming the cluster other than the one noted down in Task 15: is cluster-1, type the following commands, and respond to the confirmation prompts as shown in the example, to shut down the firmware on cluster-1:
cd /clusters/cluster-1
cluster shutdown

For example:
VPlexcli:/clusters/cluster-1> cluster shutdown
Warning: Shutting down a VPlex cluster may cause data unavailability. Please
refer to the VPlex documentation for the recommended procedure for shutting
down a cluster. To show that you understand the impact, enter 'shutdown':
shutdown

You have chosen to shutdown 'cluster-1'. To confirm, enter 'cluster-1':
cluster-1

Status Description
-------- -----------------
Started. Shutdown started.

If the cluster shutdown command gets stuck, contact EMC Customer Support for assistance.

2. [ ] Wait for 3 – 5 minutes for the shutdown to complete.


3. [ ] VPlexcli-1 Type the following commands to display the cluster status:
cd /clusters
cluster status

4. [ ] Assuming the cluster other than the one noted down from Task 15: is cluster-1, confirm in the output that the operational-status for cluster-1 is not-running.
VPlexcli:/> cluster status

Cluster cluster-1
operational-status: not-running
transitioning-indications:
transitioning-progress:
health-state: unknown
health-indications:
local-com: failed to validate local-com: Firmware command
error.
communication error recently.

Cluster cluster-2
operational-status: degraded
transitioning-indications: suspended exports,suspended volumes
transitioning-progress:
health-state: minor-failure
health-indications: 37 suspended Devices
6 unhealthy Devices or storage-volumes
storage-volume unreachable
local-com: ok

5. [ ] VPlexcli-1 Type the following command to display the cluster summary:


cluster summary

Output Summary:

Figure 2 Example cluster summary output

6. [ ] In the output, confirm that the cluster other than the one noted down from Task 15: is down. If that cluster is down, Connected is false; Expelled, Operational Status, and Health State show '-'; and the cluster noted down from Task 15: is in its own island.

Task 21: (If applicable) Manually resume any suspended RecoverPoint enabled consistency groups on the winning cluster
1. [ ] VPlexcli-1 If any RecoverPoint enabled consistency groups were identified in Task 18:, execute the following CLI commands for each of those consistency groups to make the cluster noted down in Task 15: the winner:
cd /clusters/cluster-1/consistency-groups
consistency-group choose-winner -c <cluster noted down from Task 15> -g <consistency_group_name>

VPlexcli:/clusters/cluster-1/consistency-groups> choose-winner -c <cluster noted down from Task 15> -g async_sC12_vC2_aCW_CHM

WARNING: This can cause data divergence and lead to data loss. Ensure the other
cluster is not serving I/O for this consistency group before continuing. Continue?
(Yes/No) Yes

Type the following command to ensure none of the above consistency groups require resumption:
consistency-group summary

Look for consistency groups with requires-resume-at-loser.

Task 22: Shut down the VPLEX firmware on the remaining cluster

CAUTION: During the cluster shutdown procedure, DO NOT DISABLE wan-com ports on any of the VPLEX
directors. Disabling wan-com ports on VPLEX directors before executing the 'cluster shutdown' command
triggers the failure recovery process for DR volumes and logging volumes, which can cause the 'cluster
shutdown' command to hang and, in turn, result in stuck I/Os.

1. [ ] Repeat Steps 1. [ ] to 3. [ ] of Task 20: for the cluster noted down from Task 15:.
Assuming cluster-2 is the cluster noted down from Task 15:, type the following commands:
cd /clusters/cluster-2
cluster shutdown
For example:
VPlexcli:/clusters/cluster-2> cluster shutdown
Warning: Shutting down a VPlex cluster may cause data unavailability. Please refer
to the VPlex documentation for the recommended procedure for shutting down a
cluster. To show that you understand the impact, enter 'shutdown': shutdown

You have chosen to shutdown 'cluster-2'. To confirm, enter 'cluster-2': cluster-2

Status Description
-------- -----------------
Started. Shutdown started.

If the cluster shutdown command is stuck, contact EMC Customer Support for assistance.
2. [ ] Confirm that the operational-status for this cluster is also not-running. If the cluster has not shut
down, contact EMC Customer Support for assistance.
3. [ ] Type the following command to display the cluster summary:
cluster summary

Figure 3 Example cluster summary output

Task 23: Shut down the VPLEX directors and optional COM switches on cluster-1
CAUTION: Ensure that you are at the LINUX shell prompt for the cluster that you want to shut down.

1. [ ] VPlexcli-1 From the VPlexcli prompt, type the following command:


exit

2. [ ] Linux shell-1 From the shell prompt, type the following commands to shut down director 1-1-A:

Note: In the first command, the l in -l is a lowercase L.

ssh -l root 128.221.252.35

shutdown -P "now"

director-1-1-a:~ # shutdown -P "now"

Broadcast message from root (pts/0) (Fri Nov 18 20:04:33 2011):

The system is going down to maintenance mode NOW!

3. [ ] Linux shell-1 Repeat Step 2. [ ] for each remaining director in cluster 1, substituting the
applicable ssh command shown in the following table:

Table 4 ssh commands to connect to directors

Cluster size                             Director  ssh command                  Checkbox

Single-engine, Dual-engine, Quad-engine  1-1-A     ssh -l root 128.221.252.35   [X]
                                         1-1-B     ssh -l root 128.221.252.36   [ ]
Dual-engine, Quad-engine                 1-2-A     ssh -l root 128.221.252.37   [ ]
                                         1-2-B     ssh -l root 128.221.252.38   [ ]
Quad-engine                              1-3-A     ssh -l root 128.221.252.39   [ ]
                                         1-3-B     ssh -l root 128.221.252.40   [ ]
                                         1-4-A     ssh -l root 128.221.252.41   [ ]
                                         1-4-B     ssh -l root 128.221.252.42   [ ]

4. [ ] Linux shell-1 Type the following command, and verify that director 1-1-A is down:
ping -b 128.221.252.35

Note: A director can take up to four minutes to shut down completely.

Output example if the director is down:


PING 128.221.252.35 (128.221.252.35) 56(84) bytes of data.
From 128.221.252.33 icmp_seq=1 Destination Host Unreachable
From 128.221.252.33 icmp_seq=2 Destination Host Unreachable

5. [ ] Linux shell-1 Repeat Step 4. [ ] for each remaining director, substituting the applicable IP
address shown in Step 3. [ ].
6. [ ] The next step varies depending on the number of engines in the cluster:

• If the cluster has only one engine, skip to Task 24:Shut down the VPLEX directors and optional
COM switches on cluster-2
• If the cluster has multiple engines, proceed to the following steps to shut down the Fibre Channel
COM switches:
a. Linux shell-1 Type the following command to connect to switch A:
telnet 128.221.252.34

b. Login with username service.


c. switch interface On the switch’s command line, type the following command to shut down the
switch:
sysshutdown

-----------------------------------------------------------------
FC-Switch-A:service> sysshutdown
This command will shutdown the operating systems on your switch.
You are required to power-cycle the switch in order to restore operation.
Are you sure you want to shutdown the switch [y/n]?y

d. Type y and press Enter at the prompt.

Broadcast message from root (pts/0) Fri Nov 18 20:09:11 2011...

The system is going down for system halt NOW !!


FC-Switch-A:service> Connection closed by foreign host.

e. switch interface Type the following command to terminate the switch session:
exit

f. Linux shell-1 Repeat Steps a through e for switch B, substituting 128.221.253.34 for the address
in Step a.
g. Linux shell-1 Type the following command to ping switch A:
ping -b 128.221.252.34

h. In the output, confirm that switch A is down.


Output example if the switch is down:
PING 128.221.252.34 (128.221.252.34) 56(84) bytes of data.
From 128.221.252.33 icmp_seq=1 Destination Host Unreachable
From 128.221.252.33 icmp_seq=2 Destination Host Unreachable

i. Linux shell-1 Type the following command to ping switch B:


ping -b 128.221.253.34

j. In the output, confirm that switch B is down.

Task 24: Shut down the VPLEX directors and optional COM switches on cluster-2

CAUTION: Ensure that you are at the LINUX shell prompt for the cluster that you want to shut down.

1. [ ] VPlexcli-2 From the VPlexcli prompt, type the following command:


exit

2. [ ] Linux shell-2 From the shell prompt, type the following commands to shut down director 2-1-A:

Note: In the first command, the l in -l is a lowercase L.

ssh -l root 128.221.252.67

shutdown -P "now"

director-2-1-a:~ # shutdown -P "now"

Broadcast message from root (pts/0) (Fri Nov 18 20:04:33 2011):

The system is going down to maintenance mode NOW!

3. [ ] Linux shell-2 Repeat Step 2. [ ] for each remaining director in cluster 2, substituting the
applicable ssh command shown in the following table:

Table 5 ssh commands to connect to directors

Cluster size                             Director  ssh command                  Checkbox

Single-engine, Dual-engine, Quad-engine  2-1-A     ssh -l root 128.221.252.67   [X]
                                         2-1-B     ssh -l root 128.221.252.68   [ ]
Dual-engine, Quad-engine                 2-2-A     ssh -l root 128.221.252.69   [ ]
                                         2-2-B     ssh -l root 128.221.252.70   [ ]
Quad-engine                              2-3-A     ssh -l root 128.221.252.71   [ ]
                                         2-3-B     ssh -l root 128.221.252.72   [ ]
                                         2-4-A     ssh -l root 128.221.252.73   [ ]
                                         2-4-B     ssh -l root 128.221.252.74   [ ]

4. [ ] Linux shell-2 Type the following command, and verify that director 2-1-A is down:
ping -b 128.221.252.67

Note: A director can take up to four minutes to shut down completely.

Output example if the director is down:


PING 128.221.252.67 (128.221.252.67) 56(84) bytes of data.
From 128.221.252.65 icmp_seq=1 Destination Host Unreachable
From 128.221.252.65 icmp_seq=2 Destination Host Unreachable

5. [ ] Linux shell-2 Repeat Step 4. [ ] for each remaining director, substituting the applicable IP
address shown in Step 3. [ ].
6. [ ] The next step varies depending on the number of engines in the cluster:

• If the cluster has only one engine, skip to Task 25:Shut down the management server on cluster-1
and cluster-2.
• If the cluster has multiple engines, proceed to the following steps to shut down the Fibre Channel
COM switches:
a. Linux shell-2 Type the following command to connect to switch A:
telnet 128.221.252.66

b. Login with username service.


c. switch interface On the switch’s command line, type the following command to shut down the
switch:
sysshutdown

-----------------------------------------------------------------
FC-Switch-A:service> sysshutdown
This command will shutdown the operating systems on your switch.
You are required to power-cycle the switch in order to restore operation.
Are you sure you want to shutdown the switch [y/n]?y

d. Type y and press Enter at the prompt.

Broadcast message from root (pts/0) Fri Nov 18 20:09:11 2011...

The system is going down for system halt NOW !!


FC-Switch-A:service> Connection closed by foreign host.

e. switch interface Type the following command to terminate the switch session:
exit

f. Linux shell-2 Repeat Steps a through e for switch B, substituting 128.221.253.66 for the address
in Step a.
g. Linux shell-2 Type the following command to ping switch A:
ping -b 128.221.252.66

h. In the output, confirm that switch A is down.


Output example if the switch is down:
PING 128.221.252.66 (128.221.252.66) 56(84) bytes of data.
From 128.221.252.65 icmp_seq=1 Destination Host Unreachable
From 128.221.252.65 icmp_seq=2 Destination Host Unreachable

i. Linux shell-2 Type the following command to ping switch B:


ping -b 128.221.253.66

j. In the output, confirm that switch B is down.

Task 25: Shut down the management server on cluster-1 and cluster-2

CAUTION: Shut down the management server for cluster-2 before shutting down the management
server on cluster-1.

1. [ ] Linux shell-2 Type the following command to shut down the management server on cluster-2:
sudo /sbin/shutdown 0

Broadcast message from root (pts/1) (Tue Feb 8 18:12:30 2010):

The system is going down to maintenance mode NOW!

2. [ ] Linux shell-1 Repeat Step 1. [ ] on the cluster-1 management server.

Task 26: Shut down power to the VPLEX cabinet of cluster-1 and cluster-2
1. [ ] Switch both PDP power switches (shown in Figure 4) in cluster-1 and cluster-2 OFF.

Figure 4 Cabinet PDP power switches (each PDP power switch is fed from a 30 A, 220 VAC power source)

2. [ ] The SPS continues to supply power to the engines for several minutes.
When all LEDs are off, the cluster shutdown is complete.
The management server LEDs are also off.
3. [ ] Remove the faceplate in front of the management server, and confirm that the Power LED
(shown in the following Figure) is off.

Figure 5 Management server LEDs

4. [ ] The following step varies depending on when you intend to re-start the cluster:
• If you intend to bring up the cluster within a day, set the faceplate aside.
• If the cluster will be down for an extended time, reinstall the faceplate.

Task 27: (If applicable) Exit the SSH sessions


VPlexcli may have automatically disconnected. If it has not disconnected, perform the following steps to exit from VPlexcli and the Linux shell.

1. [ ] VPlexcli-1 VPlexcli-2 If you are still connected, type the following command on both clusters to exit the VPlexcli:
exit

2. [ ] Linux shell-1 Linux shell-2 From the shell prompt, type the following command on both clusters
to exit the shell session:
exit

Task 28: (If applicable) Restore your laptop settings


1. [ ] If you changed or disabled any settings on your laptop before starting this procedure, restore the
settings.

Task 29: (If applicable) Restore the default cabling arrangement


If you used a service laptop to access the management server, use the steps in this task to restore the
default cable arrangement.
The steps to restore the cabling vary depending on whether VPLEX is installed in an EMC cabinet or non-
EMC cabinet:
• EMC cabinet:

a. Disconnect the red service cable from the Ethernet port on your laptop, and remove your laptop
from the laptop tray.
b. Slide the cable back through the cable tie until only one or two inches protrude through the tie, and
then tighten the cable tie.
c. Slide the laptop tray back into the cabinet.
d. Replace the filler panel at the U23 position.
e. If you used the cabinet’s spare Velcro straps to secure any cables out of the way temporarily,
return the straps to the cabinet.
• Non-EMC cabinet:

a. Disconnect the red service cable from your laptop.


b. Coil the cable into a loose loop and hang it in the cabinet. (Leave the other end connected to the
VPLEX management server.)

Phase 2: Perform maintenance activities on cluster-1 and cluster-2

CAUTION: This document assumes that the existing SAN components, and the access to them from VPLEX components, have not changed as part of the maintenance activity. If changes have been made, contact EMC Customer Support to plan this activity.

Perform the activity that required the shutdown of cluster-1 and cluster-2.

Phase 3: Restart cluster-1 and cluster-2
This procedure describes the tasks to restart both clusters in a VS2 VPLEX Metro after the clusters have
been shut down.
This procedure assumes that the clusters were shut down following the tasks described earlier in this
document, and that no component power switches or PDU circuit breakers were switched OFF.

Order to restart hosts, clusters, and other components

CAUTION: If you are bringing up ALL the components in the SAN, bring them up in the order listed below. Ensure that each component is fully up and running before proceeding to the next component, and wait at least 20 seconds before starting each component.

SAN components:
1. Storage arrays that provide the I/O disks and the meta-volume disks to VPLEX.
2. Front-end and back-end Fibre Channel switches.
VPLEX components:
3. Components in the VPLEX cabinet, as described in this document.
4. (If applicable) RecoverPoint
5. Hosts connected to the VPLEX cluster.

Task 30: Bring up the VPLEX components on cluster-1 and cluster-2
1. [ ] Switch both lower PDPs (shown in Figure 6) in cluster-1 and cluster-2 ON.

Notes:
The upper PDUs are not used in a single-engine configuration.
The upper PDUs are installed upside-down from the lower PDUs.

[Figure body not reproduced: rear of the cabinet showing the PDU circuit breakers, the two PDP power switches, and their 30 A, 220 VAC power source connections]

Figure 6 Cabinet PDP and PDU power switches

2. [ ] Verify that the PDU circuit breakers are ON for all receptacle groups that have power cables
connected to them.
3. [ ] Verify that the LED status on each SPS module is as shown in Figure 7.

[Figure body not reproduced: SPS LED labels: On-line Enabled (LED on) or On-line Charging (LED flashing), On-Battery, Replace Battery, Internal Check]

Figure 7 SPS LEDs

4. [ ] Verify that the green Power LED is as shown in Figure 8.

Note: It may take 5-6 minutes for the LEDs to change to green.

[Figure body not reproduced: engine LEDs with the Power LED On and the other two LEDs Off]

Figure 8 LEDs on engine

5. [ ] Dual-engine or quad-engine cluster only: Verify that the Online LED on each UPS (shown in
Figure 9) is illuminated (green), and that none of the other three LEDs on the UPS is illuminated.
If the Online LED on a UPS is not illuminated, push the UPS power button, and verify that the LEDs
are as described above before proceeding to the next step.

[Figure body not reproduced: UPS front view showing the Online, Overload, On battery, and Replace battery LEDs; UPS rear view showing the power button and circuit breakers]

Figure 9 UPS, front view

6. [ ] Dual-engine or quad-engine cluster only: Verify that no UPS circuit breaker has triggered. If
either circuit breaker on a UPS has triggered, press it to reseat it.
7. [ ] If the faceplate in front of the management server is installed, remove it.
8. [ ] On the front of the management server (Figure 10), verify that the power LED is illuminated. If
the LED is not on, press the power button.

[Figure body not reproduced: front of the management server showing the power button and Power LED]

Figure 10 Management server power button and LEDs

9. [ ] Wait for 10 minutes for cluster-2 to complete booting.


10. [ ] Repeat Steps 1. [ ] through 8. [ ] on cluster-1.

Task 31: Connect to the management server on cluster-1

CAUTION: If any step you perform creates an error message or fails to give you the expected result,
consult the troubleshooting information in the generator, or contact the EMC Support Center. Do not
proceed until the issue has been resolved.

1. [ ] Using PuTTY (version 0.60 or later) or a similar SSH client, connect to the public IP address of the management server on cluster-1, and log in as user service.
Refer to About connecting to VPLEX management servers on page 6 for the options and steps to connect to the management server.
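If you are not using PuTTY, any standard SSH client works. For example, from a Linux workstation (the address below is a placeholder for the management server's public IP):

ssh service@<management-server-public-ip>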

Task 32: Dual-engine or quad-engine clusters only: Verify COM switch health
If the cluster is dual-engine or quad-engine, verify the health of the Fibre Channel COM switches as
follows:

1. [ ] Linux shell-1 At the shell prompt, type the following command to connect to switch A:
telnet <switch_address>

where <switch_address> is identified in Table 6.

Table 6 Fibre Channel COM switch addresses

Switch Address in cluster-1 Address in cluster-2

A 128.221.252.34 128.221.252.66

B 128.221.253.34 128.221.253.66
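For example, to connect to switch A in cluster-1:

telnet 128.221.252.34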

2. [ ] Log in with the username service.

3. [ ] Type the following command to display the Fabric OS version:
version

Output example (partial):


FC-Switch-A:admin> version
Kernel: 2.6.14.2
Fabric OS: v6.3.2b
Made on: Wed Nov 10 23:50:28 2010
Flash: Wed Feb 1 19:40:37 2012
BootProm: 1.0.9

4. [ ] Switch interface Verify that all components are in a healthy state:

If the switch version is 7.4.2a or later, type this command:


mapsdb --show

Output example:
FC-Switch-A:service> mapsdb --show

1 Dashboard Information:
=======================

DB start time: Tue Sep 5 15:22:42 2017


Active policy: dflt_base_policy
Configured Notifications: None
Quarantined Ports : None

2 Switch Health Report:


=======================

Current Switch Policy Status: HEALTHY

3.1 Summary Report:


===================

Category |Today |Last 7 days |


--------------------------------------------------------------------------------
Fru Health |No Errors |No Errors |
Switch Resource |No Errors |No Errors |

3.2 Rules Affecting Health:


===========================

Category(Rule Count)|RepeatCount|Rule Name |Execution Time


|Object |Triggered Value(Units)|
--------------------------------------------------------------------------------
----------------------------------------

MAPS is not Licensed. MAPS extended features are available ONLY with License

If the switch version is earlier than 7.4.2a, type this command:


switchstatusshow

Output example:

Switch Health Report Report time: 01/18/2010 10:09:28 PM
Switch Name: FC-SWITCH-A
IP address: 128.221.252.34
SwitchState: HEALTHY
Duration: 123:10

Power supplies monitor HEALTHY


Temperatures monitor HEALTHY
Fans monitor HEALTHY
Flash monitor HEALTHY
Marginal ports monitor HEALTHY
Faulty ports monitor HEALTHY
Missing SFPs monitor HEALTHY
Fabric Watch is not licensed
Detailed port information is not included

5. [ ] switch interface Type the following command to terminate the switch session:
exit

6. [ ] Repeat Steps 1. [ ] through 5. [ ] for switch B.


7. [ ] From the Linux shell prompt, type the following command to connect to the VPlexcli:
vplexcli

8. [ ] Log in as user service.


9. [ ] If both clusters are multiple-engine, repeat Steps 1. [ ] through 8. [ ] on cluster-2.

Task 33: (If applicable) Change VPLEX Witness Server IP address and/or management server
IP address of cluster-1 and/or cluster-2

CAUTION: Do not enable VPLEX Witness during this task. VPLEX Witness is enabled in Task 40: (If
applicable) Enable VPLEX Witness.

The steps to complete this task vary depending on whether IP addresses have changed for the cluster-1
and/or cluster-2 management server and/or the VPLEX Cluster Witness Server VM.
• If VPLEX Witness is not deployed and only the IP address of the management server for cluster-1 and/or cluster-2 has changed, perform the Change the management server address procedure for VPLEX Metro or VPLEX Geo in the generator.
• If VPLEX Witness is deployed, and the IP address of the Cluster Witness Server VM has changed,
perform the procedure “Changing the Cluster Witness Server’s public IP address“ in the generator.
• If VPLEX witness is deployed, and the IP address of the management server for cluster-1 and/or
cluster-2 has changed, perform “Changing the management server IP address and reconfiguring the
three-way VPN between the management servers and the Cluster Witness Server” in the generator.

Task 34: Verify the VPN connectivity


1. [ ] VPlexcli-1 At the VPlexcli prompt, type the following command to confirm that the VPN tunnel has been established, and that the local and remote directors are reachable from management server-1:
vpn status

2. [ ] In the output (shown in the following example), confirm that IPSEC is UP:
VPlexcli:/> vpn status
Verifying the VPN status between the management servers...
IPSEC is UP
Remote Management Server at IP Address 10.31.25.27 is reachable
Remote Internal Gateway addresses are reachable

Note: If VPLEX Witness is deployed, vpn status also displays connectivity to the VPLEX Witness
Server VM.

3. [ ] VPlexcli-2 Repeat Step 1. [ ] on cluster-2.

Task 35: (If applicable) Enable battery-conditioning on the SPS of cluster-1 and cluster-2
Re-enable battery conditioning on the SPSs where it was disabled in Task 13: Disable battery-conditioning on cluster-1 and cluster-2, if enabled.
1. [ ] VPlexcli-1 This step varies depending on whether all the SPS units on cluster-1 and cluster-2 had battery conditioning disabled in Task 13:
• If all SPS units in cluster-1 and cluster-2 had battery conditioning disabled in Task 13, type the following command to re-enable battery conditioning on them:
VPlexcli:/> battery-conditioning enable -s /engines/*/stand-by-power-supplies/*

Battery conditioning enabled on backup battery units 'engine-2-1|stand-by-power-supply-a, engine-2-1|stand-by-power-supply-b, engine-2-2|stand-by-power-supply-a, engine-2-2|stand-by-power-supply-b'.
.
.
.
• If only some of the SPS units in cluster-1 and cluster-2 had battery conditioning disabled in Task 13, type the following command for each SPS where it was disabled:
battery-conditioning enable -s <sps context>

For example:
VPlexcli:/> battery-conditioning enable -s /engines/engine-2-1/stand-by-power-supplies/stand-by-power-supply-a/

Battery conditioning enabled on backup battery units 'engine-2-1|stand-by-power-supply-a'.

2. [ ] Type the battery-conditioning summary command, and confirm that battery conditioning has been re-enabled for all SPS units in both cluster-1 and cluster-2 that had battery conditioning enabled before Task 13.
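The command as typed at the VPlexcli prompt:

battery-conditioning summary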

Task 36: (If applicable) Power up RecoverPoint cluster


If a RecoverPoint cluster that uses VPLEX virtual volumes from cluster-1 or cluster-2 for its repository volume was powered down in Task 11: (If applicable) Power down RecoverPoint, power up RecoverPoint.
Refer to the procedures in the RecoverPoint documentation.

Task 37: (If applicable) Enable RecoverPoint consistency groups using VPLEX volumes.
If RecoverPoint consistency groups were disabled in Task 10: (If applicable) Disable RecoverPoint consistency groups using VPLEX volumes, perform this task to enable those consistency groups.
Refer to the procedures in the RecoverPoint documentation.

1. [ ] Log in to the RecoverPoint GUI for each RecoverPoint cluster attached to VPLEX cluster-1 or cluster-2.
2. [ ] Enable each RecoverPoint consistency group that was disabled in Task 10: (If applicable) Disable RecoverPoint consistency groups using VPLEX volumes.
3. [ ] Repeat these steps for every RecoverPoint cluster attached to VPLEX cluster-1 or cluster-2.

Task 38: Verify the health of the clusters


1. [ ] VPlexcli-1 Type the following command on cluster-1 and confirm that the operational and health states appear as ok:
health-check

2. [ ] VPlexcli-2 Repeat Step 1. [ ] on cluster-2.

Task 39: (If applicable) Resume volumes at cluster-1 and cluster-2


If consistency groups, or distributed storage that is not in a consistency group, have auto-resume set to false, those volumes will not automatically resume when cluster-1 and cluster-2 are restored.
Follow these steps to resume volumes that do not have auto-resume set to true:
1. [ ] VPlexcli-1 From the VPlexcli prompt on cluster-1, type the following command to display whether any consistency groups require resumption:
consistency-group summary

Look for any consistency groups with requires-resume-at-loser.


2. [ ] VPlexcli-1 Type the following commands for each consistency group that has requires-resume-at-loser:
cd /clusters/cluster-1/
consistency-group resume-at-loser -c <cluster> -g <consistency-group>
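For example, for a hypothetical consistency group named cg_prod that reports requires-resume-at-loser at cluster-1:

cd /clusters/cluster-1/
consistency-group resume-at-loser -c cluster-1 -g cg_prod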

3. [ ] VPlexcli-1 Type the following command to display whether any volumes outside of a consistency group require resumption on cluster-1:
ll /clusters/cluster-1/virtual-volumes/

4. [ ] VPlexcli-1 Type the following command to resume at the loser cluster for all distributed volumes not in consistency groups:
device resume-link-up -f -a

5. [ ] VPlexcli-1 Repeat Steps 3. [ ] and 4. [ ] for cluster-2.

Task 40: (If applicable) Enable VPLEX Witness


If VPLEX Witness is deployed, and was disabled in Task 19: (If applicable) Disable VPLEX Witness, complete this task to re-enable VPLEX Witness.

1. [ ] VPlexcli-1 Type the following commands to enable VPLEX Witness on cluster-1 and confirm that it is enabled:
cluster-witness enable

cd /cluster-witness

ls

Output example if VPLEX witness is enabled:


VPlexcli:/cluster-witness> ls
Attributes:
Name Value
------------- -------------
admin-state enabled
private-ip-address 128.221.254.3
public-ip-address 10.31.25.45

Contexts:
Components

2. [ ] Confirm VPLEX witness is in contact with both clusters:


VPlexcli:/> ll cluster-witness/components/
/cluster-witness/components:
Name ID Admin State Operational State Mgmt Connectivity
--------- -- ----------- ------------------- -----------------
cluster-1 1 enabled in-contact ok
cluster-2 2 enabled in-contact ok
server - enabled clusters-in-contact ok

Confirm ‘Admin State’ is ‘enabled’ and ‘Mgmt Connectivity’ is ‘ok’ for all three components.
Confirm ‘Operational State’ is ‘in-contact’ for clusters and ‘clusters-in-contact’ for server.

Task 41: Check rebuild status and wait for rebuilds to complete

Note: Rebuilds may take some time to complete while I/O is in progress. For more information on
rebuilds, please check the VPLEX Administration Guide -> Data migration -> About rebuilds section.

1. [ ] VPlexcli-1 Type the rebuild status command and verify that all rebuilds are complete:
rebuild status

If rebuilds are complete, the command will report the following output:

Note: If migrations are ongoing, they are displayed under the rebuild status. Ignore the status of
migration jobs in the output.

Global rebuilds:
No active global rebuilds.
Local rebuilds:
No active local rebuilds

2. [ ] VPlexcli-2 Repeat Step 1. [ ] on cluster-2.

Task 42: (If applicable) Restore the original rule-sets for consistency groups
If you changed the rule-sets for synchronous consistency groups in Task 16 to make the cluster selected in Task 15 the winner for all distributed synchronous consistency groups, perform the following steps to change the rule-sets back to their original values.

Note: Skip this task if you do not want to change the rule-sets.

See Table 2 for the list of consistency groups.

1. [ ] To restore the original rule-sets, type the following commands, where consistency-group_name is the name of a consistency group, original rule-set is the rule-set listed in Table 2, and delay is the delay set for the consistency group:
cd <consistency-group_name>
set-detach-rule <original rule-set> --delay <delay>
cd ..
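For example, for a hypothetical consistency group named cg_prod_1 whose original detach rule recorded in Table 2 is winner cluster-1 with a 5-second delay, the sequence might look like the following (the group name and values are placeholders, and the exact set-detach-rule options can vary by GeoSynchrony release; confirm the syntax in the VPLEX CLI Guide):

cd /clusters/cluster-1/consistency-groups/cg_prod_1
set-detach-rule winner --cluster cluster-1 --delay 5s
cd ..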
2. [ ] Repeat Step 1 for every consistency group listed in Table 2.
3. [ ] To verify the rule-set name change, type the following command:
ll /clusters/cluster-1/consistency-groups/
4. [ ] In the output, confirm that all the consistency groups listed in Table 2 are restored to their original
detach rules.

Task 43: (If applicable) Restore the original rule-sets for distributed devices
If you changed the rule-set name for distributed devices in Task 17 to make the cluster selected in Task 15 the winner for all distributed devices outside consistency groups, perform the following steps to change the rule-set back to its original value.

1. [ ] Change the rule-set of distributed devices

Note: You can change the rule-set for all distributed devices, or for selected distributed devices.

To change the rule-set for distributed devices, type the following command from the /distributed-
storage/distributed-devices context:
set *::rule-set-name <rule-set-name>
To change the rule-set for selected distributed devices, type the following commands, where
distributed_device_name is the name of a device listed in Table 3:
cd <distributed_device_name>
set rule-set-name <rule-set-name>
cd ..
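For example, to restore a hypothetical distributed device named dd_data_01 to its original rule-set (the device name and rule-set name are placeholders; use the rule-set name recorded in Table 3):

cd /distributed-storage/distributed-devices/dd_data_01
set rule-set-name cluster-1-detaches
cd ..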
2. [ ] To verify the rule-set name changes, type the following command:
ll /distributed-storage/distributed-devices
3. [ ] In the output, confirm that all the distributed devices listed in Table 3 are restored to the original detach rule.

Task 44: (If applicable) Enable call-home


1. [ ] VPlexcli-1 If call-home was disabled in Task 14: Disable call-home on cluster-1 and cluster-2, if enabled, type the following commands to enable call-home on cluster-1 and confirm that it is enabled:
cd /notifications/call-home

set enabled true

ls

Output example if call-home is enabled:
Attributes:
Name Value
------- -----
enabled true

2. [ ] VPlexcli-2 Repeat Step 1. [ ] on cluster-2 if call-home was disabled on cluster-2 during shutdown in Task 14: Disable call-home on cluster-1 and cluster-2, if enabled.

Task 45: Collect diagnostics


1. [ ] VPlexcli-1 Type the following command to collect configuration information and log files from all directors and the management server:
collect-diagnostics --minimum

The information is collected, compressed in a zip file, and placed in the directory /diag/collect-
diagnostics-out on the management server.
2. [ ] After the log collection is complete, use FTP or SCP to transfer the logs from /diag/collect-
diagnostics-out to another machine.
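For example, a hypothetical transfer run from another machine using scp (the management server address and destination path are placeholders):

scp service@<management-server-ip>:/diag/collect-diagnostics-out/*.zip /path/to/local/destination/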

Task 46: Exit the PuTTY sessions


1. [ ] VPlexcli-1 VPlexcli-2 Type the following command to exit the VPlexcli:
exit

2. [ ] Linux shell-1 Linux shell-2 Type the following command to exit the shell sessions:
exit

Task 47: (If applicable) Restore the default cabling arrangement


If you used a service laptop to access the management server, use the steps in this task to restore the
default cable arrangement.
The steps to restore the cabling vary depending on whether VPLEX is installed in an EMC cabinet or non-
EMC cabinet:
• EMC cabinet:

a. Disconnect the red service cable from the Ethernet port on your laptop, and remove your laptop
from the laptop tray.
b. Slide the cable back through the cable tie until only one or two inches protrude through the tie, and
then tighten the cable tie.
c. Slide the laptop tray back into the cabinet.
d. Replace the filler panel at the U23 position.
e. If you used the cabinet’s spare Velcro straps to secure any cables out of the way temporarily,
return the straps to the cabinet.
• Non-EMC cabinet:

a. Disconnect the red service cable from your laptop.


b. Coil the cable into a loose loop and hang it in the cabinet. (Leave the other end connected to the
VPLEX management server.)

Task 48: Restore your service laptop settings
If you changed or disabled any settings on your laptop before starting this procedure, restore the
settings.

Task 49: Remount VPLEX volumes on hosts connected to cluster-1 and cluster-2
Note: This step requires access to the hosts accessing the storage through the VPLEX clusters.
Coordinate this task with host administrators if you do not have access to the hosts.

1. [ ] Perform a scan on the hosts and discover the VPLEX volumes. (For a Linux host, see the example after these steps.)


2. [ ] Mount the necessary file systems on the VPLEX volumes.
3. [ ] Start the necessary I/O applications on the host.
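For example, on a Linux host the rescan and mount might look like the following; the rescan utility, device name, and mount point are illustrative assumptions and depend on the host's HBA, multipathing, and file system configuration:

rescan-scsi-bus.sh
multipath -ll
mount /dev/mapper/<vplex_volume_device> /mnt/<mount_point>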
