VPLEX Administration Guide
GeoSynchrony
Release 5.3
Administration Guide
P/N 302-000-777
REV 01
Copyright <original pub year> - 2014 EMC Corporation. All rights reserved. Published in the USA.
Published March, 2014
EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without
notice.
The information in this publication is provided as is. EMC Corporation makes no representations or warranties of any kind with respect
to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular
purpose. Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.
EMC2, EMC, EMC Centera, EMC ControlCenter, EMC LifeLine, EMC OnCourse, EMC Proven, EMC Snap, EMC SourceOne, EMC Storage
Administrator, Acartus, Access Logix, AdvantEdge, AlphaStor, ApplicationXtender, ArchiveXtender, Atmos, Authentica, Authentic
Problems, Automated Resource Manager, AutoStart, AutoSwap, AVALONidm, Avamar, Captiva, Catalog Solution, C-Clip, Celerra,
Celerra Replicator, Centera, CenterStage, CentraStar, ClaimPack, ClaimsEditor, CLARiiON, ClientPak, Codebook Correlation
Technology, Common Information Model, Configuration Intelligence, Connectrix, CopyCross, CopyPoint, CX, Dantz, Data Domain,
DatabaseXtender, Direct Matrix Architecture, DiskXtender, DiskXtender 2000, Document Sciences, Documentum, elnput, E-Lab,
EmailXaminer, EmailXtender, Enginuity, eRoom, Event Explorer, FarPoint, FirstPass, FLARE, FormWare, Geosynchrony, Global File
Virtualization, Graphic Visualization, Greenplum, HighRoad, HomeBase, InfoMover, Infoscape, InputAccel, InputAccel Express, Invista,
Ionix, ISIS, Max Retriever, MediaStor, MirrorView, Navisphere, NetWorker, OnAlert, OpenScale, PixTools, Powerlink, PowerPath,
PowerSnap, QuickScan, Rainfinity, RepliCare, RepliStor, ResourcePak, Retrospect, RSA, SafeLine, SAN Advisor, SAN Copy, SAN
Manager, Smarts, SnapImage, SnapSure, SnapView, SRDF, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix,
Symmetrix DMX, Symmetrix VMAX, TimeFinder, UltraFlex, UltraPoint, UltraScale, Unisphere, Viewlets, Virtual Matrix, Virtual Matrix
Architecture, Virtual Provisioning, VisualSAN, VisualSRM, VMAX, VNX, VNXe, Voyence, VPLEX, VSAM-Assist, WebXtender, xPression,
xPresso, YottaYotta, the EMC logo, and the RSA logo, are registered trademarks or trademarks of EMC Corporation in the United States
and other countries. Vblock is a trademark of EMC Corporation in the United States.
VMware, and <insert other marks in alphabetical order>, are registered trademarks or trademarks of VMware, Inc. in the United States
and/or other jurisdictions.
All other trademarks used herein are the property of their respective owners.
For the most up-to-date regulatory document for your product line, go to the technical documentation and advisories section on the
EMC online support website.
CONTENTS

Preface

Chapter 1   CLI Workspace and User Accounts

Chapter 2   Meta Volumes
            About meta-volumes
            Create a meta-volume
            Back up the meta-volume
            Move a meta-volume
            Rename a meta-volume
            Delete a meta-volume
            Display meta-volume

Chapter 3   System Management
            SPS battery conditioning
            Call-home notifications and system reporting
            Event log locations
            Hardware acceleration with VAAI

Chapter 4   Distributed Devices
            Additional documentation
            About distributed devices
            Logging volumes
            Rule-sets
            Configure distributed devices
            Create a virtual volume on a distributed device
            Expose a virtual volume to hosts
            Expose a virtual volume to a remote host
            Add a local mirror to distributed device
            Remove a local mirror from a distributed device
            Create a distributed device from an exported volume
            Display/enable/disable automatic device rebuilds

Provisioning Storage
            Provisioning Overview
            About VPLEX integrated storage provisioning
            Provisioning storage using VIAS
            Provisioning storage using EZ provisioning
            Provisioning storage using advanced provisioning

Volume expansion
            Overview
            Determine volume expansion-method
            Expand the virtual volume

Data migration
            About data migrations
            About rebuilds
            One-time data migrations
            Batch migrations

Consistency Groups
            About VPLEX consistency groups
            Properties of consistency groups
            Manage consistency groups
            Operate a consistency group

VPLEX Witness
            Introduction
            Failures in Metro systems
            Failures in Geo systems
            Install, enable, and manage VPLEX Witness
            VPLEX Witness operation

Cache vaults
            About cache vaulting
            The vaulting process
            Recovery after vault

RecoverPoint
            RecoverPoint CLI context
            Configuration/operation guidelines
            Management tools
Preface
As part of an effort to improve and enhance the performance and capabilities of its
product line, EMC from time to time releases revisions of its hardware and software.
Therefore, some functions described in this document may not be supported by all
revisions of the software or hardware currently in use. Your product release notes provide
the most up-to-date information on product features.
If a product does not function properly or does not function as described in this
document, please contact your EMC representative.
About this guide
This guide is part of the VPLEX documentation set, and is intended for use by customers
and service providers to configure and manage a storage environment.

Related documentation
Related documents (available on EMC Support Online) include:

Conventions used in this document
A caution contains information essential to avoid data loss or damage to the system or
equipment.
IMPORTANT
An important notice contains information essential to operation of the software.
Typographical conventions
EMC uses the following type style conventions in this document:

Courier        Used for:
               System output, such as an error message or script
               URLs, complete paths, filenames, prompts, and syntax when shown
               outside of running text

Courier bold   Used for:
               Specific user input (such as commands)
Technical support
For technical support, go to the EMC Support site. To open a service request, you must
have a valid support agreement. Please contact your EMC sales representative for details
about obtaining a valid support agreement or to answer any questions about your account.
Your comments
Your suggestions will help us continue to improve the accuracy, organization, and overall
quality of the user publications. Please send your opinion of this document to:
[email protected]
CHAPTER 1
CLI Workspace and User Accounts
This chapter describes how to use the VPLEX command line interface (CLI) to configure the
CLI workspace and to manage user accounts.
At next login to the management server, the new login banner is displayed:
login as: service
VPLEX cluster-1/Hopkinton
Test lab 3, Room 6, Rack 47
Metro with RecoverPoint CDP
Password:
2. Determine the ID of the filter controlling the display of messages to the console. The
console filter has the following attributes:
Threshold: >= 0
Destination: null
Consume: true
3. Use the log filter destroy command to delete the existing console logging filter.
VPlexcli:> log filter destroy 1
4. Use the log filter create command to create a new filter for the console with the
required threshold:
VPlexcli:> log filter create --threshold <n> --component logserver
where n is 0-7.
Note: The threshold value suppresses all messages with a severity number greater than or equal to the threshold.
To see critical (2) and above (0 and 1), set the threshold to 3.
To see error (3) and above (0, 1, and 2), set the threshold to 4.
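For example, the following sketch replaces the default console filter so that only critical and more severe messages reach the console (the filter ID 1 is taken from the listing in step 2; your ID may differ):

VPlexcli:/> log filter destroy 1
VPlexcli:/> log filter create --threshold 3 --component logserver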
4. Type the password for the new username. Passwords must be at least eight
characters, and may contain numbers, letters, and special characters. No spaces. No
dictionary words.
Confirm password:
localuser@ManagementServer:~>
After this initial login is completed, subsequent logins behave as described in Managing
User Accounts on page 11.
3. Type the new password. Passwords must be at least 14 characters long, and must not
be dictionary words.
A prompt to confirm the new password appears:
Confirm password:
CHAPTER 2
Meta Volumes
This chapter describes the procedures to manage metadata and meta-volumes using the
VPLEX CLI:
About meta-volumes
Create a meta-volume
Back up the meta-volume
Move a meta-volume
Rename a meta-volume
Delete a meta-volume
Display meta-volume
About meta-volumes
VPLEX metadata includes virtual-to-physical mappings, data about devices, virtual
volumes, and system configuration settings.
Metadata is stored in cache and backed up on specially designated external volumes
called meta-volumes.
Meta-volumes are created during system setup.
When a cluster is initially configured, the meta-volume must be the first storage presented
to VPLEX. This prevents the meta-volume from being accidentally overwritten.
After the meta-volume is configured, updates to the metadata are written to both the
cache and the meta-volume when the VPLEX configuration is modified.
Backup meta-volumes are point-in-time snapshots of the current metadata, and provide
extra protection before major configuration changes, refreshes, or migrations.
Metadata is read from the meta-volume only during the boot of each director.
Meta-volume backups are created:
Refer to the VPLEX Configuration Guide for more details about the criteria to select storage
used for meta-volumes.
If the meta-volume is configured on a CLARiiON array, it must not be placed on the vault
drives of the CLARiiON.
Availability is critical for meta-volumes. The meta-volume is essential for system recovery.
The best practice is to mirror the meta-volume across two or more back-end arrays to
eliminate the possibility of data loss. Choose the arrays used to mirror the meta-volume
such that they are not required to migrate at the same time.
Do not create a new meta-volume using volumes from a single storage array. Single array
meta-volumes are not a high availability configuration and are a single point of failure.
If VPLEX temporarily loses access to all meta-volumes, the current metadata in cache is
automatically written to the meta-volumes when access is restored.
If VPLEX permanently loses access to both meta-volumes, it will continue to operate based
on the metadata in memory. Configuration changes are suspended until a new
meta-volume is created.
Note: If the VPLEX loses access to all meta-volumes, and all directors either fail or are
re-booted, changes made to the meta-data (the VPLEX configuration) after access was lost
cannot be recovered.
Create a meta-volume
To create a meta-volume:
1. Use the configuration show-meta-volume-candidates command to display possible
candidates:
Note: The following example output is truncated.
VPlexcli:/> configuration show-meta-volume-candidates
Name                                       Capacity  Array Name
-----------------------------------------  --------  ---------------------------
VPD83T3:60060480000190100547533030364539   187G      EMC-SYMMETRIX-190100547
VPD83T3:60000970000192601707533031333132   98.5G     EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333133   98.5G     EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333134   98.5G     EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333135   98.5G     EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333136   98.5G     EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333137   98.5G     EMC-SYMMETRIX-192601707
VPD83T3:60000970000192601707533031333138   98.5G     EMC-SYMMETRIX-192601707
VPD83T3:6006016049e02100442c66c8890ee011   80G       EMC-CLARiiON-FNM00083800068
.
.
.
The log summary for configuration automation has been captured in
/var/log/VPlex/cli/VPlexconfig.log
The task summary and the commands executed for each automation task has been captured in
/var/log/VPlex/cli/VPlexcommands.txt
2. Use the meta-volume create command to create a new meta-volume. The syntax for
the command is:
meta-volume create --name meta-volume_name --storage-volumes
storage-volume_1,storage-volume_2,storage-volume_3
IMPORTANT
Specify two or more storage volumes. Storage volumes must be:
- unclaimed
- on different arrays
VPlexcli:meta-volume create --name ICO_META_1_1_Metadata --storage-volumes
VPD83T3:60000970000192601707533031333136, VPD83T3:60060480000190300487533030343445
Name                    Value
----------------------  ---------
active                  true
application-consistent  false
block-count             24511424
block-size              4K
capacity                79.6G
free-slots              31968
geometry                raid-1
Wait for the operational status field to transition to ok (while the meta-volume
synchronizes with the mirror) before proceeding with other tasks.
Back up the meta-volume
The backup meta-volume is named current-metadata-name_backup_yyyyMMMdd_HHmmss.
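For example, a backup of the meta-volume new_meta1 taken on May 24, 2010 at 16:38:10 is named new_meta1_backup_2010May24_163810, as shown in the listing later in this section.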
Create a backup meta-volume:
Before you begin, identify two or more storage volumes that are:
- Unclaimed
- 78 GB or larger
Using the cluster configdump command to dump a large configuration may take a long
time.
The information collected by the cluster configdump command can be useful to
identify problems in case of a failure. Administrators must weigh the value of the
information collected against the amount of time required to dump a large
configuration when deciding whether to perform a configdump.
IMPORTANT
No modifications should be made to VPLEX during the remainder of the backup procedure.
Make sure that all other users are notified.
4. Use the ll command in the system-volumes context to verify that the meta-volume is
Active and its Ready state is true.
For example:
VPlexcli:/clusters/cluster-1/system-volumes> ll
For the storage-volumes value, type the system ID for two or more storage volumes
identified in Before you begin.
For example:
VPlexcli:meta-volume backup --storage-volumes
VPD83T3:60060480000190300487533030354636,
VPD83T3:60060480000190300487533030343445
Before you begin, identify two or more storage volumes that are:
- Unclaimed
- 78 GB or larger
Open a second Putty session to each cluster to display the client log files in the
/var/log/VPlex/cli directory. Use these sessions to watch for call-home events.
To back up the meta-volume for a two-cluster VPLEX Metro or Geo:
Using the cluster configdump command to dump a large configuration may take a long
time.
The information collected by the cluster configdump command can be useful to
identify problems in case of a failure. Administrators must weigh the value of the
information collected against the amount of time required to dump a large
configuration when deciding whether to perform a configdump.
IMPORTANT
No modifications should be made to VPLEX during the remainder of the backup procedure.
Make sure that all other users are notified.
5. At each cluster, use the ll command in the system-volumes context to verify that the
status of the clusters meta-volume is Active and Ready state is true.
For example:
VPlexcli:/clusters/cluster-1/system-volumes> ll
6. Use the meta-volume backup command to back up the meta-volume at each cluster:
meta-volume backup --storage-volumes storage-volumes --cluster cluster
For the storage-volumes value, type the system ID of one or more storage volumes
identified in Before you begin.
Type the storage volume IDs separated by commas.
For example, at cluster-1:
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume backup --storage-volumes
VPD83T3:60000970000194900383533030454342,VPD83T3:60000970000194900383533030454341 --cluster
cluster-1
IMPORTANT
Perform backup of the meta-volumes at the two clusters in quick succession.
7. Use the ll command to display the new meta-volume at each cluster:
VPlexcli:/clusters/cluster-1/system-volumes> ll
Name                               Volume Type  Operational  Health  Active  Ready  Geometry  Block     Block  Capacity  Slots
                                                Status       State                            Count     Size
---------------------------------  -----------  -----------  ------  ------  -----  --------  --------  -----  --------  -----
new_meta1                          meta-volume  ok           ok      true    true   raid-1    20447744  4K     78G       32000
new_meta1_backup_2010May24_163810  meta-volume  ok           ok      false   true   raid-1    20447744  4K     78G       32000
8. The default name assigned to the backup meta-volume includes a timestamp. Verify
that the timestamp for the backup meta-volumes at the two clusters are in quick
succession.
9. Use the second Putty session to verify that no call home events were sent during the
backups.
If a CallHome event was sent, use the meta-volume destroy command to delete the
new meta-volume on each cluster and start over at Step 2 .
VPlexcli:/clusters/cluster-1/system-volumes> meta-volume destroy
new_meta_data_backup_2010May24_163810
Move a meta-volume
To move a meta-volume from one storage volume to another:
1. Use the ll command to display a list of storage volumes on the cluster:
VPlexcli:/> ll /clusters/cluster-1/storage-elements/storage-volumes
2. In the output, identify two or more storage volumes that are:
- Unclaimed
- 78 GB or larger
- On different arrays
3. Use the meta-volume create command to create a new meta-volume.
Specify the storage volumes identified in Step 2 .
VPlexcli:/engines/engine-1-1/directors> meta-volume create --name meta_dmx --storage-volumes
VPD83T3:6006016037202200966da1373865de11,
VPD83T3:6006016037202200966da1373865de12
Rename a meta-volume
By default, meta-volume names are based on a timestamp. To change the name, do the
following:
1. Navigate to the /clusters/cluster/system-volumes/ context:
VPlexcli:/> cd clusters/cluster-2/system-volumes/
VPlexcli:/clusters/cluster-2/system-volumes>
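The remaining steps are truncated here; a minimal sketch, assuming the meta-volume's name attribute is changed with the set command from the system-volumes context (the volume names are illustrative), is:

VPlexcli:/clusters/cluster-2/system-volumes> set new_meta1_backup_2010May24_163810::name backup_May24_2010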
Delete a meta-volume
IMPORTANT
A meta-volume must be inactive in order to be deleted. Attempts to delete an active
meta-volume fail with an error message.
To delete a meta-volume, do the following:
1. Navigate to the target volume's context.
For example:
cd clusters/cluster-1/system-volumes/metadata_1/
Name                    Value
----------------------  --------
active                  false
application-consistent  false
block-count             23592704
block-size              4K
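The destroy step that precedes this confirmation is truncated above; a minimal sketch, assuming the inactive meta-volume is named metadata_1, is:

VPlexcli:/clusters/cluster-1/system-volumes> meta-volume destroy metadata_1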
4. Type y.
Display meta-volume
Use the ll command to display status for a meta-volume:
VPlexcli:/clusters/cluster-1/system-volumes/ICO_META_1_1_Metadata> ll
/clusters/cluster-1/system-volumes/ICO_META_1_1_Metadata:
Attributes:
Name                    Value
----------------------  ---------------------
active                  true
application-consistent  false
block-count             24511424
block-size              4K
capacity                79.5G
component-count         2
free-slots              31968
geometry                raid-1
health-indications      []
health-state            ok
locality                local
operational-status      ok
ready                   true
rebuild-allowed         true
rebuild-eta
rebuild-progress
rebuild-status          done
rebuild-type            full
slots                   32000
stripe-depth
system-id               ICO_META_1_1_Metadata
transfer-size           2M
volume-type             meta-volume

Contexts:
Name        Description
----------  -------------------------------------------------------------------
components  The list of components that support this device or system virtual
            volume.
Use the ll components/ command to display the component volumes of the meta-volume:
VPlexcli:/clusters/cluster-2/system-volumes/ICO_META_1_1_Metadata> ll components/
/clusters/cluster-2/system-volumes/clus2_MetaVol/components:
Name                                       Slot    Type            Operational  Health  Capacity
                                           Number                  Status       State
-----------------------------------------  ------  --------------  -----------  ------  --------
VPD83T3:60000970000192601707533031333136   0       storage-volume  ok           ok      78G
VPD83T3:60060480000190300487533030343445   1       storage-volume  ok           ok      78G
The meta-volume display includes the following fields: active, application-consistent,
block-count, capacity, component-count, free-slots, geometry, health-indications,
health-state, locality, operational-status, ready, rebuild-allowed, rebuild-eta,
rebuild-progress, rebuild-status, rebuild-type, stripe-depth, system-id, transfer-size,
and volume-type.
CHAPTER 3
System Management
This chapter describes how to use the VPLEX CLI to manage battery conditioning,
call-home notifications and system reporting, event log locations, and hardware
acceleration with VAAI.
SPS battery conditioning
Time windows for manual tests allow only one side (A or B) to run conditioning cycles in a
given period.
Figure 1 shows the conditioning cycle calendar for a typical month:
The SPS must have 6 hours to fully charge before the allotted conditioning time
expires. Conditioning cycles (including manually requested cycles) start at the
beginning of their scheduled time slot.
The SPS must not have failed a previous conditioning cycle or have any internal
failures.
All power components in the engine related to the SPS must be healthy.
Starting a conditioning cycle during maintenance or system upgrades could disrupt these
operations.
Additional documentation
Refer to the VPLEX CLI Guide for information about the CLI commands related to battery
conditioning.
Call-home notifications and system reporting
Severity      Definition                        Impact on Performance or        Call-home
                                                Availability
------------  --------------------------------  ------------------------------  ---------
Critical (1)  A DU or DL is either highly       System unavailable.             Yes
              probable or has occurred.         Severe performance degradation.
Error (2)     Possible DU or DL.                Requires service intervention.  Yes
Warning (3)   Loss of redundancy.               No performance impact.          Yes
              No risk of DU/DL.
Info (4)      Informational event.              None.                           No
              No action is required.
Refer to the VPLEX generator Troubleshooting Procedures > Events and Messages for a list
of all events.
Many maintenance activities (such as hardware replacements) generate a flurry of
call-home events. Many such procedures include steps to temporarily disable call-home
during the operation.
If the same event on the same component occurs repeatedly, a call-home is generated for
the first instance of the event, and not again for 8 hours (480 minutes).
For example, if event E1 occurs on a director D1 at the time T1, a call-home is generated. If
the same event E1 is generated on the same component D1 at the time T1 + N minutes,
where N < 480, no call-home is generated.
The interval N is tracked by the management server. If the management server fails, the
counter is reset to 8 hours. After recovery from a management server failure, a call-home
event is sent for the same event/component, even though 8 hours may not have elapsed
since the first call-home for that event/component.
EMC provides an .xml file containing commonly requested modifications to the default
call-home events.
Call-home behavior changes immediately when the modified events file is applied.
If a customized events file is already applied, applying a new file overrides the existing
file.
If the same event is modified in the customer-specific and EMC-generic file, the
modification specified for that event in the customer-specific file is applied.
If call-home is disabled when the custom events file is applied, the modified events
are saved and applied when call-home is enabled.
System reports - Sent once weekly to the EMC System Reports database. System
reports include information about the configuration and state of the system.
System alerts - Sent in real-time through a designated SMTP server to the EMC. Alerts
are filtered as to whether a service request should be opened with EMC Customer
Service. If a service request is required, it is opened automatically.
SYR is enabled by default, but can be disabled at any time through the GUI or CLI.
IP address of the primary SMTP server used to forward reports to EMC. EMC
recommends using your ESRS gateway as the primary connection address.
(Optional) One or more e-mail addresses of personnel who should receive e-mail
notifications when events occur.
Additional documentation
Refer to the VPLEX generator for the procedure to configure SYR.
Refer to the VPLEX CLI Guide for information about the CLI commands related to call-home
notifications and SYR reporting.
Event log locations
The locations of various logs on the VPLEX management server are listed in Table 5:

Table 5  VPLEX log file locations

Log name          Description and location
----------------  -----------------------------------------------------------------
Firmware log      Includes all entries from the entire VPLEX system. Messages are
                  expanded.
                  On a running management server:
                    /var/log/VPlex/cli/firmware.log*
                  In collect-diagnostics output:
                    smsDump_<datestamp>-<timestamp>\clilogs\
Call-home events  In collect-diagnostics output:
                    smsDump_<datestamp>-<timestamp>\connectemc\
ZPEM log          In collect-diagnostics output:
                    \<director_name>-<datestamp>-<timestamp>\var\log
NSFW log          GeoSynchrony log. NSFW sends events to a syslog-ng service on the
                  director. The syslog-ng service writes NSFW entries to log files
                  in /var/log and also forwards them to the EMC Common Object
                  Manager (ECOM), which streams the log entries to the cluster
                  management server to be written to the firmware log.
                  On a running director:
                    /var/log/nsfw.log
                  In collect-diagnostics output:
                    \<director_name>-<datestamp>-<timestamp>\var\log
DMI log
ZPEM trace log    ECOM writes trace logs to cimom and ecomofl log files. ZPEM writes
                  trace logs to a ZTrace log. These trace logs are not part of the
                  event logging system.
                  In collect-diagnostics output:
                    \<director_name>-<datestamp>-<timestamp>\var\log
Hardware acceleration with VAAI
WriteSame (16) offloads copying data to and from the array through the hypervisor.
Enabling/disabling CAW
CAW can be enabled/disabled on VPLEX only by EMC Technical Support personnel.
VMware servers discover whether the CAW SCSI command is supported:
Note: To toggle the value: In the vSphere client, toggle host > Configuration > Software >
Advanced Settings > VMFS3.HardwareAcceleratedLocking value to 0 and then 1.
If CAW is not supported or support is disabled, VPLEX returns CHECK CONDITION, ILLEGAL
REQUEST, and INVALID OP-CODE. The ESX server reverts to using SCSI RESERVE and the
VM operation continues.
VM operations may experience significant performance degradation if CAW is not enabled.
VPLEX allows CAW to be enabled or disabled for all storage associated with VPLEX, using a
single command. When CAW is disabled on VPLEX, VPLEX storage volumes do not include
CAW support information in their responses to inquiries from hosts.
To mark storage CAW disabled:
Enabling/disabling CAW functionality supports exceptional situations such as assisting
EMC Technical Support personnel to diagnose a problem. CAW is enabled by default and
should be disabled only by EMC Technical Support.
Support for CAW can be enabled or disabled at two levels:
storage-view - Enabled or disabled for all existing storage views. A storage view
created after CAW is enabled/disabled at the storage view level inherits the system
default setting. EMC recommends maintaining uniform CAW setting on all storage
views in VPLEX. If CAW must be disabled for a given storage view, it must be disabled
on all existing and future storage views. To make future storage views reflect the
new setting, change the system default (described below).
system default - Enabled or disabled as a system default. A storage view created after
CAW is enabled/disabled at the system default level inherits the system default
setting. If the system default is enabled, CAW support for the new storage view is also
enabled.
Use the ls command in /clusters/cluster context to display the CAW system default
setting:
VPlexcli:/> ls /clusters/cluster-1
/clusters/cluster-1:
Attributes:
Name                  Value
--------------------  ------------
allow-auto-join       true
auto-expel-count      0
auto-expel-period     0
auto-join-delay       0
cluster-id            1
connected             true
default-cache-mode    synchronous
default-caw-template  true
.
.
.
CAW statistics
CAW performance statistics are included for front-end volume (fe-lu), front-end port
(fe-prt), and front-end director (fe-director) targets.
See Front-end volume (fe-lu) statistics on page 246, Front-end port (fe-prt) statistics
on page 247, and Front-end director (fe-director) statistics on page 246
Statistics for fe-director targets are collected as a part of the automatically created
perpetual monitor.
You can create a monitor to collect CAW statistics, which can be especially useful for fe-lu
targets (because there can be very large numbers of volumes involved, these statistics are
not always collected). See Example: Send CAW statistics to the management server on
page 229
WriteSame (16)
The WriteSame (16) SCSI command provides a mechanism to offload initializing virtual
disks to VPLEX. WriteSame (16) requests the server to write blocks of data transferred by
the application client multiple times to consecutive logical blocks.
WriteSame (16) is used to offload VM provisioning and snapshotting in vSphere to VPLEX.
WriteSame (16) enables the array to perform copy operations independently without using
host cycles. The array can schedule and execute the copy function much more efficiently.
VPLEX support for WriteSame (16) is enabled by default.
Note: To toggle the value: In the vSphere client, toggle host > Configuration > Software >
Advanced Settings > VMFS3.HardwareAcceleratedLocking value to 0 and then 1.
VM operations may experience significant performance degradation if WriteSame (16) is
not enabled.
VPLEX allows WriteSame (16) to be enabled/disabled for all storage associated with
VPLEX, using a single command. When WriteSame (16) is disabled on VPLEX, VPLEX
storage volumes do not include WriteSame (16) support information in their responses to
inquiries from hosts.
Support for WriteSame (16) can be enabled or disabled at two levels:
storage-view - Enabled or disabled for all existing storage views. A storage view
created after WriteSame (16) is enabled/disabled at the storage view level inherits
the system default setting. EMC recommends maintaining uniform WriteSame (16)
setting on all storage views in VPLEX.
If WriteSame (16) must be disabled for a given storage view, it must be disabled on all
existing and future storage views. To make future storage views reflect the new
setting, change the system default (described below).
system default - Enabled or disabled as a system default. A storage view created after
WriteSame (16) is enabled/disabled at the system default level inherits the system
default setting. If the system default is enabled, WriteSame (16) support for the new
storage view is also enabled.
To disable the Write Same 16 default template, you MUST disable Write Same 16 for all
existing views, and disable the Write Same 16 template so that all future views will be
Write Same 16 disabled.
To enable the Write Same 16 default template, you MUST enable Write Same 16 for all
existing views, and enable the Write Same 16 template so that all future views will be
Write Same 16 enabled.
/clusters/cluster-2/exports/storage-views/FE-Logout-test:
Name         Value
-----------  -----------------------------------------------------------------
caw-enabled  false
.
.
.
/clusters/cluster-2/exports/storage-views/default_quirk_view:
Name                   Value
---------------------  ------------------------------------------
.
.
.
write-same-16-enabled  false
Use the ls command in /clusters/cluster context to display the WriteSame (16) system
default setting:
VPlexcli:/> ls /clusters/cluster-1
/clusters/cluster-1:
Attributes:
Name                            Value
------------------------------  ------------
allow-auto-join                 true
auto-expel-count                0
auto-expel-period               0
auto-join-delay                 0
cluster-id                      1
connected                       true
default-cache-mode              synchronous
default-caw-template            true
default-write-same-16-template  false
.
.
.
CHAPTER 4
Distributed Devices
This chapter provides procedures to manage distributed devices using the VPLEX CLI.
Additional documentation
About distributed devices
Logging volumes
Rule-sets
Configure distributed devices
Create a virtual volume on a distributed device
Expose a virtual volume to hosts
Expose a virtual volume to a remote host
Add a local mirror to distributed device
Remove a local mirror from a distributed device
Create a distributed device from an exported volume
Display/enable/disable automatic device rebuilds
Configure I/O resumption after a network outage
About auto mirror isolation
Storage volume degradation
Mirror isolation
Storage volumes health restoration
Mirror un-isolation
Enabling and disabling auto mirror isolation
Additional documentation
Refer to the EMC VPLEX CLI Guide for detailed information about the CLI commands to
create and manage distributed devices.
Refer to the EMC VPLEX Product Guide for general information about distributed
devices.
About distributed devices
You can configure up to 8000 distributed devices in a VPLEX system. That is, the total
number of distributed virtual volumes plus the number of top-level local devices must not
exceed 8000.
All distributed devices must be associated with a logging volume. During a link outage,
the logging volume is used to map the differences between the legs of a DR1.
When the link is restored, the legs are resynchronized using the contents of their logging
volumes.
All distributed devices must have a detach rule-set to determine which cluster continues
I/O when connectivity between clusters is lost.
Logging volumes
This section describes the following topics:
After the inter-cluster link or leg is restored, the VPLEX system uses the information in
logging volumes to synchronize the mirrors by sending only changed blocks across the
link.
Logging volumes also track changes during loss of a volume when that volume is one
mirror in a distributed device.
Note: Logging volumes are not used to optimize re-syncs on local RAID-1s.
Single-cluster systems and systems that do not have distributed devices do not require
logging volumes.
During and after link outages, logging volumes are subject to high levels of I/O. Thus,
logging volumes must be able to service I/O quickly and efficiently.
EMC recommends:
Stripe logging volumes across several disks to accommodate the high level of I/O that
occurs during and after link outages.
Mirror logging volumes across two or more back-end arrays, as they are critical to
recovery after the link is restored.
Arrays for mirroring logging volumes should be chosen such that they will not need to
migrate at the same time.
Use the logging-volume create command to create a logging volume. The syntax for the
command is:
logging-volume create --name name --geometry [raid-0 | raid-1]
--extents context-path --stripe-depth depth
Greater than zero, but not greater than the number of blocks of the smallest element
of the RAID 0 device being created
A multiple of 4 K bytes
A depth of 32 means 128 K (32 x 4K) is written to the first disk, and then the next 128 K is
written to the next disk.
Concatenated RAID devices are not striped.
Best practice regarding stripe depth is to follow the best practice of the underlying array.
For example:
VPlexcli:/> cd clusters/cluster-1/system-volumes/
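A complete invocation from the system-volumes context might look like the following sketch (the extent names are hypothetical):

VPlexcli:/clusters/cluster-1/system-volumes> logging-volume create --name cluster_1_log_vol --geometry raid-1 --extents extent_log_1,extent_log_2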
Use the logging-volume add-mirror command to add a mirror to the specified logging
volume. The syntax for the command is:
logging-volume add-mirror --logging-volume logging-volume --mirror mirror
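A sketch of adding a mirror on a second array (the logging-volume and extent names are hypothetical):

VPlexcli:/clusters/cluster-1/system-volumes> logging-volume add-mirror --logging-volume cluster_1_log_vol --mirror extent_log_3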
Rule-sets
This section describes the following topics:
About rule-sets
Rule-sets are predefined rules that determine which cluster continues I/O when
connectivity between clusters is lost. Rule-sets apply to devices that are not members of
consistency groups.
A cluster loses connectivity to its peer cluster when:
Note: The cluster expel command also causes clusters to lose connectivity to one another.
When a loss of connectivity occurs, VPLEX:
If connectivity is not restored when the timer expires (default is 5 seconds), VPLEX:
Resumes I/O on the leg of the distributed device (the winning cluster) as
determined by the device's rule-set
Writes to the distributed device from the losing cluster are suspended until connectivity is
restored.
When the inter-cluster link is restored, the I/O written to the winning cluster is
re-synchronized to the losing cluster.
The rules for determining the number of seconds to wait, which leg of a distributed device
is resumed, and which remains suspended are contained in rule-sets.
A rule-set consists of a container (the rule-set) and one rule.
Rules have two attributes:
Delay - The number of seconds between the link outage and when the actions defined
by the rule-set (resume I/O to the winning cluster, keep I/O to the losing cluster
suspended) begin. The default is 5 seconds.
All distributed devices must have a rule-set. A cluster may be the winning cluster for
some distributed devices, and the losing cluster for other distributed devices.
Most I/O workloads require specific sets of virtual volumes to resume on one cluster and
remain suspended on the other cluster.
Rule-set            Cluster-1     Cluster-2
------------------  ------------  ------------
cluster-1-detaches  Services I/O  Suspends I/O
cluster-2-detaches  Suspends I/O  Services I/O
EMC recommends that only the two default rule-sets be applied to distributed devices.
The default value of a distributed device's rule-set is determined by the management
server on which the device was created.
If a device is created on the management server for cluster-1, the default rule-set for that
device is cluster-1-detaches.
Rule-sets are located in the distributed-storage/rule-sets context.
Rules are located under their rule-set context.
Auto-resume
If auto-resume is set to false, the distributed device on the losing cluster remains
suspended until I/O is manually resumed by the system administrator (using the
device resume-link-up command).
If auto-resume is set to true, I/O may start immediately after the link is restored.
After I/O is resumed to the mirror leg on the losing cluster, any data written to the device
on the winning cluster during the outage is resynchronized from the leg on the winning
cluster.
Use the set command to configure a device's auto-resume attribute.
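For example, a sketch of enabling auto-resume on device dd_07 from its context (dd_07 is one of the devices shown in the listings later in this section):

VPlexcli:/distributed-storage/distributed-devices/dd_07> set auto-resume true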
For custom rule-sets, leave the detach delay timer at the default value of 5 seconds.
Setting the detach delay lower than 5 seconds can result in unnecessary cluster
detaches during periods of network instability. Multiple cluster detaches in a short
period of time can result in unnecessary data rebuilds and reduced performance.
Configure detach rules based on the cluster/site that is expected to continue I/O
during an outage.
If a host application uses more than one distributed device, all distributed devices for
that application should have the same rule-set (to resume I/O on the same cluster).
Both clusters write to the different legs of the same virtual volume.
When connectivity is restored, the administrator picks the winner cluster, meaning
that one of the legs is used as the source to rebuild.
Any data written to the losing cluster during the network communication outage is
overwritten.
Rule-sets and manual detaches must not result in conflicting detaches. Conflicting
detaches result in data loss (on the losing cluster), a full rebuild and degraded
performance during the full rebuild.
VPLEX islands
For VPLEX Metro and Geo configurations, islands are mostly synonymous with clusters.
Manage rule-sets
The ds rule-set create command creates custom rule-sets. When a new rule-set is created,
the VPLEX system creates a sub-context under the rule-sets context.
Rules themselves are added to these new sub-contexts.
After a rule is added to a rule-set, the rule can be applied (attached) to a distributed
device.
Create a rule-set
Use the ds rule-set create rule-set-name command to create a new rule-set. The new
rule-set is empty upon creation.
Note: The ds rule-set create command automatically creates a new sub-context, with the
same name as the new rule-set.
In the following example, a new rule-set named TestRuleSet is created:
VPlexcli:/> ds rule-set create --name TestRuleSet
Name         PotentialConflict  UsedBy
-----------  -----------------  ------
TestRuleSet  false

VPlexcli:/>
2. Use the rule island-containing command to add a rule to describe when to resume I/O
on all clusters in the island containing the specified cluster. The syntax for the
command is:
rule island-containing --clusters context-path,context-path --delay
delay
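For example, the following sketch adds a rule to the TestRuleSet rule-set created above so that I/O resumes on the island containing cluster-1 after the default 5-second delay (the rules sub-context path and the 5s delay format are assumptions):

VPlexcli:/distributed-storage/rule-sets/TestRuleSet/rules> rule island-containing --clusters cluster-1 --delay 5s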
Value
------------------------
ruleset_5537985253109250
false
[]

Contexts:
Name   Description
-----  ------------------------------------
rules  The list of rules for this rule set.
Note: To apply a rule-set to a distributed device when the device is created, see Create a
distributed device on page 58.
To modify the rule-set applied to a distributed device, see Modify the rule-set attached to
a distributed device on page 50.
Modify the rule-set attached to a distributed device
2. Optionally, use the ll command to display the names of the distributed devices:
VPlexcli:/distributed-storage/distributed-devices> ll
Name   Status   Operational  Health  Auto    Rule Set Name       Transfer
                Status       State   Resume                      Size
-----  -------  -----------  ------  ------  ------------------  --------
dd_00  running  ok           ok      true    cluster-1-detaches  2M
dd_01  running  ok           ok      true    cluster-1-detaches  2M
dd_02  running  ok           ok      true    cluster-1-detaches  2M
dd_03  running  ok           ok      true    cluster-1-detaches  2M
dd_04  running  ok           ok      true    cluster-1-detaches  2M
dd_05  running  ok           ok      true    cluster-1-detaches  2M
dd_06  running  ok           ok      true    cluster-1-detaches  2M
dd_07  running  ok           ok      true    cluster-1-detaches  2M
dd_08  running  ok           ok      true    cluster-2-detaches  2M
.
.
.
4. Use the set rule-set-name rule-set-name command to set or change its rule-set.
For example:
VPlexcli:/distributed-storage/distributed-devices/dd_07> set rule-set-name cluster-1-detaches
For example:
VPlexcli:/distributed-storage/rule-sets> ds rule-set what-if --islands "cluster-1,cluster-2"
--rule-set cluster-2-detach
IO does not stop.
4. Type Yes.
5. Use the ll command to display the change:
VPlexcli:/distributed-storage/distributed-devices/dd_23> ll
Attributes:
Name           Value
-------------  ------
.
.
.
rule-set-name
.
.
.
Copy a rule-set
Use the ds rule-set copy command to copy a rule-set. The syntax for the command is:
ds rule-set copy --source source --destination destination
For example:
VPlexcli:/distributed-storage/rule-sets> ll
Name         PotentialConflict  UsedBy
-----------  -----------------  ------
TestRuleSet  false
.
.
.
VPlexcli:/distributed-storage/rule-sets> rule-set copy --source TestRuleSet --destination CopyOfTest

VPlexcli:/distributed-storage/rule-sets> ll
Name         PotentialConflict  UsedBy
-----------  -----------------  ------
CopyOfTest   false
TestRuleSet  false
.
.
Delete a rule-set
Use the ds rule-set destroy command to delete a specified rule-set:
1. Navigate to the rule-set context:
VPlexcli:/> cd distributed-storage/rule-sets/
VPlexcli:/distributed-storage/rule-sets> ll
Name         PotentialConflict  UsedBy
-----------  -----------------  ------
TestRuleSet  false
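The destroy step itself is truncated above; a minimal sketch, assuming the rule-set is not in use by any device and that the command accepts the rule-set name directly, is:

VPlexcli:/distributed-storage/rule-sets> ds rule-set destroy TestRuleSet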
Configure distributed devices
Note: To prevent creating distributed devices with unusable leftover storage, total
capacities of the selected storage at both clusters should be identical.
2. Navigate to the storage-volumes context on the second cluster and repeat step 1 to
create one or more extents to be added to the distributed device:
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> cd /clusters
/cluster-2/storage-elements/storage-volumes/
VPlexcli:/clusters/cluster-2/storage-elements/storage-volumes> extent create
VPD83T3:60000970000192601852533030424238,VPD83T3:60000970000192601852533030424243
3. Navigate to the storage-volume context for each of the extents' storage volumes.
4. Use the ll command to display the amount of free space and the largest free chunk
size:
VPlexcli:/> cd clusters/cluster-1/storage-elements/storage-volumes/VPD83T3:60000970000192601852533030414234
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes/VPD83T3:60000970000192601852533030414234> ll
Name                    Value
----------------------  -------------------------------------------------------
application-consistent  false
block-count             4195200
block-size              4K
capacity                16G
description
free-chunks             []
health-indications      []
health-state            ok
io-status               alive
itls                    0x5000144240014720/0x50000972081cf15d/64,
                        0x5000144240014720/0x50000972081cf165/64,
                        0x5000144240014730/0x50000972081cf165/64,
                        0x5000144240014722/0x50000972081cf158/64,
                        0x5000144240014732/0x50000972081cf158/64,
                        0x5000144240014730/0x50000972081cf15d/64,
                        0x5000144240014732/0x50000972081cf160/64,
                        0x5000144240014722/0x50000972081cf160/64
largest-free-chunk      0B
locality
operational-status      ok
storage-array-name      EMC-SYMMETRIX-192601707
storage-volumetype      normal
system-id               VPD83T3:60000970000192601852533030414234
thin-rebuild            false
total-free-space        0B
use                     used
used-by                 [extent_Symm1852_AB4_1]
vendor-specific-name    EMC
/clusters/cluster-2/storage-elements/extents:
Name                                 StorageVolume               Capacity  Use
-----------------------------------  --------------------------  --------  -------
extent_CX4_logging_2_1               CX4_logging_2               80G       claimed
extent_CX4_Test_Lun_5_1              CX4_Test_Lun_5              10G       claimed
extent_CX4_Test_Lun_6_1              CX4_Test_Lun_6              10G       claimed
extent_CX4_Test_Lun_9_1              CX4_Test_Lun_9              10G       claimed
extent_Cluster_2_VMware_Datastore_1  Cluster_2_VMware_Datastore  200G      claimed
.
.
.
2. Use the local-device create command to create a local device with the specified name.
The syntax for the local-device create command is:
local-device create --name name --geometry geometry --extents
extents --stripe-depth depth
--name - Name for the new device. Must be unique across all clusters. Devices on
different clusters that have the same name cannot be combined into a distributed
device.
--geometry - Geometry for the new device. Valid values are raid-0, raid-1, or
raid-c.
--extents - List of pathnames of claimed extents to be added to the device, separated
by commas. Can also be other local devices (to create a device of devices).
--stripe-depth - Required for devices with a geometry of raid-0. Specifies the stripe
depth in 4 K byte blocks. The resulting stripe is sized using the following formula:
<stripe-depth> * <the block size on the source storage extents>
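For example, with a stripe depth of 32 and extents that have a 4 K block size, 128 K (32 x 4 K) is written to the first extent, and the next 128 K is written to the next extent.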
In the following example, the ll command displays the available (claimed) extents,
and the local-device create command is used to create a 16 GB RAID 1 device on
cluster-1:
VPlexcli:/clusters/cluster-1/storage-elements/extents> ll
Name                   StorageVolume  Capacity  Use
---------------------  -------------  --------  -------
.
.
.
extent_Symm1852_AAC_1  Symm1852_AAC   16G       claimed
extent_Symm1852_AB0_1  Symm1852_AB0   16G       claimed
extent_Symm1852_AB4_1  Symm1852_AB4   16G       claimed
extent_Symm1852_AB8_1  Symm1852_AB8   16G       claimed
4. Use the local-device create command to create a local device with the same capacity.
In the following example, the ll command displays the available (claimed) extents,
and the local-device create command is used to create a 16 GB RAID 1 device on
cluster-2:
VPlexcli:/clusters/cluster-2/storage-elements/extents> ll
Name                   StorageVolume  Capacity  Use
---------------------  -------------  --------  -------
.
.
.
extent_Symm1852_BB8_1  Symm1852_BB8   16G       claimed
extent_Symm1852_BBC_1  Symm1852_BBC   16G       claimed
extent_base_volume_1   base_volume    2G        used
extent_base_volume_2   base_volume    2G        used
5. Return to the root context and use the ll **/devices/ command to display the new
devices on both clusters:
VPlexcli:/clusters/cluster-2/storage-elements/extents> cd
VPlexcli:/> ll -p **/devices
/clusters/cluster-1/devices:
Name             Operational  Health  Block    Block  Capacity  Geometry  Visibility  Transfer  Virtual
                 Status       State   Count    Size                                   Size      Volume
---------------  -----------  ------  -------  -----  --------  --------  ----------  --------  ----------
TestDevCluster1  ok           ok      4195200  4K     16G       raid-1    local       2M        -
base0            ok           ok      262144   4K     1G        raid-0    local       -         base0_vol
base1            ok           ok      262144   4K     1G        raid-0    local       -         base1_vol
base2            ok           ok      262144   4K     1G        raid-0    local       -         base2_vol
base3            ok           ok      262144   4K     1G        raid-0    local       -         base3_vol

/clusters/cluster-2/devices:
Name             Operational  Health  Block    Block  Capacity  Geometry  Visibility  Transfer  Virtual
                 Status       State   Count    Size                                   Size      Volume
---------------  -----------  ------  -------  -----  --------  --------  ----------  --------  ----------
TestDevCluster2  ok           ok      8390400  4K     32G       raid-c    local       -         -
base01           ok           ok      524288   4K     2G        raid-0    local       -         base01_vol
base02           ok           ok      524288   4K     2G        raid-0    local       -         base02_vol
base03           ok           ok      524288   4K     2G        raid-0    local       -         base03_vol
The resulting distributed device will be only as large as the smaller local device.
To create a distributed device without wasting capacity, choose local devices on each
cluster with the same capacity.
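For example, combining a 16 GB local device on one cluster with an 18 GB local device on the other cluster produces a 16 GB distributed device, and the remaining 2 GB of the larger device cannot be used.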
If there is pre-existing data on a storage-volume, and the storage-volume is not claimed
as being application-specific, converting an existing local RAID device to a distributed
RAID will NOT initiate a rebuild to copy the data to the other leg. Data will exist at only one
cluster.
To prevent this, do one of the following:
Create a single-legged RAID 1 or RAID 0 and add a leg using the device attach-mirror
command.
To create a distributed device from two local devices with the same capacity:
1. Use the ll -p **/devices command to display the available (no virtual volume
configured) local devices.
VPlexcli:/> ll -p **/devices
/clusters/cluster-1/devices:
Name             Operational  Health  Block    Block  Capacity  Geometry  Visibility  Transfer  Virtual
                 Status       State   Count    Size                                   Size      Volume
---------------  -----------  ------  -------  -----  --------  --------  ----------  --------  ----------
TestDevCluster1  ok           ok      4195200  4K     16G       raid-1    local       2M        -
base0            ok           ok      262144   4K     1G        raid-0    local       -         base0_vol
base1            ok           ok      262144   4K     1G        raid-0    local       -         base1_vol
base2            ok           ok      262144   4K     1G        raid-0    local       -         base2_vol
base3            ok           ok      262144   4K     1G        raid-0    local       -         base3_vol

/clusters/cluster-2/devices:
Name             Operational  Health  Block    Block  Capacity  Geometry  Visibility  Transfer  Virtual
                 Status       State   Count    Size                                   Size      Volume
---------------  -----------  ------  -------  -----  --------  --------  ----------  --------  ----------
TestDevCluster2  ok           ok      4195200  4K     16G       raid-1    local       2M        -
base01           ok           ok      524288   4K     2G        raid-0    local       -         base01_vol
base02           ok           ok      524288   4K     2G        raid-0    local       -         base02_vol
base03           ok           ok      524288   4K     2G        raid-0    local       -         base03_vol
--name <name> - The name must be unique across the entire VPLEX configuration.
--devices - List of pathnames to local devices to add to the distributed device. Select
devices that have the same capacities. Separate entries in the list by commas.
--logging-volume - List of pathnames to one or more logging volumes to use with the
new device.
If no logging volume is specified, a logging volume is automatically selected from any
available logging volume that has sufficient space for the required entries. If no
available logging volume exists, an error message is returned. See Logging volumes
on page 42.
--rule-set - Attaches the specified rule-set to the device. If no rule-set is specified, the
cluster that is local to the management server is assumed to be the winner in the
event of an inter-cluster link outage. See Manage rule-sets on page 48.
In the following example, the ds dd create command creates a distributed device, and
the default rule-set behavior is applied to the new device:
VPlexcli:/> ds dd create --name TestDevice --devices
/clusters/cluster-1/devices/TestDevCluster1,/clusters/cluster-2/devices/TestDevCluster2
3. Use the ll **/new device name command to display the new distributed device:
VPlexcli:/> ll **/TestDisDevice
/distributed-storage/distributed-devices/TestDevice:
Attributes:
Name                    Value
----------------------  ----------------------
application-consistent  false
auto-resume
block-count             4195200
block-size               4K
capacity                16G
clusters-involved       [cluster-1, cluster-2]
geometry                raid-1
.
.
.

Create a virtual volume on a distributed device
2. Use the virtual-volume create command to create a virtual volume on a specified
distributed device.
The syntax for the command is:
virtual-volume create --device device --set-tier {1|2}
For example:
VPlexcli:/> virtual-volume create --device
/distributed-storage/distributed-devices/TestDevice --set-tier 1
3. Navigate to the new virtual volume's context, and use the ll command to display its
attributes:
VPlexcli:/> cd clusters/cluster-1/virtual-volumes/TestDevice_vol/
VPlexcli:/clusters/cluster-1/virtual-volumes/TestDevice_vol> ll
Name                Value
------------------  ---------------
block-count         4195200
block-size          4K
cache-mode          synchronous
capacity            16G
consistency-group   -
expandable          false
health-indications  []
health-state        ok
locality            distributed
operational-status  ok
scsi-release-delay  0
service-status      unexported
storage-tier        1
supporting-device   TestDevice
system-id           TestDevice_vol
volume-type         virtual-volume
Note: Virtual volume names are assigned automatically, based on the device name and a
sequential virtual-volume number.
VPlexcli:/> ll **/LicoJ010
/clusters/cluster-2/exports/storage-views/LicoJ010:
Name                      Value
------------------------  ------------------------------------------------------------
controller-tag            -
initiators                [LicoJ010_hba0, LicoJ010_hba1, LicoJ010_hba2, LicoJ010_hba3]
operational-status        ok
port-name-enabled-status  [P000000003CA000E6-A0-FC00,true,ok,
                           P000000003CA000E6-A1-FC00,true,ok,
                           P000000003CA001CB-A0-FC00,true,ok,
                           P000000003CA001CB-A1-FC00,true,ok,
                           P000000003CB000E6-B0-FC00,true,ok,
                           P000000003CB000E6-B1-FC00,true,ok,
                           P000000003CB001CB-B0-FC00,true,ok,
                           P000000003CB001CB-B1-FC00,true,ok]
ports                     [P000000003CA000E6-A0-FC00, P000000003CA000E6-A1-FC00,
                           P000000003CA001CB-A0-FC00, P000000003CA001CB-A1-FC00,
                           P000000003CB000E6-B0-FC00, P000000003CB000E6-B1-FC00,
                           P000000003CB001CB-B0-FC00, P000000003CB001CB-B1-FC00]
virtual-volumes           [(0,base01_vol,VPD83T3:6000144000000010a000e68dc5f76188,2G),
                           (1,dd_00_vol,VPD83T3:6000144000000010a0014760d64cb21f,128G),
                           (2,dd_01_vol,VPD83T3:6000144000000010a0014760d64cb221,128G),
                           (3,dd_02_vol,VPD83T3:6000144000000010a0014760d64cb223,128G),
                           .
                           .
                           .
3. Use the export storage-view addvirtualvolume command to add the virtual volume to
the storage view.
The syntax for the command is:
export storage-view addvirtualvolume --view <storage-view>
--virtual-volumes <virtual-volumes> --force
--view - Context path of the storage view to which to add the specified virtual volume.
--virtual-volumes - List of virtual volumes or LUN-virtual-volume pairs. Mixing of virtual
volumes and LUN-virtual-volume pairs is allowed. If only virtual volumes are specified,
the LUN is automatically assigned. Entries must be separated by commas.
--force - Required to expose a distributed device's volume to more than one host.
For example:
VPlexcli:/> export storage-view addvirtualvolume --view LicoJ009 --virtual-volumes
TestDisDevice_vol/
If the virtual volume has already been exposed to a host, an error message appears:
VPlexcli:/> export storage-view addvirtualvolume --view lsca5230view --virtual-volumes
ExchangeDD_vol --force
WARNING: Volume 'ExchangeDD_vol' is already assigned to view 'lsca3207view'
4. Re-scan the disks on each host to ensure that each host can access the virtual volume.
2. Use the set visibility global command to set the device's visibility to global:
VPlexcli:/clusters/cluster-1/devices/base0> set visibility global
3. Use the ll command to verify the change:
VPlexcli:/clusters/cluster-1/devices/base0> ll
Name            Value
--------------  ---------
.
.
.
transfer-size   -
virtual-volume  base0_vol
visibility      global
.
.
.
4. Use the export storage-view addvirtualvolume command to expose the virtual volume
to the remote host.
For example:
VPlexcli:/clusters/cluster-1/devices/base0> export storage-view addvirtualvolume --view
E_209_view --virtual-volumes Symm1254_7BF_1_vol
5. Re-scan the disks on each host to ensure that each host can access the virtual volume.
Note: When a local-device is exported, it is automatically assigned the rule-set for its
enclosing cluster.
3. Identify one or more claimed extents whose combined capacity matches the
distributed device.
4. Use the local-device create command to create a device of the same capacity as the
distributed device.
For example:
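(The device name and extent names below are illustrative only; the command form follows the local-device create example shown later in this chapter.)
VPlexcli:/clusters/cluster-1> local-device create --name MirrorCluster1 --geometry raid-0
--extents extent_Symm1254_7BC_1, extent_Symm1254_7BE_1 --stripe-depth 1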
5. Use the device attach-mirror command to attach the new device to the local (same
cluster as the current context) leg of the distributed device. The syntax for the
command is:
device attach-mirror --device <device> --mirror <mirror> --rule-set
<rule-set> --force
--device - Name or context path of the device to add the mirror to. The target device
must not have a virtual volume configured. If the name of a device is used, verify that
the same device name is not used by any local-device in a different cluster.
--mirror - Name or context path of the device to add as a mirror. It must be a top-level
device. Verify that the same device name is not used by any local-device in a different
cluster.
--force - Forces a rule-set with a potential conflict to be applied.
--rule-set - The rule-set applied to the device.
If the --rule-set option is omitted, a default rule-set is assigned as follows:
If the parent device has a volume, the device inherits the rule-set of the parent.
If the parent device does not have a volume, the cluster that is local to the
management server is the winner.
Note: The VPLEX system displays a message indicating which rule-set has been assigned.
For example:
VPlexcli:/clusters/cluster-2> device attach-mirror --device
/clusters/cluster-2/devices/TestDevCluster2 --mirror
TestDevCluster2Mirror
Use the device detach-mirror command to detach or remove a mirror from a distributed
device. Its arguments are:
[-d|--device] context path or name - * Name or context path of the device from which to
detach the mirror. Does not have to be a top-level device. If the device name is used, verify
that the name is unique throughout the VPLEX, including local devices on other clusters.
[-m|--mirror] context path or name - * Name or context path of the mirror to detach. Does
not have to be a top-level device. If the device name is used, verify that the name is
unique throughout the VPLEX, including local devices on other clusters.
[-s|--slot] slot number - Optional argument. Slot number of the mirror to be discarded.
Applicable only when the --discard argument is used.
[-i|--discard] - Optional argument. When specified, discards the mirror to be detached. The
data is not discarded.
[-f|--force] - Forces the mirror to be discarded. Must be used when the --discard argument
is used.
For example:
VPlexcli:/clusters/cluster-2> device detach-mirror --device
/clusters/cluster-2/devices/TestDevCluster2 --mirror /clusters/cluster-2/devices/
TestDevCluster2Mirror
Detached mirror TestDevCluster2Mirror.
Mirror TestDevCluster2Mirror is below /clusters/cluster-2/devices.
The mirror is removed from the cluster, but the distributed device is left intact and
functional.
Adding a mirror using this method expands the local device into a distributed device
without impacting host I/O.
To add a remote mirror to an exported volume:
1. Use the local-device create command to create a local device on the remote cluster.
For example:
VPlexcli:/clusters/cluster-2> local-device create --name RemoteMirrorCluster2 --geometry
raid-0 --extents extent_Symm1254_7BC_3, extent_Symm1254_7BE_3 --stripe-depth 1
2. Use the device attach-mirror command to attach the local device on the remote cluster
to the leg on local cluster used as the basis for the distributed device:
VPlexcli:/clusters/cluster-1/devices> device attach-mirror --device
/clusters/cluster-1/devices/SourceVolumeCluster1 --mirror
/clusters/cluster-2/devices/RemoteMirrorCluster2
The device on the remote cluster is added as a mirror to the exported volume on the
local cluster.
A rebuild is automatically started to synchronize the two devices.
Note: Use the rebuild status command to monitor the rebuild.
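For instance (invocation only; the output varies with the configuration and is not reproduced here):
VPlexcli:/> rebuild status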
While the rebuild is in progress, I/O on the volume being mirrored continues without
host access being affected.
Set rebuild-allowed to true to start or resume a rebuild if the mirror legs are out of
sync.
Use the ll command to display detailed information about the device, including its
rebuild-allowed setting:
VPlexcli:/distributed-storage/distributed-devices/TestDevDevice> ll
Attributes:
Name                    Value
----------------------  ----------------------
.
.
.
rebuild-allowed         true
.
.
.
To allow automatic rebuilds after a failed inter-cluster link has been restored, use the set
rebuild-allowed true command:
VPlexcli:/distributed-storage/distributed-devices/TestDevDevice> set rebuild-allowed true
To prevent automatic rebuilds after a failed inter-cluster link has been restored, use the
set rebuild-allowed false command:
VPlexcli:/distributed-storage/distributed-devices/TestDevDevice> set rebuild-allowed false
By default, the cluster local to the management server used to create a distributed device
is the winner in the event of a network outage.
Rule-sets determine which cluster wins when a network outage occurs.
Use the set auto-resume command to determine what happens at the losing cluster after
the link is restored.
Use the device resume-link-up command to manually resume I/O on the losing cluster
when the auto-resume flag is false.
Use the device resume-link-down command to manually resume I/O on a suspended
volume while the link is down.
For example:
VPlexcli:/> cd distributed-storage/distributed-devices/TestDevDevice
VPlexcli:/distributed-storage/distributed-devices/TestDevDevice> set auto-resume true
VPlexcli:/distributed-storage/distributed-devices/TestDevDevice> ll
Attributes:
Name                    Value
----------------------  ----------------------
application-consistent  false
auto-resume             true
.
.
.
Be careful not to introduce a conflicted detach by allowing both legs of distributed
devices to independently resume I/O.
The syntax for the command is:
device resume-link-down {--all-at-island|--cluster <context-path>|--devices
<context-path,context-path>} --force
--all-at-island - Resumes I/O on all devices on the chosen winning cluster and the clusters
with which it is in communication.
--cluster - Context path to a cluster. Resumes I/O on the specified cluster and the clusters
it is in communication with during a link outage. Necessary only when the --all-at-island
flag or distributed devices are specified. Not required for local devices with global
visibility.
--devices - List of context paths to one or more top-level devices, separated by commas.
Resumes I/O for the specified devices.
--force - Forces I/O to resume.
For example:
VPlexcli:/distributed-storage/distributed-devices> device resume-link-down --devices DD_5d
--force
[Figure: A virtual volume on a RAID-1 device with two mirror legs. Writes must complete to both mirror legs and their storage volumes (Storage Volume 1 on Array 1 and Storage Volume 2 on Array 2) before they are acknowledged.]
Storage volumes may experience performance degradation due to issues on their back-end
I/O paths, and exhibit this condition in the form of timeouts. When the number of timeouts
exceeds the acceptable threshold during a given time frame, the storage volume is
considered degraded. If such a storage volume supports a mirror leg in a RAID-1 device, it
brings down the overall RAID-1 performance. In such situations, VPLEX automatically
prevents I/O to the poorly performing mirror leg in order to improve the overall RAID-1
performance. This mechanism is called auto mirror isolation. The performance of a RAID-1
device is improved without causing data unavailability; that is, the last up-to-date leg of a
RAID-1 device is never isolated. Therefore, access to data is never interrupted by
isolation, even if the last leg is based upon a degraded storage volume. While auto mirror
isolation improves RAID-1 performance, it impacts active mirroring and interrupts
migration.
Note: Mirror isolation does not impact replication in the case of local RAID-1 devices.
However, for distributed volumes, if a mirror leg is isolated on the replicating site,
replication is disrupted as no I/Os are being sent to the site.
Geometries supported
Mirror isolation applies to mirror legs of local RAID-1 and distributed RAID-1 devices.
The following conditions may indicate degraded storage volumes:
Multiple degraded storage volumes on the same array, indicating a possible array
issue
Multiple degraded storage volumes as seen from multiple directors on multiple arrays,
which might point to a back-end SAN issue
Multiple degraded storage volumes as seen from a single director, which might point
to a faulty Initiator-Target connection on that director
[Figure: Mirror isolation. After the mirror leg supported by the degraded storage volume (Storage Volume 2 on Array 2) is isolated, I/Os complete only on mirror leg 1 and Storage Volume 1 on Array 1.]
Mirror isolation
VPLEX periodically (every minute) checks whether any storage volumes have been marked
degraded. If VPLEX detects degraded storage volumes that support a RAID-1 mirror, VPLEX
isolates the mirror without causing data unavailability.
Figure 4 shows the mirror isolation process.
[Figure 4: Mirror isolation process. VPLEX checks whether degraded storage volumes are present and whether they support a RAID-1 mirror; if either check fails, the process stops.]
Mirror un-isolation
VPLEX provides the ability to automatically or manually un-isolate a mirror leg.
[Figure: Mirror un-isolation process. VPLEX checks whether the storage volume supporting the isolated mirror is healthy; if it is not, the process stops.]
To enable auto mirror isolation, use the device mirror-isolation enable command.
To disable auto mirror isolation, use the device mirror-isolation disable command.
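For example (invocations only, shown without optional arguments):
VPlexcli:/> device mirror-isolation enable
VPlexcli:/> device mirror-isolation disable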
The VPLEX CLI Guide provides information on the commands and their usage.
CHAPTER 5
Provisioning Storage
This chapter describes the following topics:
Provisioning Overview.............................................................................................
About VPLEX integrated storage provisioning ..........................................................
Provisioning storage using VIAS ..............................................................................
Provisioning storage using EZ provisioning ..............................................................
Provisioning storage using advanced provisioning...................................................
Provisioning Overview
To begin using VPLEX, you must provision storage so that hosts can access that storage.
There are three ways to provision storage on VPLEX:
VPLEX integrated array services (VIAS) provisioning
EZ provisioning
Advanced provisioning
Note: EMC recommends using the VPLEX Unisphere GUI to provision storage.
For information on the supported SMI-S provider version and the array profile number, see
EMC Simplified Support Matrix for VPLEX.
The Unisphere for VPLEX Online Help provides more information on using the GUI.
The VPLEX CLI Guide provides more information on the commands and their usage.
If the array management provider's IP address changes, you must unregister the array
management provider and register it again.
Pool listing
The array management provider does not list all the pools from an array. Only the pools
that are unrestricted and are capable of provisioning LUNs are listed. For example,
primordial pools and restricted pools are not listed because they cannot have LUNs
directly provisioned from them.
Note: Listing pools is an expensive operation on the SMI-S provider. While a provisioning
request is in progress, limit the number of times you list pools. Provisioning requests can
be synchronous and can delay the response time for listing pools.
The Unisphere for VPLEX Online Help provides information on creating virtual volumes
using the GUI.
Provisioning jobs
Provisioning requests are executed as jobs. Because provisioning takes time to complete,
provisioning requests are executed in an asynchronous manner. When a provisioning
request is executed in the CLI, a job ID is returned. The provisioning job and its status are
tracked by the job ID. The GUI has a provisioning jobs page that displays the status of a
provisioning request. A job can be in progress, completed, or failed. Regardless of the
number of LUNs provisioned, the rediscovery of the array happens only once per array
after all the LUNs have been provisioned and exposed to VPLEX.
Note: During a provisioning request, it might take several minutes for the job ID to be
returned. In the GUI, the provisioning wizard displays an in-progress icon until the job ID
is ready. The same applies to the CLI; it might take several minutes for the job ID to be
displayed.
Note: Provisioning requests time out after 120 minutes of inactivity.
Note: Jobs are not persisted. If the SMS fails or is rebooted, the jobs and their status are
not restored. Job status is also not restored during an SMS upgrade or a restart of the
VPLEX CLI or the VPLEX management console.
Note: Jobs are read-only and cannot be started or stopped. Any job, regardless of its
state, is removed after 48 hours.
Table 7 describes the provisioning job status.

Table 7 Provisioning job status

Operation          In progress                        Completed                                   Failure
-----------------  ---------------------------------  ------------------------------------------  -------
Pre-check          -                                  -                                           -
Provision volumes  Provisioning X storage volume(s)   Successfully provisioned X storage          -
                   on the storage array(s)            volume(s) on the storage array(s)
Rediscover arrays  Rediscovering newly exposed        Successfully rediscovered the newly         -
                   volume(s) on the storage array(s)  exposed volume(s) on the storage array(s)
Encapsulate        Creating X virtual volume(s)       Successfully created X virtual volume(s)    -
Note: De-provisioning from the array can take several minutes to complete.
The result of the storage-volume unclaim operation depends on how the storage volume is
provisioned. When a VPLEX integrated array service is used to provision a virtual volume, a
new attribute named vias-based is used to determine whether the storage volume is
created from a pool. The vias-based attribute is set to true for VIAS based provisioning.
When the storage-volume unclaim command is executed with a mix of storage volumes,
the result is different for VPLEX integrated array service based storage volumes. For
instance, a VPLEX integrated array service based storage volume is removed from VPLEX,
but other types of storage volumes are only marked as unclaimed and not removed. By
default, the storage volume unclaim command only removes the storage from VPLEX view.
It does not delete the LUN on the back end array.
Note: When the storage-volume unclaim command is executed with the -r option, the CLI
operation might take a while to complete, because the storage-volume unclaim -r
command waits for the delete action on the array to complete.
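A sketch of such an invocation follows (the volume name is illustrative, and the option used to pass the volume is an assumption; only the -r flag is taken from the description above):
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes> storage-volume unclaim -r
--storage-volumes vias_volume_1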
Additional documentation
See the VPLEX Procedure Generator for information on array configuration and best
practices; and for provisioning failure and troubleshooting information.
See the Unisphere VPLEX Online Help for information on using the GUI to provision
storage.
See the VPLEX CLI Guide for information on provisioning related commands and their
usage.
See the EMC Simplified Support Matrix for VPLEX for information on the supported
arrays, and AMPs.
CHAPTER 6
Volume expansion
This chapter describes the following topics:
Overview................................................................................................................. 87
Determine volume expansion-method..................................................................... 87
Expand the virtual volume....................................................................................... 89
Overview
A VPLEX virtual volume is created on a device or a distributed device, and is presented to a
host through a storage view. For a number of reasons, you may want to expand the
capacity of a virtual volume.
If the volume supports expansion, VPLEX detects the capacity gained by expansion.
Then, you determine the available expansion method: either storage-volume (the
preferred method) or concatenation (RAID-C expansion). VPLEX can also detect the
available expansion method.
Not all virtual volumes can be expanded. See Determine volume expansion-method on
page 87 for more details.
You perform volume expansion using a simple, non-disruptive procedure:
1. Expand the storage volume associated with the virtual volume on the underlying
storage array.
2. Allow VPLEX to rediscover the underlying storage array.
3. Expand the virtual volume using the CLI or GUI.
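As a sketch of step 3 using the CLI (the volume name is illustrative; see the VPLEX CLI Guide for the full virtual-volume expand syntax):
VPlexcli:/> virtual-volume expand --virtual-volume TestDevice_vol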
Additional documentation
Unisphere for VPLEX Online Help - Use the GUI to expand the virtual-volume.
Determine volume expansion-method
The expansion-method attribute indicates which method VPLEX can use to expand a virtual
volume:
storage-volume (the preferred method) - The virtual volume is expanded using the capacity
gained by expanding the underlying storage volumes.
concatenation (or RAID-C expansion) - The virtual volume is expanded by adding only
specified extents or devices, as required.
not supported - VPLEX cannot expand the virtual volume because the volume does not
meet one or more prerequisites. See Limitations on page 93 for details.
Note the expansion-method attribute value storage-volume, indicating VPLEX will use the
storage-volume method, by default, to expand this virtual volume.
For information about other attributes and how to use them as you expand your virtual
volume, see Expand the virtual volume on page 89.
Overview
The storage volume method of expansion supports simple, fast expansion on a variety of
device geometries. Three of the most common are described here.
In the 1:1 virtual volume to storage volume geometry, the virtual volume is built on a
single extent. The extent is built on a single storage volume.
Dual-legged RAID-1
In the dual-legged RAID-1 geometry, the virtual volume is built on a RAID-1 device with two
mirror legs, each built on an extent that is mapped 1:1 to a storage volume.
Distributed RAID-1
This geometry is similar to the dual-legged RAID-1, but uses a distributed RAID-1 device
(DR1) rather than a local RAID-1 device. DR1 devices have physical volumes at both
clusters in a VPLEX Metro or VPLEX Geo configuration for simultaneous active/active
read/write access using AccessAnywhere.
The virtual volume is a multi-legged RAID-1 or RAID-0 volume, and each of its smallest
extents is mapped 1:1 to a back-end storage volume.
The virtual volume is a RAID-C (expansion through the last in-use extent or any
following extent, and only through extents 1:1 mapped to storage volumes).
Note: Storage volumes that are not mapped 1:1 onto extents cannot have the virtual
volumes built on them expanded using this command. To expand a virtual volume whose
underlying storage-volume is not mapped 1:1 onto extents, perform an extent migration to
an extent that is 1:1 mapped to a storage volume. Alternatively, migrate to a larger device
and use the virtual-volume expand CLI command to expand the volume to the extra
available capacity.
Expand the virtual volume using the storage volume expansion method.
Volume Expansion
Perform volume expansion using one of the following techniques:
The virtual-volume expand CLI command. Refer to the EMC VPLEX CLI Guide for
detailed information about this command.
Expand a virtual volume using the VPLEX GUI. Refer to the Unisphere for VPLEX Online
Help for complete steps.
Refer to the VPLEX Procedure Generator for procedures to expand a distributed virtual
volume using GeoSynchrony.
During volume expansion, using the storage-volume method, keep the following in mind:
Performing a major host operation (such as a LIP reset) in order to detect a change in
volume size presents risk to volumes accessed by the host. Avoid such resource-intensive
operations during volume expansion.
Expansion initialization traffic occurs on disk areas not performing host I/O. In
addition, the amount of time taken to initialize the newly added capacity depends on
the performance of the array hosting the storage volumes. Initialization is still expected
to be faster than rebuilding a volume.
Across distributed RAID-1 devices, the initialization process does not consume WAN
data bandwidth as each cluster performs its initialization locally.
On RAID-1 and distributed RAID-1 devices, VPLEX ensures that all RAID-1 legs have
consistent information on the expanded space.
The newly expanded virtual volume capacity is available for use by hosts when the
initialization process has finished.
If VPLEX has claimed the storage volumes as thinly provisioned, the initialization
process will not affect the underlying provisioning of the additional capacity reported
to VPLEX.
Limitations
The following are general limitations for expanding virtual volumes.
Some virtual volumes cannot be expanded under specific circumstances or at all. Volumes
cannot be expanded if any of the following conditions are true:
For virtual volumes built on RAID-1 or distributed RAID-1 devices, a maximum of 1000
initialization processes can run concurrently per cluster. If this limit is reached on a
cluster, then no new expansions can be started on virtual volumes with these
geometries until some of the previously started initialization processes finish on that
cluster.
Virtual volumes not containing RAID-1 or distributed RAID-1 devices are not affected by
this limitation.
Array re-discoveries may consume excessive resources and can be disruptive to I/O.
Re-discover arrays only when necessary.
Best Practice
Before selecting extents and local devices to expand a virtual volume using
concatenation, ensure the following:
CHAPTER 7
Data migration
This chapter describes the following topics:
One-time migrations - Begin an extent or device migration immediately when the dm
migration start command is used.
Batch migrations - Are run as batch jobs using re-usable migration plan files. Multiple
device or extent migrations can be executed using a single command.
Extent migrations - Extent migrations move data between extents in the same cluster.
Use extent migrations to:
Move extents from a hot storage volume shared by other busy extents
Defragment a storage volume to create more contiguous free space
Perform migrations where the source and target have the same number of volumes
with identical capacities
Device migrations - Devices are RAID 0, RAID 1, or RAID C devices built on extents or on
other devices.
Device migrations move data between devices on the same cluster or between devices
on different clusters. Use device migrations to:
Migrate data between dissimilar arrays
Relocate a hot volume to a faster array
Relocate devices to new arrays in a different cluster
Limitations
Devices must be removed from consistency groups before they can be migrated.
Batch migrations
Batch migrations migrate multiple extents or devices. Create batch migrations to automate
routine tasks:
Use batched extent migrations to migrate arrays within the same cluster where the
source and destination have the same number of LUNs and identical capacities.
Use batched device migrations to migrate to dissimilar arrays (the user must configure
the destination's capacities to match the capacity and tier of the source array), and
to migrate devices between clusters in a VPLEX Metro or VPLEX Geo.
Up to 25 local and 25 distributed migrations can be in progress at the same time. Any
migrations beyond those limits are queued until an existing migration completes.
Limitations
Devices must be removed from consistency groups before they can be migrated.
General procedure to perform data migration
Use the following general steps to perform extent and device migrations:
1. Create and check a migration plan (batch migrations only).
2. Start the migration.
3. Monitor the migration's progress.
4. Pause, resume, or cancel the migration (optional).
5. Commit the migration. Commit transfers the source virtual volume/device/extent to
the target.
If the virtual volume on top of a device has a system-assigned default name,
committing a device migration renames the virtual volume after the target device.
6. Clean up (optional).
For extent migrations: dismantle the source devices or destroy the source extent and
unclaim its storage-volume.
7. Remove the record of the migration.
Device migrations are not recommended between clusters. All device migrations are
synchronous. If there is I/O to the devices being migrated, and latency to the target
cluster is equal to or greater than 5ms, significant performance degradation may occur.
About rebuilds
Rebuilds synchronize data from a source drive to a target drive. When differences arise
between legs of a RAID, a rebuild updates the out-of-date leg.
There are two types of rebuild behavior:
A full rebuild copies the entire contents of the source to the target.
A logging rebuild copies only changed blocks from the source to the target.
Local mirrors are updated using a full rebuild (local devices do not use logging volumes).
In VPLEX Metro and Geo configurations, all distributed devices have an associated logging
volume. Logging volumes keep track of blocks written during an inter-cluster link outage.
After a link or leg is restored, the VPLEX system uses the information in logging volumes to
synchronize mirrors by sending only changed blocks across the link.
Logging volume rebuilds also occur when a leg of a distributed RAID 1 (DR1) becomes
unreachable, but recovers quickly.
If a logging volume is unavailable at the time that a leg is scheduled to be marked
out-of-date (via the log), the leg is marked as fully out-of-date, causing a full rebuild.
The unavailability of a logging volume matters both at the time of recovery (when the
system reads the logging volume) and at the time that a write failed on one leg and
succeeded on another (when the system begins writes to the logging volume).
If no logging volume is available, an inter-cluster link restoration will cause a full rebuild
of every distributed device to which there were writes while the link was down.
See Logging volumes on page 42.
Thin provisioning allows storage to migrate onto thinly provisioned storage volumes
while allocating the minimal amount of thin storage pool capacity.
Thinly provisioned storage volumes can be incorporated into RAID 1 mirrors with similar
consumption of thin storage pool capacity.
VPLEX preserves the unallocated thin pool space of the target storage volume by detecting
zeroed data content before writing, and suppressing the write for cases where it would
cause an unnecessary allocation. VPLEX requires the user to specify thin provisioning for
each back-end storage volume. If a storage volume is thinly provisioned, the "thin-rebuild"
attribute must be set to "true" either during or after claiming.
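For example, setting the attribute after claiming, from the storage volume's context (the volume name is illustrative):
VPlexcli:/clusters/cluster-1/storage-elements/storage-volumes/Symm1254_7BF_1> set thin-rebuild true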
If a thinly provisioned storage volume contains non-zero data before being connected to
VPLEX, the performance of the migration or initial RAID 1 rebuild is adversely affected.
System volumes are supported on thinly provisioned LUNs, but thin storage pool resources
must be available for these volumes at their maximum capacity. System volumes must not
compete for this space with user-data volumes in the same pool. If the thin storage
allocation pool runs out of space and this is the last redundant leg of the RAID 1, further
writing to a thinly provisioned device causes the volume to lose access to the device,
resulting in data unavailability.
Performance
considerations
To improve overall VPLEX performance, disable automatic rebuilds or modify the rebuild
transfer size:
Disable automatic rebuilds to avoid a flood of activity when re-attaching two clusters.
See Display/enable/disable automatic device rebuilds on page 66
Disabling automatic rebuilds prevents DR1s from synchronizing. Child devices will be out
of date, increasing the likelihood of remote reads.
Modify the rebuild transfer size. See About transfer-size on page 107.
Start a one-time device or extent data migration
Use the dm migration start command to start a migration.
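A representative invocation follows (the migration, source, and target names are illustrative; see the VPLEX CLI Guide for the full syntax):
VPlexcli:/data-migrations/device-migrations> dm migration start --name migrate_012
--from device_012 --to device_012a --transfer-size 12M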
Setting too large a transfer size may result in data unavailability. Only vary from the
default when performance implications are fully understood.
If host I/O activity is high, setting a large transfer size may impact host I/O.
See About transfer-size on page 107.
Monitor a migration's progress
The following fields describe a migration's status:
Field            Description
---------------  -----------------------------------------------------------------
from-cluster     Cluster on which the migration source resides.
percentage-done  Percentage of the migration that has completed.
source           Source device or extent of the migration.
source-exported  Whether the source device was exported during the migration.
                 Applicable if the migration is an inter-cluster device migration
                 and the device was not already exported. Devices are exported to
                 a remote cluster in order to be visible at that cluster and can be
                 used as a leg in a temporary distributed RAID 1 during the
                 migration.
                 false - Source device was not exported.
                 true - Source device was exported.
start-time       Time at which the migration was started.
status           Current status of the migration.
target           Target device or extent of the migration.
target-exported  Whether the target device was exported during the migration.
to-cluster       Cluster on which the migration target resides.
transfer-size    Size of the region in cache used to service the migration.
                 40 KB - 128 MB.
type             Type of rebuild.
                 full - Copies the entire contents of the source to the target.
                 logging - Copies only changed blocks from the source to the target.
Pause/resume a migration (optional)
Active migrations (a migration that has been started) can be paused and then resumed at
a later time.
Pause an active migration to release bandwidth for host I/O during periods of peak traffic.
Use the dm migration pause --migrations migration-name command to pause a migration.
Specify the migration-name by name if that name is unique in the global namespace.
Otherwise, specify a full pathname.
For example:
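(The migration name below is illustrative; the resume form is assumed to mirror the pause form described above.)
VPlexcli:/data-migrations/device-migrations> dm migration pause --migrations migrate_012
VPlexcli:/data-migrations/device-migrations> dm migration resume --migrations migrate_012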
Cancel a migration (optional)
A migration can be canceled if:
The migration is in progress or paused. The migration is stopped, and any resources it
was using are freed.
The migration has not been committed. The source and target devices or extents are
returned to their pre-migration state.
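A sketch of canceling a migration (the migration name is illustrative; the --force and --migrations arguments are assumed to follow the same pattern as the commit and clean commands described below):
VPlexcli:/data-migrations/device-migrations> dm migration cancel --force --migrations migrate_012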
Commit a completed migration
The migration process inserts a temporary RAID 1 structure above the source
device/extent with the target device/extent as an out-of-date leg of the RAID 1. The
migration can be understood as the synchronization of the out-of-date leg (the target).
After the migration is complete, the commit step detaches the source leg of the RAID 1,
and removes the RAID 1.
The virtual volume, device, or extent is identical to the one before the migration except
that the source device/extent is replaced with the target device/extent.
A migration must be committed in order to be cleaned.
Verify that the migration has completed successfully before committing the migration.
Use the dm migration commit --force --migrations migration-name command to commit a
migration.
Note: You must use the --force flag to commit a migration.
For example:
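(The migration name below is illustrative.)
VPlexcli:/data-migrations/device-migrations> dm migration commit --force --migrations migrate_012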
Clean a migration
Device migrations
For device migrations, cleaning dismantles the source device down to its storage volumes.
The storage volumes no longer in use are unclaimed.
For device migrations only, use the --rename-target argument to rename the target device
after the source device. If the target device is renamed, the virtual volume on top of it is
also renamed if the virtual volume has a system-assigned default name.
Without renaming, the target devices retain their target names, which can make the
relationship between volume and device less evident.
Extent migrations
For extent migrations, cleaning destroys the source extent and unclaims the underlying
storage-volume if there are no extents on it.
Use the dm migration clean --force --migrations migration-name command to clean a
migration.
Specify the migration-name by name if that name is unique in the global namespace.
Otherwise, specify a full pathname.
For example:
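(The migration name below is illustrative.)
VPlexcli:/data-migrations/device-migrations> dm migration clean --force --migrations migrate_012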
Remove migration records
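As a sketch, assuming the dm migration remove command takes the same --force and --migrations arguments as the other dm migration commands described above (the migration name is illustrative):
VPlexcli:/data-migrations/device-migrations> dm migration remove --force --migrations migrate_012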
Batch migrations
Batch migrations are run as batch jobs from reusable batch migration plan files. Migration
plan files are created using the create-plan command.
A single batch migration plan can be either for devices or extents, but not both.
Retire storage arrays (off-lease arrays) and bring new ones online
The steps to perform a batch migration are generally the same as those described in the
General procedure to perform data migration on page 98.
There are two additional steps to prepare for a batch migration:
Create a batch migration plan file (using the batch-migrate create-plan command)
Test the batch migration plan file (using the batch-migrate check-plan command)
Prerequisites
The source and targets are both devices or extents. Migrations between devices and
extents are not supported.
The structure of the target is the same as the structure of the source.
For extent migrations, both source and target extents must be in the same cluster.
Create a batch migration plan
The batch-migrate create-plan command creates a migration plan using the specified
sources and targets. The syntax for the command is:
batch-migrate create-plan --file migration-filename --sources sources --targets targets
--force
--file - Specify the migration-filename filename only if that name is unique in the
global namespace. Otherwise, specify a full pathname.
--force - If a plan file with the same name already exists, forces the old plan to be
overwritten.
The batch-migrate check-plan command verifies the following:
Capacity of the target extent is equal to or larger than the source extent's capacity
Device migrations:
Target device has no volumes on it
Source device has volumes on it
Extent migrations:
Target extent is claimed and ready for use
Source extent is in use
If the migration plan contains errors, a description of the errors is displayed, and the plan
check fails. For example:
VPlexcli:/> batch-migrate check-plan --file MigDev-test.txt
Checking migration plan file /var/log/VPlex/cli/MigDev-test.txt.
Target device '/clusters/cluster-2/devices/dev1723_61C' has a volume.
Target device '/clusters/cluster-2/devices/dev1723_618' has a volume.
Plan-check failed, 2 problems.
Use the steps described in Modify a batch migration file on page 106 to correct the plan.
Repeat the process of check and modify until the batch migration plan passes the plan
check. For example:
VPlexcli:/> batch-migrate check-plan --file migrate.txt
Checking migration plan file /temp/migration_plans/migrate.txt.
Plan-check passed.
Use the batch-migrate create-plan command, specify the same filename, and use the
--force option to overwrite the old plan with the new one.
Note: To add comments to the migration plan file, add lines beginning with /.
For VPLEX Metro configurations with narrow inter-cluster bandwidth, set the transfer
size lower so the migration does not impact inter-cluster I/O.
The region specified by transfer-size is locked during migration. Host I/O to or from
that region is held. Set a smaller transfer-size during periods of high host I/O.
Use the batch-migrate start --transfer-size [40K-128M] --file filename command to start the
specified batch migration:
For example:
VPlexcli:/> batch-migrate start --file migrate.txt --transfer-size 2M
Started 4 of 4 migrations.
Use the batch-migrate resume --file filename command to resume the specified paused
migration. For example:
VPlexcli:/data-migrations/device-migrations> batch-migrate resume
--file migrate.txt
Note: In order to re-run a canceled migration plan, use the batch-migrate remove --file
filename command to remove the records of the migration. See Remove batch migration
records on page 111.
For example:
VPlexcli:/data-migrations/device-migrations> batch-migrate summary --file migrate.txt --verbose
source                   source-site  target                  target-cluster  migration-name  status       percentage-complete  eta
-----------------------  -----------  ----------------------  --------------  --------------  -----------  -------------------  -------
R20061115_Symm2264_010   1            R20070107_Symm2A10_1B0  1               migrate.txt     complete     100                  -
R20061115_Symm2264_011   1            R20070107_Symm2A10_1B1  1               migrate.txt     complete     100                  -
R20061115_Symm2264_012   1            R20070107_Symm2A10_1B2  1               migrate.txt     complete     100                  -
R20061115_Symm2264_0113  1            R20070107_Symm2A10_1B3  1               migrate.txt     in-progress  27                   4.08min

Processed 4 migrations:
  committed:   0
  complete:    3
  in-progress: 1
  paused:      0
  error:       0
  cancelled:   0
  no-record:   0
Field        Description
-----------  --------------------------------------------------------------------------------
Processed    Of the number of source-target pairs specified in the batch migration plan, the
             number that have been processed.
committed    Of the number of source-target pairs that have been processed, the number that
             have been committed.
complete     Of the number of source-target pairs that have been processed, the number that
             are complete.
in-progress  Of the number of source-target pairs that have been processed, the number that
             are in progress.
paused       Of the number of source-target pairs that have been processed, the number that
             are paused.
error        Of the number of source-target pairs that have been processed, the number that
             are in error.
cancelled    Of the number of source-target pairs that have been processed, the number that
             have been cancelled.
no-record    Of the number of source-target pairs that have been processed, the number that
             have no record in the context tree.
Note: If more than 25 migrations are active at the same time, they are queued, their status
is displayed as in-progress, and percentage-complete is displayed as ?.
Commit a batch migration
Commit permanently removes the volumes from the source devices.
For example:
VPlexcli:/> batch-migrate commit --file migrate.txt
Clean a batch migration
Use the batch-migrate clean --file filename command to clean the specified batch
migration. This command must be run before the batch-migration record has been
removed. The command will not clean migrations that have no record in the VPlexcli
context tree.
In the following example, source devices are torn down to their storage volumes, and the
target devices and volumes are renamed after the source device names:
VPlexcli:/> batch-migrate clean --rename-targets --file migrate.txt
Using migration plan file /temp/migration_plans/migrate.txt for
cleanup phase.
0: Deleted source extent
/clusters/cluster-1/devices/R20061115_Symm2264_010, unclaimed its
disks Symm2264_010
1: Deleted source extent
/clusters/cluster-1/extents/R20061115_Symm2264_011, unclaimed its
disks Symm2264_011
2: Deleted source extent
/clusters/cluster-1/extents/R20061115_Symm2264_012, unclaimed its
disks Symm2264_012
3: Deleted source extent
/clusters/cluster-1/extents/R20061115_Symm2264_013, unclaimed its
disks Symm2264_013
Remove batch migration records
Use the batch-migrate remove --file filename command to remove the records of a batch
migration. For example:
VPlexcli:/> batch-migrate remove /data-migrations/device-migrations --file migrate.txt
CHAPTER 8
Configure the Network
The two WAN ports on each VPLEX director support dual Gigabit Ethernet inter-cluster
links. The WAN ports are configured as part of the installation of a second cluster. This
chapter describes the CLI contexts and procedures to change the configuration created
during installation.
VS1 - The Wide Area Network (WAN) communication interface card (SLiC) has four 1
Gigabit Ethernet (GbE) ports. Only two ports are used for VPLEX WAN communications.
The ports are named GE00 and GE01.
VS2 - WAN SLiC has two 10 Gigabit Ethernet (10 GbE) ports.
The ports are named XG00 and XG01.
Data carried on WAN ports on both VS1 and VS2 directors and between clusters in VPLEX
Metro and Geo configurations is not encrypted.
To prevent DNS attacks, the WAN ports should be routed only on secure and trusted
networks.
Refer to the EMC Simple Support Matrix (ESSM) for information about encryption devices
supported in VPLEX configurations.
Port groups
All ports named GE00 or XG00 (in a cluster) are collectively referred to as port-group 0.
All ports named GE01 or XG01 (in a cluster) are collectively referred to as port-group 1.
Note: Port group names (port-group-0 and port-group-1) cannot be modified.
The two WAN ports on a director should be on different physical networks, and must
be on different subnets so that port GE00/XG00 (port group 0) cannot see port
GE01/XG01 (port group 1) on any director.
All port GE00/XG00s in the cluster (one from each director) must be in the same
subnet and connected to the same LAN. Ports in the same subnet are usually
connected to the same Ethernet switch.
All port GE01/XG01s must be in one subnet, which cannot be the same subnet used
for ports GE00/XG00.
Each director must have 2 statically assigned IP addresses; one in each subnet.
Each cluster must have an additional statically assigned IP address in each subnet
(cluster IP address). This address is used during discovery. The cluster IP address is
not tied to a specific physical port. Any director may host the cluster IP address.
The management port subnet cannot be the same as either subnet used for the WAN
ports.
Sub-contexts
The /clusters/cluster/cluster-connectivity context has three sub-contexts:
subnets context
port-groups context
option-sets context
CLI contexts
The parent context for configuring Ethernet and WAN connections is:
/clusters/cluster/cluster-connectivity
The /clusters/cluster/cluster-connectivity context includes the following addresses:
discovery-address - The multicast address local directors use to discover the cluster.
Note: Multicast must be enabled on the local switch connecting the directors Ethernet
ports.
discovery-port - The port local directors use (along with the discovery-address) to find
the other directors in same cluster.
listening-port - The port local directors use to communicate with the other cluster. The
listening port is used when connecting to the directors in the other cluster.
The default values for these three addresses should not be changed. They are used by
the local directors to discover each other within the cluster.
IMPORTANT
The listening port must be open through any firewalls between clusters.
Use the set command to change the three addresses.
Use the set command with no arguments to display the allowed values for the three
addresses:
VPlexcli:/clusters/cluster-1/cluster-connectivity> set
attribute                 input-description
------------------------  ---------------------------------------------
discovery-address         Takes w.x.y.z where w,x,y,z are in [0..255],
                          e.g. 10.0.1.125.
discovery-port            Takes an integer between 1024 and 32767.
listening-port            Takes an integer between 1024 and 32767.
name                      Read-only.
remote-cluster-addresses  Read-only.
The remote-cluster-addresses are the cluster addresses assigned to the other cluster: the
one in port-group-0 and the one in port-group-1. There are exactly two.
To change a remote address, you must first clear the remote address.
Use the remote-clusters clear-addresses and remote-clusters add-addresses commands
to add or clear entries in this list.
For example, to change address 42.29.20.214 to 42.29.20.254:
VPlexcli:/clusters/cluster-1/cluster-connectivity> remote-clusters clear-addresses
--remote-cluster cluster-2 --addresses 42.29.20.214:11000
VPlexcli:/clusters/cluster-1/cluster-connectivity> remote-clusters add-addresses
--remote-cluster cluster-2 --addresses 42.29.20.254:11000
Alternatively, use the --default argument to create a default list of reachable IP addresses
(using the cluster-address attribute of the active subnets of remote clusters) for all remote
clusters.
VPlexcli:/clusters/cluster-1/cluster-connectivity> remote-clusters add-addresses --default
Default values are determined by the cluster-address attribute of the active subnets from
all remote clusters. For example:
remote-cluster-addresses  cluster-2  [192.168.91.252:11000, 192.168.101.252:11000]
subnets context
A subnet is a logical subdivision of an IP network. VPLEX IP addresses are logically divided
into two fields: a network prefix and a host identifier.
On a VPLEX, the prefix attribute is really a prefix and subnet mask, specified as an IP
address and subnet mask in integer dot notation, separated by a colon.
For example: 192.168.20.0:255.255.255.0
IMPORTANT
VPLEX subnet addresses must be consistent, that is the cluster address and the gateway
address must be in the subnet specified by the prefix.
If a change is made to the subnet, the change is validated and applied to all ports using
this subnet.
When re-configuring a port-group, there are multiple values that must be consistent with
each other. It may be necessary to clear or erase some attribute values before others can
be changed.
VPlexcli:/clusters/cluster-1/cluster-connectivity> cd subnets/
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets> ll
Name
--------------
cluster-1-SN00
cluster-1-SN01
default-subnet
Use the following 4 CLI commands to create, modify, and delete subnets:
subnet clear
subnet create
subnet destroy
subnet modify
port-groups context
Ports named GE00/XG00 on each cluster are collectively referred to as port-group-0. There
are two port-group-0s, one in each cluster. The port-group-0s on each cluster form one
communication channel between the clusters.
Ports named GE01/XG01 on each cluster are collectively referred to as port-group-1. There
are two port-group-1s, one in each cluster. The port-group-1s on each cluster form a
second communication channel between the clusters.
The number of ports in each port-group varies depending on the number of engines in
each cluster.
In the following example, a VPLEX Geo configuration has 1 engine in each cluster:
VPlexcli:/clusters/cluster-1/cluster-connectivity> cd port-groups/
VPlexcli:/clusters/cluster-1/cluster-connectivity/port-groups> ll
Name          Subnet          Option Set       Enabled      Member Ports
------------  --------------  ---------------  -----------  ------------------------------------------
port-group-0  cluster-1-SN00  optionset-com-0  all-enabled  engine-1-1|A2-XG00|192.168.11.140|enabled,
                                                            engine-1-1|B2-XG00|192.168.11.142|enabled
port-group-1  cluster-1-SN01  optionset-com-1  all-enabled  engine-1-1|B2-XG01|192.168.12.142|enabled
member-ports - A read-only list of ports that are part of this port-group, including their
subnet, option-set, address and owner engine and director.
option-sets context
Option-sets group connection properties so that they can be collectively applied to the
ports contained in a port-group.
Option sets include the following properties:
keepalive-timeout - The time in seconds to keep a connection open while it's idle.
Default: 10 seconds.
Range: 5 - 20 seconds.
See Optimize performance by tuning socket buffer size on page 118 for more
information.
Consult EMC Customer Support before changing the connection-open-timeout and/or
keepalive-timeout.
Use the set command to modify one or more properties of an option set:
VPlexcli:/clusters/cluster-1/cluster-connectivity/option-sets/optionset-com-1> set connection-open-timeout 3s
VPlexcli:/clusters/cluster-1/cluster-connectivity/option-sets/optionset-com-1> set keepalive-timeout 10s
VPlexcli:/clusters/cluster-1/cluster-connectivity/option-sets/optionset-com-1> set socket-buf-size 10M
VPlexcli:/clusters/cluster-1/cluster-connectivity/option-sets/optionset-com-1> ll
Name                     Value
-----------------------  -----
connection-open-timeout  3s
keepalive-timeout        10s
socket-buf-size          10M
[Table: Recommended socket-buf-size values by VPLEX configuration for MTU 1500 and MTU 9000; recommended values range from 1 MB to 20 MB. See Optimize performance by tuning socket buffer size on page 118.]
An inter-cluster link outage will occur if a port-group is disabled when there are missing
connections through the other port-group.
1. Verify connectivity.
Use the connectivity validate-wan-com command to verify that all directors have complete
connectivity through the other port group.
VPlexcli:/> connectivity validate-wan-com
connectivity: FULL
port-group-1 - OK - All expected connectivity is present.
port-group-0 - OK - All expected connectivity is present.
Required argument:
[-s|--subnet] subnet - Context path of the subnet configuration to modify.
Optional arguments:
[-a|--cluster-address] address - The public address of the cluster to which this subnet
belongs.
[-g|--gateway] IP address - The gateway address for this subnet.
[-p|--prefix] prefix - The prefix/subnet mask for this subnet. Specified as an IP address
and subnet mask in integer dot notation, separated by a colon. For example,
192.168.20.0:255.255.255.0
To modify the subnet's public IPv4 address:
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets/cluster-1-SN01> subnet modify
--subnet cluster-1-SN01 --cluster-address 192.168.12.200
If the prefix is changed, ensure that the cluster IP address, gateway address, and port IP
addresses are all consistent with the subnet prefix.
Use the ll command to display the addresses of the ports in the target port-group:
VPlexcli:/> ll /clusters/cluster-1/cluster-connectivity/port-groups/port-group-1
/clusters/cluster-1/cluster-connectivity/port-groups/port-group-1:
Name          Value
------------  -----------------------------------------
enabled       all-enabled
member-ports  engine-1-1|A2-XG01|192.168.10.140|enabled,
              engine-1-1|B2-XG01|192.168.10.142|enabled
option-set    optionset-com-1
subnet        cluster-1-SN01
Use the set command in the target port's context to change the port's IP address:
VPlexcli:/> cd engines/engine-1-1/directors/director-1-1-A/hardware/ports/A2-XG00
VPlexcli:/engines/engine-1-1/directors/director-1-1-A/hardware/ports/A2-XG00> set address
192.168.10.140
1. Verify connectivity.
Use the connectivity validate-wan-com command to verify that all directors have complete
connectivity through the other port group.
VPlexcli:/> connectivity validate-wan-com
connectivity: FULL
port-group-1 - OK - All expected connectivity is present.
port-group-0 - OK - All expected connectivity is present.
Use the set command to disable the member ports in the target port-group:
VPlexcli:/clusters/cluster-1/cluster-connectivity/port-groups/port-group-1> set enabled
all-disabled
Use the set command to change the MTU (valid values are 96 - 9000):
The VPLEX CLI accepts MTU values lower than 96, but they are not supported. Entering a
value less than 96 prevents the port-group from operating.
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets/cluster-1-SN01> set mtu 1480
WARNING: Incompatible MTU settings on clusters. You must also set the MTU in subnet
'cluster-2-SN01' (cluster-2) to 1480. Performance will be negatively impacted by incorrect
settings.
MTU size on the VPLEX and the attached switch must be the same.
Type the ll command to verify the new MTU on VPLEX:
VPlexcli:/clusters/cluster-1/cluster-connectivity/subnets/cluster-2-SN01> ll
Name             Value
---------------  -------------
cluster-address  192.168.2.252
gateway          192.168.2.1
mtu              1480
.
.
Verify that the MTU size on VPLEX matches the MTU on attached switch.
Repeat for the second subnet.
Use the set command to enable the member ports in the target port-group:
VPlexcli:/clusters/cluster-1/cluster-connectivity/port-groups/port-group-1> set enabled
all-enabled
5. Validate connectivity.
Use the connectivity validate-wan-com command to verify that the directors have
complete connectivity through all port groups.
VPlexcli:/> connectivity validate-wan-com
connectivity: FULL
port-group-1 - OK - All expected connectivity is present.
port-group-0 - OK - All expected connectivity is present.
IPv6 defines the following address types:
Link-local address
Global address
Site-local address
Unspecified address
Loopback address
Implementing IPv6
In VPLEX, an IP address can either be an IPv4 address and/or an IPv6 address. While
VPLEX continues to support IPv4, it now also provides support for the full IPv6 stack as
well as dual stack IPv4/IPv6, including:
Browser session
VPN connection
Note: In a virtual private network, the end points must always be of the same address
family. That is, each leg in the VPN connection must either be IPv4 or IPv6.
CLI session
Cluster Witness
Recover Point
[Figure: IPv4 and IPv6 connectivity between the VPLEX management server, directors, arrays, RecoverPoint, Cluster Witness, and the VASA Provider (VPLX-000551).]
Table 12 describes IPv6 support on VPLEX components, along with additional notes.
Table 12 IPv6 support on VPLEX components

VPLEX Component    Supports  Supports  Co-existence  Notes
                   IPv4      IPv6
-----------------  --------  --------  ------------  ---------------------------------------------
Management server  Yes       Yes       Yes           The management server supports only global
                                                     scope IPv6 static address configuration. The
                                                     management server supports the coexistence
                                                     of both IPv4 and IPv6 addresses.
Director           Yes       No        No            Directors continue to support IPv4 addresses.
Cluster Witness    Yes       Yes       Yes
WAN COM            Yes       Yes       No
VASA Provider      Yes       No        No
RecoverPoint       Yes       Yes       Yes           RecoverPoint can communicate with the
                                                     management server using either an IPv4
                                                     address or an IPv6 address.
LDAP/AD server     Yes       Yes       Yes
You can access the VPLEX GUI using the following URL:
https://[mgmtserver_ipv6_addr]
VPLEX CLI
Using an SSH client (such as PuTTY), log in by specifying the IPv6 address of the
management server.
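For example, assuming the management server's standard service account:
ssh service@<mgmtserver_ipv6_addr>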
CHAPTER 9
Consistency Groups
This chapter describes the following topics and procedures:
Consistency groups aggregate up to 1000 virtual volumes into a single entity that can be
managed as easily as individual volumes.
Consistency group detach rules define on which cluster I/O continues during cluster or
inter-cluster link failures.
If all storage for an application with rollback capabilities is in a single consistency group,
the application can recover from a complete cluster failure or inter-cluster link failure with
little or no data loss. Data loss, if any, is determined by the application's data access pattern
and the consistency group's cache-mode.
All consistency groups guarantee a crash consistent image of their member
virtual-volumes. In the event of a director, cluster, or inter-cluster link failure, consistency
groups prevent possible data corruption.
Create consistency groups for sets of volumes that require the same I/O behavior during
failures.
There are two types of consistency groups:
Synchronous consistency groups
Uses write-through caching (known as synchronous cache mode in the VPLEX user
interface).
Write order fidelity is maintained by completing all writes to disk before
acknowledging the write to the host.
Figure 14 shows a synchronous consistency group that spans two clusters in a VPLEX
Metro configuration.
The hosts at both clusters write to the VPLEX distributed volumes in the consistency
group.
This guarantees that the image on the back end storage is an exact copy on both sides.
[Figure 14: Synchronous consistency group that spans two clusters in a VPLEX Metro configuration]
Local visibility - The local volumes in the consistency group are visible only to the local
cluster.
Global visibility - The local volumes in the consistency group have storage at one
cluster, but are visible to both clusters.
Local visibility
Local consistency groups with the Visibility property set to only the local cluster read and
write only to their local cluster.
Global visibility
If the local consistency group's Visibility property is set to both clusters (global
visibility), both clusters can receive I/O from the cluster that does not have a local copy.
All writes from that remote cluster pass over the inter-cluster WAN link before they are
acknowledged.
Any reads that cannot be serviced from local cache are also transferred across the link.
This allows the remote cluster to have instant on-demand access to the consistency group,
but also adds additional latency for the remote cluster.
Local consistency groups with global visibility are supported in VPLEX Metro
environments. Only local volumes can be placed into the local consistency group with
global visibility. Local consistency groups with global visibility always use write-through
cache mode (synchronous cache mode). I/O that goes to local consistency groups with
global visibility will always be synchronous.
Figure 16 shows a local consistency group with global visibility.
[Figure 16: Local consistency group with global visibility]
Asynchronous consistency groups
Uses write-back caching (known as asynchronous cache mode in the VPLEX user
interface).
Write-back caching
In asynchronous cache mode, write order fidelity is maintained by batching I/O between
clusters into packages called deltas that are exchanged between clusters.
Each delta contains a group of writes that were initiated in the same window of time. All
writes in one delta are guaranteed to be newer than all writes in the next delta.
Write order consistency is maintained on delta set boundaries, not on individual writes.
Entire deltas are exchanged between clusters and committed to disks as a logical unit.
Each asynchronous consistency group maintains its own queue of deltas in various states:
open - The delta is accepting new writes.
closed - The delta is not accepting writes. Deltas are closed when their timer expires
or they are full.
exchanging - The delta's contents are being synchronized between the clusters.
exchanged - The delta is exchanged with the remote cluster. The delta's contents
are the same at all clusters.
committing - The delta's contents are being written out to the back-end storage.
committed - The write of the delta's contents to the back-end storage is complete.
There can be only one delta in each state except closed.
There can be multiple deltas in the closed delta queue.
Before a delta is exchanged between clusters, data within the delta can vary by cluster.
After a delta is exchanged and committed, data is exactly the same on both clusters.
If access to the back end array is lost while the system is writing a delta, the data on disk
is no longer consistent and requires automatic recovery when access is restored.
Asynchronous cache mode can deliver better performance, but there is a higher risk that
data will be lost if there is an inter-cluster link partition while both clusters are actively
writing, and instead of waiting for the link to be restored, the user chooses to accept a data
rollback in order to reduce the RTO.
In Figure 17, one cluster is actively reading and writing. This simplifies the view of
asynchronous I/O. Application data is written to the director in Cluster 1 and protected in
another director of Cluster 1.
VPLEX collects writes into a delta of a fixed size. Once that delta is filled or when a set time
period (Default closeout-time) has elapsed, the two clusters of the VPLEX Geo begin a
communication process to exchange deltas.
The combination of the deltas is referred to as a global delta. In this example, the global delta only
includes the writes that occurred on Cluster 1 because Cluster 2 was inactive. This data is
then written to the back-end storage at Cluster 1 and Cluster 2.
Figure 17 shows I/O in an asynchronous consistency group when both clusters are
actively reading and writing. The applications at Cluster 1 and Cluster 2 are both writing to
their local VPLEX cluster.
Application writes are acknowledged once each cluster caches the data in two directors.
The VPLEX collects the data in deltas at each cluster. After the Default closeout-time, or
after deltas become full, the clusters exchange deltas. VPLEX then writes the resulting
delta to back end storage at each cluster.
This process coordinates data written to the storage at each cluster.
Properties of consistency groups
A consistency group has the following properties, each described in the sections below:
Cache mode
Visibility
Storage-at-clusters
Local-read-override
Detach-rule
Auto-resume-at-loser
Virtual-volumes
Recoverpoint-enabled
Active cluster
Default closeout-time
Maximum-queue-depth
IMPORTANT
When RecoverPoint is deployed, it may take up to 2 minutes for the RecoverPoint cluster to
take note of changes to a VPLEX consistency group. Wait for 2 minutes after making the
following changes to a VPLEX consistency group before creating or changing a
RecoverPoint consistency group:
- Add/remove virtual volumes to/from a VPLEX consistency group
- Enable/disable the recoverpoint-enabled property of a VPLEX consistency group
- Change the detach rule of a VPLEX consistency group
Cache mode
Cache mode describes how data is written to storage. Cache mode can be either
synchronous or asynchronous:
Synchronous cache mode - Supported on VPLEX Local and VPLEX Metro configurations
where clusters are separated by up to 5 ms of latency. In synchronous cache mode,
writes to the back-end storage volumes are not acknowledged to the host until the
back-end storage volumes acknowledge the write.
Writes to the virtual volumes in a synchronous consistency group are written to disk
only if all prior acknowledged writes to all volumes in the consistency group are also
present on the disk.
Changing cache mode for a non-empty consistency group that is receiving host I/O
requests may temporarily worsen I/O performance.
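For example, an illustrative use of the set command to change the cache mode of a hypothetical consistency group named TestCG:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set cache-mode asynchronous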
Visibility
Visibility controls which clusters know about a consistency group.
Note: Visibility for consistency groups differs from the visibility property for devices.
Devices can have visibility set to local (visible only to the local cluster) or global (visible to
both clusters). All distributed devices have global visibility.
By default, a consistency group's visibility property is set only to the cluster where the
consistency group was created. If a consistency group is created on cluster-2, it is initially
visible only on cluster-2.
The visibility of the volumes within the consistency group must match the visibility of the
consistency group.
If the visibility of a volume in a consistency group is set to local, the visibility of the
consistency group cannot be set to include other clusters. For example, if volume
LocalVolume with its visibility property set to local is added to consistency group
TestCG, the visibility of TestCG cannot be modified to include other clusters.
In general, visibility is set to one of three options:
Configure the consistency group to contain only volumes local to the local cluster.
Configure the consistency group to contain only volumes that have storage at one
cluster, but have global visibility.
Configure the consistency group to contain only volumes that are distributed with legs
at both clusters.
When a consistency group's visibility is set to a cluster, the consistency group appears
below the /clusters/cluster-n/consistency-groups context for that cluster.
Note: The context for a specified consistency group appears in a cluster's consistency
group CLI context only if the Visibility property of the consistency group includes that
cluster.
Under normal operations, the visibility property can be modified to expand from one
cluster to both clusters.
Use the set command in /clusters/cluster/consistency-groups/consistency-group context
to modify the visibility property. If consistency group TestCG is visible only at cluster-1,
use the set command to make it visible to cluster-1 and cluster-2:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set visibility cluster-1,cluster-2
If a consistency group contains virtual volumes with a given visibility (for example, a
member volume's visibility is local), the visibility property for the consistency group
cannot be changed to conflict with the visibility property of the member virtual volume.
For example, consistency group "TestCG" is visible only at cluster-1, and contains a volume
"V" whose device is at cluster-1 and has local visibility. Both the following commands will
fail, since the volume V is not visible at cluster-2.
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set visibility cluster-1,cluster-2
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set visibility cluster-2
Storage-at-clusters
Storage-at-clusters tells VPLEX at which cluster the physical storage associated with a
consistency group is located.
The storage-at-clusters property of a consistency group must be a non-empty subset of the
consistency group's visibility property.
If visibility is set to one cluster, then storage-at-clusters must be exactly the same as
visibility.
If visibility is set to two clusters (1 and 2), then storage-at-clusters can be one of:
cluster-1
cluster-2
cluster-1,cluster-2
Note: Best practice is to set the storage-at-clusters property when the consistency group is
empty.
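For example, an illustrative use of the set command (the consistency group name is hypothetical) to place the storage for TestCG at both clusters:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set storage-at-clusters cluster-1,cluster-2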
Local-read-override
The local-read-override property determines whether the volumes in this consistency
group use the local read override optimization.
When a director receives a read request, it first checks the distributed cache to see if that
page is dirty (written to some director's cache, but not yet written to disk). If the page is
dirty in any director's cache, the page is sent from that director to the reading director. The
two directors can be at the same cluster or in different clusters.
When a read request for a page is received by a director, and none of the directors in the
same cluster have that page in cache, the director has two ways to get the page: it can
query the directors at the other cluster to ask whether they have the page in their caches,
or it can read the page from the underlying back-end storage. If no peer director has the
page in its cache, the page is read from the underlying back-end storage.
The local-read-override property can be set to:
true (default) - A director reading from a volume in this consistency group prefers to read
from back-end storage over getting the data from a peer director's cache.
false - A director reading from a volume in the consistency group prefers to read from a
peer director's cache.
Local-read-override should be set to true if the inter-cluster latency is large or back-end
storage is fast and has a large cache of its own that enables it to respond faster than the
VPLEX director.
Local-read-override should be set to false only if it is faster to retrieve pages from the
remote cluster's cache than from the local cluster's storage. For example, if the clusters
are located close to one another and the storage on the local cluster is very slow.
Use the set command in
/clusters/cluster/consistency-groups/consistency-group/advanced context to modify the
local-read-override property. For example, to disable local-read-override:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> set local-read-override false
Detach-rule
Detach rules are a consistency group's policy for automatically picking a winning cluster
when there is an inter-cluster link outage.
There are three consistency group detach rules: no-automatic-winner, active-cluster-wins,
and winner cluster-name delay seconds.
If a consistency group has a detach-rule configured, the rule applies to all volumes in the
consistency group, and overrides any rule-sets applied to individual volumes.
This property is not relevant for local consistency groups.
By default, no specific detach rule is configured for a consistency group. Instead, the
no-automatic-winner detach rule is set as default for a consistency group with visibility to
both clusters.
Best practice is to apply a detach rule to a consistency group that meets the needs of your
application in terms of I/O continuance and data loss tolerance.
Use the consistency-group set-detach-rule commands to configure the detach-rule for a
consistency group:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG>
set-detach-rule no-automatic-winner
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG>
set-detach-rule active-cluster-wins
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG>
set-detach-rule winner --cluster cluster-1 --delay 5s
Auto-resume-at-loser
Determines whether the loser automatically resumes I/O when the inter-cluster link is
repaired after a failure.
When the link is restored, the losing cluster finds out that it's the loser, and that the data
on the winning cluster is now different. The loser must determine whether to suddenly
change to the winner's data, or to keep suspending I/O.
By default, this property is set to false (auto-resume is disabled).
Usually, this property is set to false to give the administrator time to halt and restart the
application. Otherwise, dirty data in the host's cache may be inconsistent with the image
on disk to which the winning cluster has been writing. If the host flushes dirty pages out of
sequence, the data image may be corrupted.
Set this property to true for consistency groups used in a cluster cross-connect. In this
case, there is no risk of data loss since the winner is always connected to the host,
avoiding out of sequence delivery.
true - I/O automatically resumes on the losing cluster after the inter-cluster link has been
restored.
Set auto-resume-at-loser to true only when the losing cluster is servicing a read-only
application such as servicing web pages.
false (default) - I/O remains suspended on the losing cluster after the inter-cluster link has
been restored. I/O must be manually resumed.
Set auto-resume-at-loser to false for all applications that cannot tolerate a sudden change
in data.
Setting this property to true may cause a spontaneous change of the data view presented
to applications at the losing cluster when the inter-cluster link is restored. If the
application has not failed, it may not be able to tolerate the sudden change in the data
view and this can cause data corruption. Set the property to false except for applications
that can tolerate this issue or for cross-connected hosts.
Use the set command in the advanced context to configure the auto-resume property for
a consistency group:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> set auto-resume-at-loser true
Virtual-volumes
Administrators can add virtual volumes to, and remove virtual volumes from, a consistency
group. When a virtual volume is added to a consistency group, any of its properties (such as
detach rules or auto-resume) that conflict with those of the consistency group are
automatically changed to match those of the consistency group.
Note: Virtual volumes with different properties are allowed to join a consistency group, but
inherit the properties of the consistency group.
Use the consistency-group list-eligible-virtual-volumes command to display virtual
volumes that are eligible to be added to a consistency group.
Use the consistency-group add-virtual-volumes command to add one or more virtual
volumes to a consistency group.
Use the ll /clusters/cluster/consistency-groups/ consistency-group command to display
the virtual volumes in the specified consistency group.
Use the consistency-group remove-virtual-volumes command to remove one or more
virtual volumes from a consistency group.
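For example, an illustrative invocation of list-eligible-virtual-volumes from the consistency group's context (the volume names shown are hypothetical, and the exact invocation and output format may differ):
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> list-eligible-virtual-volumes
[TestDDevice-3_vol, TestDDevice-4_vol, TestDDevice-5_vol]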
Recoverpoint-enabled
Starting in Release 5.1, VPLEX includes a RecoverPoint splitter. The splitter splits
application writes so that the writes are sent to their normally designated storage volumes
and a RecoverPoint Appliance (RPA) simultaneously.
To configure a consistency group for use with RecoverPoint:
Consistency groups with the Visibility property set to both clusters must also have
their Storage-at-clusters set to both clusters in order to set the recoverpoint-enabled
property to true.
All production and replica volumes associated with RecoverPoint must be in a VPLEX
consistency group with the recoverpoint-enabled property set to true.
Configure two consistency groups for each set of virtual volumes to be protected by
RecoverPoint: one for the production volumes and one for the replica volumes.
RecoverPoint journal volumes are not required to be in a consistency group with this
attribute enabled.
In addition to setting this property to true, the VPLEX consistency group must be aligned
with the RecoverPoint consistency group.
Use the set command in /clusters/cluster/consistency-groups/consistency-group/
context to configure the recoverpoint-enabled property:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set recoverpoint-enabled true
Active cluster
The active cluster property is not configurable by users. Instead, the configuration changes
dynamically between active/passive and active/active depending on the write activity at
each cluster.
Open - Host writes to distributed volumes are acknowledged back to the host after the
data is protected in the cache of another director in the local cluster. Writes from each
application are collected into deltas.
Closed - Deltas are closed when their timer expires or they are full.
Data that has been protected to another director in the local cluster (but not exchanged
with the remote cluster) is known as dirty data.
A configuration is active/passive if hosts at only one cluster were writing at the time of the
failure. A configuration is active/active if hosts at both clusters were writing at the time of
the failure.
A cluster is marked passive when all dirty data contributed by the cluster has been
committed (written to the back end at both clusters). The cluster remains passive as long
as no further data is contributed by the time the next delta closes. Specifically:
The currently open delta closes, either due to a timer (default-closeout-time) or due to
the delta becoming full at either cluster.
The next open (empty at one cluster because the host is not writing) delta is closed.
When closure of the next delta completes without any new data, that cluster is marked
passive.
When a host writes to a passive cluster (creates dirty data), the cluster is marked as
active.
Default closeout-time
Sets the default for the maximum time a delta remains open to accept new writes.
Closeout-time can be set as either a positive integer or zero.
Default: 30 seconds.
zero (0) - There is no time limit on closing the open delta. The delta is closed when it is full
and cannot accept more data.
A larger value for the default closeout-time unnecessarily exposes dirty data in the open
delta if the exchange delta is idle.
The ideal default closeout-time is equal to the time it takes to exchange a full delta set.
The default closeout-time and maximum-queue-depth properties work together to allow
administrators to fine-tune the maximum possible data loss in the event of an inter-cluster
link outage.
Maximum RPO for the consistency group can be calculated as follows:
maximum-queue-depth x default closeout-time = RPO
Increasing either the default closeout-time or maximum-queue-depth property increases
the maximum RPO in the event of an inter-cluster link outage or cluster failure.
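For example, using the default values described in this chapter (a maximum-queue-depth of 6 and a default closeout-time of 30 seconds), the maximum RPO would be approximately:
6 x 30 seconds = 180 seconds (3 minutes)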
Use the set command in
/clusters/cluster/consistency-groups/consistency-group/advanced context to configure
the closeout-time property for an asynchronous consistency group:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> set closeout-time 12
Maximum-queue-depth
For an asynchronous consistency group, this property configures the maximum possible
depth of the delta processing queues.
Each consistency group maintains its own queue of deltas in various states:
open - The delta is accepting new writes
closed - The delta is not accepting writes.
exchanging - The delta's contents are being synchronized between the clusters.
exchanged - The delta's contents are the same at all clusters.
committing - The delta's contents are being written out to the back-end storage.
committed - The write of the delta's contents to the back-end storage is complete.
All deltas pass through these states, forming a pipeline in the order in which they were
received. For applications that generate occasional brief bursts of writes, the rate of
incoming writes may (for a short time) be faster than the rate at which deltas are
exchanged and committed.
Setting the maximum queue depth allows multiple closed deltas to wait to be exchanged
and committed.
The default closeout-time and maximum-queue-depth properties work together to allow
administrators to fine-tune the maximum possible data loss in the event of an inter-cluster
link outage.
Decrease the maximum-queue-depth to decrease the maximum RPO in the event of an
inter-cluster link outage or cluster failure.
Increase the maximum-queue-depth to increase the ability of the system to handle bursts
of traffic.
Increasing the maximum-queue-depth also increases the amount of data that must be rolled
back in the case of a cluster failure or a link outage (if I/O is resumed when the active cluster
becomes the loser).
Default: 6.
Range: 6 - 64. The maximum value is platform-specific.
Use the set command in
/clusters/cluster/consistency-groups/consistency-group/advanced context to configure
the maximum-queue-depth property for an asynchronous consistency group:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG/advanced> set maximum-queue-depth 8
Before creating a consistency group, consider the following:
Are the clusters more than 100 km from one another, or is latency greater than 5 ms?
Will the consistency group contain volumes on distributed devices? If yes, the cache
mode must be set to asynchronous.
At which cluster(s) is the underlying storage of the virtual volumes located? If volumes
are located at both clusters, set the storage-at-clusters property to
cluster-1,cluster-2.
Will I/O to the volumes in the consistency group be split by a RecoverPoint Appliance?
If yes, set the recoverpoint-enabled property to true.
Some properties of virtual volumes and consistency groups limit which volumes can be
added to a consistency group, or prevent a property of the consistency group from being
modified.
For example, suppose a consistency group's visibility property is set to cluster-1, and virtual
volumes local to cluster-1 are added. The visibility property of the consistency group cannot
be changed to either cluster-2 or cluster-1,cluster-2, since the volumes are not visible at
cluster-2.
/clusters/cluster-1/consistency-groups:
  TestCG  local_test  test10  test11  test12  test13  test14  test15  test16  test5
  test6  test7  test8  test9  vs_RAM_c1wins  vs_RAM_c2wins  vs_oban005  vs_sun190

/clusters/cluster-2/consistency-groups:
  TestCG  local_test  test10  test11  test12  test13  test14  test15  test16  test5
  test6  test7  test8  test9  vs_RAM_c1wins  vs_RAM_c2wins  vs_oban005  vs_sun190
Attributes:
  Name                  Value
  --------------------  --------------------------------------------
  active-clusters       []
  cache-mode            synchronous
  detach-rule
  operational-status    [(cluster-1,{ summary:: ok, details:: [] })]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   []
  virtual-volumes       []
  visibility            [cluster-1]

Contexts:
  Name          Description
  ------------  -----------
  advanced
  recoverpoint
The CLI context of the consistency group appears only at the cluster where the
consistency group has visibility. If visibility is set from cluster-1 to include only cluster-2,
the CLI context for the consistency group disappears at cluster-1 and is visible only from
cluster-2.
To set the consistency group's visibility property to both clusters:
VPlexcli:/clusters/cluster-1/consistency-groups> set TestCG::visibility cluster-1,cluster-2
Note: When you set the cache-mode to asynchronous, the CLI automatically applies the
active-cluster-wins rule.
When you set the cache mode to synchronous, the CLI automatically applies a winner rule
where the winner is the cluster with the lowest cluster ID (typically cluster-1) and the
time-out is five seconds.
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       []
  cache-mode            asynchronous
  detach-rule           active-cluster-wins
  operational-status    [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{ summary:: ok, details:: [] })]
  passive-clusters      [cluster-1, cluster-2]
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       []
  visibility            [cluster-1, cluster-2]

Contexts:
  Name          Description
  ------------  -----------
  advanced
  recoverpoint
Refer to Table 14, Display consistency group field descriptions, for descriptions of the
fields in the display.
Only volumes with storage at both clusters (distributed volumes) can be added to
asynchronous consistency groups.
Only local volumes can be added to synchronous consistency groups with visibility and
storage-at-cluster set to the local cluster.
Remote volumes can be added to synchronous consistency groups with visibility set to
both clusters and storage-at-cluster set to one cluster.
To add virtual volumes to an existing consistency group, do the following:
1. Navigate to the target consistency group's context:
VPlexcli:/> cd clusters/cluster-1/consistency-groups/TestCG
2. Use the add-virtual-volumes command to add virtual volumes to the consistency group.
Note: The full path is not required if the volume name is unique in the VPLEX.
To add multiple volumes using a single command, separate virtual volumes by
commas:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> add-virtual-volumes
TestDDevice-1_vol,TestDDevice-2_vol
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       []
  cache-mode            asynchronous
  detach-rule           active-cluster-wins
  operational-status    [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{ summary:: ok, details:: [] })]
  passive-clusters      [cluster-1, cluster-2]
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [TestDDevice-1_vol, TestDDevice-2_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  Name          Description
  ------------  -----------
  advanced
  recoverpoint
Value
---------------------------------------------------------[]
asynchronous
active-cluster-wins
[(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{
summary:: ok, details:: [] })]
passive-clusters
[cluster-1, cluster-2]
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes
[TestDDevice-1_vol, TestDDevice-2_vol,
TestDDevice-3_vol,TestDDevice-4_vol,
TestDDevice-5_vol]
visibility
[cluster-1, cluster-2]
Contexts:
Name
-----------advanced
recoverpoint
Description
-----------
Before using consistency-group remove-virtual-volumes command on an asynchronous
consistency group, ensure that no host applications are using the volumes you are
removing.
The syntax for the command is:
consistency-group remove-virtual-volumes
[-v|--virtual-volumes] virtual-volume,virtual-volume,...
[-g|--consistency-group] context-path
To remove multiple virtual volumes with a single command, separate the volumes using
commas:
VPlexcli:/> consistency-group remove-virtual-volumes
/clusters/cluster-1/virtual-volumes/TestDDevice-2_vol,
/clusters/cluster-1/virtual-volumes/TestDDevice-3_vol --consistency-group
/clusters/cluster-1/consistency-groups/TestCG
Remove two virtual volumes from the target consistency group context:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> remove-virtual-volumes
TestDDevice-2_vol, TestDDevice-3_vol
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       []
  cache-mode            asynchronous
  detach-rule           active-cluster-wins
  operational-status    [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{ summary:: ok, details:: [] })]
  passive-clusters      [cluster-1, cluster-2]
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [TestDDevice-1_vol, TestDDevice-4_vol, TestDDevice-5_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  Name          Description
  ------------  -----------
  advanced
  recoverpoint
Use the set command to modify the following properties of a consistency group:
All consistency groups:
Cache mode
Visibility
Storage-at-clusters
Local-read-override
Auto-resume-at-loser
Default closeout-time
Maximum-queue-depth
Recoverpoint-enabled
[-d|--default] - Sets the specified attribute(s) to the default value(s), if any exist. If no
attributes are specified, displays the default values for attributes in the current/specified
given context.
[-f|--force] - Force the value to be set, bypassing any confirmations or guards.
[-a|--attributes] selector pattern - Attribute selector pattern.
[-v|--value] new value - The new value to assign to the specified attribute(s).
To display which attributes are modifiable (writable) using the set command and their
valid inputs:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG> set
attribute             input-description
--------------------  ------------------------------------------------------------------
active-clusters       Read-only.
cache-mode            Takes one of 'asynchronous', 'synchronous' (case sensitive).
detach-rule           Read-only.
name                  Takes a unique, non-empty and non-null name. A valid name starts
                      with a letter or '_' and contains only letters, numbers, '-' and '_'.
operational-status    Read-only.
passive-clusters      Read-only.
recoverpoint-enabled  false
storage-at-clusters   Takes a list with each element being a 'cluster' context or a
                      context pattern.
virtual-volumes       Read-only.
visibility            Takes a list with each element being a 'cluster' context or a
                      context pattern.
The following table shows the valid combinations of visibility, storage-at-clusters,
cache-mode, and detach-rule for a consistency group:

visibility               storage-at-clusters      cache-mode    detach-rule
-----------------------  -----------------------  ------------  -------------------
cluster-1                cluster-1                synchronous   N/A
cluster-1 and cluster-2  cluster-1 and cluster-2  synchronous   no-automatic-winner
                                                                winner cluster-1
                                                                winner cluster-2
cluster-1 and cluster-2  cluster-1                synchronous   no-automatic-winner
                                                                winner cluster-1
cluster-1 and cluster-2  cluster-1 and cluster-2  asynchronous  no-automatic-winner
                                                                active-cluster-wins
To apply a detach rule that will determine the behavior of all volumes in a consistency
group:
1. Use the ll command to display the current detach rule (if any) applied to the
consistency group:
VPlexcli:/clusters/cluster-1/consistency-groups/TestCG2> ll
Attributes:
  Name                  Value
  --------------------  ----------------------
  active-clusters       []
  cache-mode            synchronous
  detach-rule
  .
  .
  .
2. Use the consistency-group set-detach-rule winner command to specify which cluster is the
winner, and the number of seconds VPLEX waits after a link outage before detaching the
winning cluster.
In the following example, the command is used in the root context:
VPlexcli:/> consistency-group set-detach-rule winner --cluster cluster-1 --delay 5s
--consistency-groups TestCG
Consistency groups with the visibility property set to both clusters must also have
their storage-at-clusters set to both clusters
1. Use the ll command to display the consistency group:

Attributes:
  Name                  Value
  --------------------  ----------------------
  active-clusters       []
  cache-mode            synchronous
  detach-rule
  operational-status    [ok]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       []
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
2. Use the consistency-group destroy command to delete the consistency group. The
syntax for the command is:
consistency-group destroy
[-g|--consistency-group] consistency-group, consistency-group,...
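For example, an illustrative invocation from the root context to delete the consistency group TestCG (the group name and path are hypothetical):
VPlexcli:/> consistency-group destroy /clusters/cluster-1/consistency-groups/TestCG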
/clusters/cluster-1/consistency-groups:
  TestCG  local_test  test10  test11  test12  test13  test14  test15  test16  test5
  test6  test7  test8  test9  vs_RAM_c1wins  vs_RAM_c2wins  vs_oban005  vs_sun190

/clusters/cluster-2/consistency-groups:
  TestCG  local_test  test10  test11  test12  test13  test14  test15  test16  test5
  test6  test7  test8  test9  vs_RAM_c1wins  vs_RAM_c2wins  vs_oban005  vs_sun190
In the following example, the command displays the operational status of a consistency
group on a healthy VPLEX:
VPlexcli:/> ls /clusters/cluster-1/consistency-groups/cg1
/clusters/cluster-1/consistency-groups/cg1:
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       [cluster-1, cluster-2]
  cache-mode            asynchronous
  detach-rule           no-automatic-winner
  operational-status    [(cluster-1,{ summary:: ok, details:: [] }), (cluster-2,{ summary:: ok, details:: [] })]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol, dd2_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  Name          Description
  ------------  -----------
  advanced
  recoverpoint
vplex-rp-cg-alignment-indications  ['device_storage_vol_1_7_vol' is not in a RecoverPoint
                                   consistency ... Source)]
vplex-rp-cg-alignment-status       error
Status details contain cluster-departure, indicating that the clusters can no longer
communicate with one another.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       [cluster-1, cluster-2]
  cache-mode            asynchronous
  detach-rule           no-automatic-winner
  operational-status    [(cluster-1,{ summary:: suspended, details:: [cluster-departure] }),
                         (cluster-2,{ summary:: suspended, details:: [cluster-departure] })]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol, dd2_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       [cluster-1, cluster-2]
  cache-mode            asynchronous
  detach-rule           no-automatic-winner
  operational-status    [(cluster-1,{ summary:: ok, details:: [] }),
                         (cluster-2,{ summary:: suspended, details:: [requires-resume-at-loser] })]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol, dd2_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resume-at-loser -c cluster-2
This may change the view of data presented to applications at cluster cluster-2. You should
first stop applications at that cluster. Continue? (Yes/No) Yes
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
Name
------------------active-clusters
cache-mode
detach-rule
operational-status
Value
---------------------------------------------------------[cluster-1, cluster-2]
asynchronous
no-automatic-winner
[(cluster-1,{ summary:: ok, details:: [] }),
(cluster-2,{ summary:: ok, details:: [] })]
passive-clusters
[]
recoverpoint-enabled false
storage-at-clusters [cluster-1, cluster-2]
virtual-volumes
[dd1_vol, dd2_vol]
visibility
[cluster-1, cluster-2]
Contexts:
advanced recoverpoint
Table 14 Display consistency group field descriptions

Standard properties
cache mode
detach-rule
Policy for automatically picking a winning cluster when there is an inter-cluster link
outage. A winning cluster is intended to resume I/O operations when the link fails.
For a synchronous consistency group the winning cluster will resume I/O operations
shortly after the link fails.
For an asynchronous consistency group, the winning cluster prepares to resume I/O
operations, but may need to roll back the view of data in the process. If such a roll-back
is required, I/O will remain suspended until the administrator manually intervenes with
the resume-after-rollback command.
Modifiable using the following commands:
consistency-group set-detach-rule active-cluster-wins - If one cluster was active and
one was passive at the time of the link outage, the group selects the active cluster as
the winner.
consistency-group set-detach-rule no-automatic-winner - The consistency group will
not select a winning cluster.
consistency-group set-detach-rule winner - The cluster specified by cluster-name will
be declared the winner if an inter-cluster link outage lasts more than the number of
seconds specified by delay.
recoverpoint-enabled
storage-at-clusters
At which cluster the physical storage associated with a consistency group is located.
Modifiable using the set command. If cluster names are cluster-1 and cluster-2
valid values are:
cluster-1 - Storage associated with this consistency group is located only at cluster-1.
cluster-2 - Storage associated with this consistency group is located only at cluster-2.
cluster-1,cluster-2 - Storage associated with this consistency group is located at both
cluster-1 and cluster-2.
When modified, the new value cannot be incompatible with the volumes that are
already in the consistency group. Change storage-at-clusters only when the
consistency group has no member volumes.
visibility
virtual-volume
Lists the virtual volumes that are members of the consistency group.
Modifiable using the following commands:
consistency-group add-virtual-volumes - Add one or more virtual volumes to a
consistency group.
consistency-group remove-virtual-volumes - Remove one or more virtual volumes from
a consistency group.
Advanced properties
auto-resume-at-loser
Determines whether I/O automatically resumes at the detached cluster for the volumes
in a consistency group when the cluster regains connectivity with its peer cluster.
Relevant only for multi-cluster consistency groups that contain distributed volumes.
Modifiable using the set command. Set this property to true to allow the volumes to
resume I/O without user intervention (using the resume-at-loser command).
true - I/O automatically resumes on the losing cluster after the inter-cluster link has
been restored.
false (default) - I/O must be resumed manually after the inter-cluster link has been
restored.
Note: Leave this property set to false to give administrators time to restart the
application. Otherwise, dirty data in the host's cache is not consistent with the image
on disk to which the winning cluster has been actively writing. Setting this property to
true can cause a spontaneous change of the view of data presented to applications at
the losing cluster. Most applications cannot tolerate this data change. If the host
flushes those dirty pages out of sequence, the data image may be corrupted.
default closeout-time
local-read-override
maximum-queue-depth
Recoverpoint properties
recoverpoint-information
vplex-rp-cg-alignment-indications
vplex-rp-cg-alignment-status
Display-only properties
active-clusters
List of clusters which have recently written to a member volume in the consistency
group.
Applicable only to asynchronous consistency groups.
For synchronous consistency groups, this property is always empty ([ ]).
current-rollback-data
current-queue-depth
delta-size
max-possible-rollback-data
An estimate of the maximum amount of data that can be lost at a single cluster within a
volume set if there are multiple director failures within the cluster, a total cluster
failure, or a total system failure. Roughly the product of current-queue-depth,
delta-size, and the largest number of directors at any one cluster.
operational status
Current status for this consistency group with respect to each cluster on which it is
visible.
ok - I/O can be serviced on the volumes in the consistency group.
suspended - I/O is suspended for the volumes in the consistency group. The reasons
are described in the operational status: details.
degraded - I/O is continuing, but there are other problems described in operational
status: details:
unknown - The status is unknown, likely because of lost management connectivity.
passive-clusters
potential-winner
virtual-volumes
List of the virtual volumes that are members of this consistency group.
write-pacing
Resolve conflicting detach
When the inter-cluster link is restored, the clusters learn that I/O has proceeded
independently. I/O continues at both clusters until the administrator picks a winning
cluster whose data image will be used as the source to resynchronize the data images.
In the following example, I/O has resumed at both clusters during an inter-cluster link
outage. When the inter-cluster link is restored, the two clusters will come back into contact
and learn that they have each detached the other and carried on I/O.
1. Use the ls command to display the consistency group's operational status at both
clusters.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
  Name                  Value
  --------------------  -----------------------------------------------
  active-clusters       [cluster-1, cluster-2]
  cache-mode            asynchronous
  detach-rule           no-automatic-winner
  operational-status    [(cluster-1,{ summary:: ok, details:: [requires-resolve-conflicting-detach] }),
                         (cluster-2,{ summary:: ok, details:: [requires-resolve-conflicting-detach] })]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol, dd2_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
2. Use the consistency-group resolve-conflicting-detach command to pick the winning
cluster. The arguments are:
[-c|--cluster] cluster - * The cluster whose data image will be used as the source to
resynchronize the data images on both clusters.
[-g|--consistency-group] consistency-group - * The consistency group on which to
resolve the conflicting detach.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resolve-conflicting-detach -c cluster-1
This will cause I/O to suspend at clusters in conflict with cluster cluster-1, allowing you to
stop applications at those clusters. Continue? (Yes/No) Yes
Cluster-2's modifications to data on volumes in the consistency group since the link
outage started are discarded.
Cluster-2's data image is then synchronized with the image at cluster-1.
I/O will suspend at cluster-2 if the auto-resume policy is false.
3. Use the ls command to verify the change in operational status:
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       [cluster-1, cluster-2]
  cache-mode            asynchronous
  detach-rule           no-automatic-winner
  operational-status    [(cluster-1,{ summary:: ok, details:: [] }),
                         (cluster-2,{ summary:: suspended, details:: [requires-resume-at-loser] })]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol, dd2_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
Resume I/O after rollback
If the losing cluster is active at the onset of a cluster or inter-cluster-link outage, the
distributed cache at the losing cluster contains dirty data.
Without that data, the winning cluster's data image is inconsistent. Resuming I/O at the
winner requires rolling back the winner's data image to the last point where the clusters
agreed.
This can cause a sudden change in the data image.
Many applications cannot tolerate sudden data changes, so the roll-back and resumption
of I/O requires manual intervention.
The delay gives the administrator the chance to halt applications before changing the data
image. The data image is rolled back as soon as a winner is chosen (either manually or
automatically using a detach rule).
The resume-after-rollback command acknowledges that the application is ready for
recovery (this may involve application failure and/or rebooting the host).
Note: It is recommended to reboot the hosts of affected applications.
1. Use the ls command to display the consistency group on the winning cluster during an
inter-cluster link outage.
Because the consistency group is asynchronous, I/O remains suspended.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
  Name                  Value
  --------------------  -------------------------------------------
  active-clusters       []
  cache-mode            asynchronous
  detach-rule
  operational-status    [suspended, requires-resume-after-rollback]
  passive-clusters      [cluster-1, cluster-2]
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
Attributes:
  Name                  Value
  --------------------  -----------------------
  active-clusters       [cluster-1]
  cache-mode            asynchronous
  detach-rule
  operational-status    [ok]
  passive-clusters      [cluster-2]
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
When the inter-cluster link is restored, use the consistency-group resume-at-loser command
to resynchronize the data image on the losing cluster with the data image on the winning
cluster. The administrator may then safely restart the applications at the losing cluster.
To restart I/O on the losing cluster:
1. Use the ls command to display the operational status of the target consistency group.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> ls
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       [cluster-1, cluster-2]
  cache-mode            asynchronous
  detach-rule           no-automatic-winner
  operational-status    [(cluster-1,{ summary:: ok, details:: [] }),
                         (cluster-2,{ summary:: suspended, details:: [requires-resume-at-loser] })]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol, dd2_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
2. Use the consistency-group resume-at-loser command to restart I/O on the losing cluster.
The syntax for the command is:
consistency-group resume-at-loser
[-c|--cluster] cluster
[-g|--consistency-group] consistency-group
[-f|--force]
[-c|--cluster] cluster - The cluster on which to roll back and resume I/O.
[-g|--consistency-group] consistency-group - The consistency group on which to
resynchronize and resume I/O.
VPlexcli:/clusters/cluster-1/consistency-groups/cg1> resume-at-loser -c cluster-2
This may change the view of data presented to applications at cluster cluster-2. You should
first stop applications at that cluster. Continue? (Yes/No) Yes
Attributes:
  Name                  Value
  --------------------  ----------------------------------------------------------
  active-clusters       [cluster-1, cluster-2]
  cache-mode            asynchronous
  detach-rule           no-automatic-winner
  operational-status    [(cluster-1,{ summary:: ok, details:: [] }),
                         (cluster-2,{ summary:: ok, details:: [] })]
  passive-clusters      []
  recoverpoint-enabled  false
  storage-at-clusters   [cluster-1, cluster-2]
  virtual-volumes       [dd1_vol, dd2_vol]
  visibility            [cluster-1, cluster-2]

Contexts:
  advanced  recoverpoint
CHAPTER 10
VPLEX Witness
This chapter describes VPLEX Witness, including:
Introduction
Failures in Metro systems
Failures in Geo systems
Install, enable, and manage VPLEX Witness
VPLEX Witness operation
Introduction
Starting in GeoSynchrony 5.0, VPLEX Witness helps multi-cluster VPLEX configurations
automate the response to cluster failures and inter-cluster link outages.
VPLEX Witness is an optional component installed as a VM on a customer host. The
customer host must be deployed in a separate failure domain from either VPLEX cluster to
eliminate the possibility of a single fault affecting both a cluster and VPLEX Witness.
VPLEX Witness connects to both VPLEX clusters over the management IP network:
VPLEX Witness observes the state of the clusters, and thus can distinguish between an
outage of the inter-cluster link and a cluster failure. VPLEX Witness uses this information
to guide the clusters to either resume or suspend I/O.
Note: VPLEX Witness works in conjunction with consistency groups (see Chapter 9,
Consistency Groups). VPLEX Witness guidance does not apply to local volumes and
distributed volumes that are not members of a consistency group.
VPLEX Witness capabilities vary depending on whether the VPLEX is a Metro (synchronous
consistency groups) or Geo (asynchronous consistency groups).
In Metro systems, VPLEX Witness provides seamless zero RTO fail-over for storage
volumes in synchronous consistency groups. See Failures in Metro Systems: With
VPLEX Witness on page 175.
In Geo systems, VPLEX Witness does not automate fail-over for asynchronous
consistency groups and can only be used for diagnostic purposes.
Related information: Consistency groups - Chapter 9, Consistency Groups
cluster-1-detaches
Cluster-1 is the preferred cluster. During a cluster failure or inter-cluster link outage,
cluster-1 continues I/O and cluster-2 suspends I/O.
cluster-2-detaches
Cluster-2 is the preferred cluster. During a cluster failure or inter-cluster link outage,
cluster-2 continues I/O and cluster-1 suspends I/O.
active-cluster-wins
This detach rule applies only to asynchronous consistency groups (VPLEX Geo
configurations).
If one cluster was active and one was passive at the time of the failure, the active
cluster is preferred. See When a cluster is active vs. passive on page 143.
The consistency group's Auto-resume-at-loser property must be set to true for
the active-cluster-wins rule to work.
winner cluster-name delay seconds
This detach rule applies only to synchronous consistency groups (VPLEX Metro
configurations).
The cluster specified by cluster-name is declared the preferred cluster if a failure
lasts more than the number of seconds specified by seconds.
no-automatic-winner
Consistency groups with this detach rule are not guided by VPLEX Witness.
The consistency group does not select the preferred cluster. The detach rules of the
member devices determine the preferred cluster for that device.
Manual intervention may be required to restart I/O on the suspended cluster.
no-automatic-winner
VPLEX Witness does not guide consistency groups with the no-automatic-winner detach
rule. The remainder of this discussion applies only to synchronous consistency groups
with the winner cluster-name delay seconds detach rule.
Synchronous consistency groups use write-through caching. Host writes to a distributed
volume are acknowledged back to the host only after the data is written to the back-end
storage at both VPLEX clusters.
IMPORTANT
VPLEX Witness does not automate fail-over for distributed volumes outside of consistency
groups.
Three common types of failures illustrate how VPLEX responds without VPLEX Witness:
[Figure: Failure scenarios 1, 2, and 3, each involving cluster-1 and cluster-2]
Scenario 1 - Inter-cluster link outage. Both of the dual links between the clusters have an
outage. Also known as a cluster partition.
The preferred cluster (cluster-1) continues I/O
Cluster-2 suspends I/O
The existing detach rules are sufficient to prevent data unavailability. Writes at
cluster-1 are logged. When the inter-cluster link is restored, a log rebuild copies only
the logged changes to resynchronize the clusters.
Scenario 2 - Cluster-2 fails.
Cluster-1 (the preferred cluster) continues I/O.
The existing detach rules are sufficient to prevent data unavailability. Volumes are
accessible with no disruptions at cluster-1.
Writes at cluster-1 are logged. When cluster-2 is restored, and rejoins cluster-1, a log
rebuild copies only the logged changes to resynchronize cluster-2.
Scenario 3 - Cluster-1 (the preferred cluster) fails.
Cluster-2 suspends I/O (data unavailability)
VPLEX cannot automatically recover from this failure and suspends I/O at the only
operating cluster.
Recovery may require manual intervention to re-enable I/O on cluster-2. In this case,
however, it is crucial to note that if I/O is enabled on cluster-2 while cluster-1 is active
and processing I/O, data corruption will ensue once the two clusters are synchronized.
VPLEX Witness addresses Scenario 3, where the preferred cluster fails, and the
un-preferred cluster cannot continue I/O due to the configured detach rule-set.
Instead, VPLEX Witness guides the surviving cluster to continue I/O, despite its
designation as the non-preferred cluster. I/O continues to all distributed volumes in all
synchronous consistency groups that do not have the no-automatic-winner detach rule.
Host applications continue I/O on the surviving cluster without any manual intervention.
When the preferred cluster fails in a Metro configuration, VPLEX Witness provides
seamless zero RTO fail-over to the surviving cluster.
Loss of Contact with VPLEX Witness - The clusters are still in contact with each other, but
one of the clusters (either preferred or non-preferred) has lost contact with VPLEX
Witness. This can occur when there is a management connectivity failure between the
VPLEX Witness host and the Management Server in the corresponding cluster. In this
scenario:
The cluster that lost connectivity with VPLEX Witness sends a call-home notification.
Note that this scenario, depicted in Figure 21, is equivalent to a dual connection failure
between each cluster and VPLEX Witness. This scenario may occur either as a result of
dual connection failure or the physical failure of the host on which VPLEX Witness is
deployed.
[Figure 21: VPLEX Witness connection failure scenarios, showing for each scenario which cluster continues I/O and which cluster suspends I/O]
The preferred cluster cannot receive guidance from VPLEX Witness and suspends I/O.
Remote Cluster Isolation - The preferred cluster loses contact with the remote cluster and
the non-preferred cluster loses contact with the VPLEX Witness. The preferred cluster is
connected to VPLEX Witness.
The preferred cluster continues I/O as it is still in contact with the VPLEX Witness.
The non-preferred cluster suspends I/O, as it is neither in contact with the other
cluster, nor can it receive guidance from VPLEX Witness.
Inter-Cluster Partition - Both clusters lose contact with each other, but still have access to
the VPLEX Witness. VPLEX Witness preserves the detach rule failure behaviors:
If the preferred cluster cannot proceed because it has not fully synchronized, the
cluster suspends I/O.
Active/passive
In an active/passive configuration, hosts at only one cluster write to legs of distributed
volumes located at both clusters. Failures in active/passive scenarios are handled as
follows:
Entire engine on the passive cluster fails - No loss of data or access. If either director in
the failed engine had deltas in commit or exchange, part of the delta is lost at the
passive cluster. These deltas are re-copied across the inter-cluster link from the active
cluster.
Entire engine on the active cluster fails - All data in open, closed, and exchanging
deltas is discarded. I/O is suspended at both clusters. Manual intervention is
required (resume-after-data-loss-failure command) to roll back the data image to the
last committed delta (the last consistent point in time).
Passive cluster fails - No loss of data or access. I/O continues on the active cluster. All
writes are logged. When the passive cluster is restored, and rejoins, a log rebuild
copies only the logged changes to resynchronize the passive cluster.
Active cluster fails - Dirty data is lost, and the volumes are suspended at the passive
cluster. Hosts at the active cluster experience a DU.
All writes at the passive cluster are logged. When the active cluster is restored and
rejoins VPLEX, a log rebuild copies only changes to the active cluster.
Users can fail-over applications to the passive cluster using the choose-winner and
resume-after-rollback commands.
Alternatively, users can wait for the failed cluster to be restored. When the failed
cluster is restored and rejoins, VPLEX recognizes that the failed cluster has lost data
and rolls back to the last committed delta (last time consistent image). The user must
manually re-enable I/O at the recovered cluster (resume-at-loser command).
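The following is a hedged sketch of that fail-over sequence in the VPLEX CLI. The
consistency-group name async_cg1 is hypothetical, and the exact argument forms may vary
by release; see the VPLEX CLI Guide for the authoritative syntax:

VPlexcli:/clusters/cluster-2/consistency-groups/async_cg1> choose-winner --cluster cluster-2
VPlexcli:/clusters/cluster-2/consistency-groups/async_cg1> resume-after-rollback

When the failed cluster is restored and rejoins, re-enable I/O at the recovered (losing) cluster:

VPlexcli:/clusters/cluster-1/consistency-groups/async_cg1> resume-at-loser --cluster cluster-1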
Inter-cluster link outage - No loss of data or access. I/O continues on the active
cluster. All I/O is logged. On the passive cluster, local volumes are accessible to local
hosts. When the inter-cluster link is restored, a log rebuild copies only the logged
changes to resynchronize the passive cluster.
Active/active
In an active/active configuration, hosts at both clusters write to legs of distributed
volumes at both clusters. Failures in active/active scenarios are handled as follows:
Entire engine on either cluster fails - All data in open, closed, and exchanging
deltas is discarded. I/O is suspended at both clusters. Manual intervention
(resume-after-data-loss-failure command) is required to roll back the data image to
the last committed delta, which is the last consistent point in time.
Either cluster fails - All data in open, closed, and exchanging deltas is discarded. I/O
to volumes is suspended at both clusters. All writes are logged at the surviving
cluster. When the failed cluster is restored and rejoins, a log rebuild copies only
changes to the restored cluster.
Users can fail-over applications to the surviving cluster using the choose-winner and
resume-after-rollback commands. VPLEX rolls back the data image to the end of the
last committed delta (last consistent point in time). Applications must be restarted or
rolled back.
Users can wait for the failed cluster to be restored. When the failed cluster is restored
and rejoins, VPLEX recognizes that the failed cluster has lost data and rolls back to the
last committed delta (last time consistent image). The user must manually re-enable
I/O at the recovered cluster (resume-at-loser command).
Inter-cluster link outage - I/O is suspended at both clusters. I/O at both clusters is
logged.
Users can wait for the network outage to resolve. When the inter-cluster link is
restored, a log rebuild copies only the logged changes to resynchronize the clusters.
Alternatively, users can manually choose a cluster to continue I/O. Manual
intervention (resume-after-data-loss-failure command) rolls back the data image to
the last committed delta, which is the last consistent point in time. Applications are
then re-started.
Failed director's protection partner also fails - If the failed director's protection partner
fails before dirty cache recovery completes, VPLEX has lost data.
This is both a DU and a DL.
Recovery from an external backup is required.
Inter-cluster link outage during director failure recovery - A single director failure is
followed by an inter-cluster link outage before the missing portions of the commit
delta are copied during failure recovery.
The cluster with the failed director did not complete committing the delta and the
image on backend storage is not a time consistent image.
If a detach rule or user command causes the unhealthy cluster to become a winner in
this state, the affected consistency groups stay suspended at that cluster.
If the unhealthy cluster (the one at which the original failure occurred) is not detached,
the user may:
Wait for the network to be restored.
No data is lost.
Accept a data rollback and resume I/O at the other, healthy cluster.
Multiple BE storage volume failures after an inter-cluster link outage - This scenario
occurs when a cluster is declared preferred during an inter-cluster link outage. When
the link recovers, the preferred cluster updates the other cluster (logging rebuild). If
the target disks on the cluster being updated fail, the consistency group suspends.
Recovery from an external backup is required.
Mirror local DR1 legs to two local arrays using RAID-1. Local mirrors minimize the risk
of back-end visibility issues during array failures.
Follow best practices for high availability when deploying VPLEX Geo configurations.
Refer to the VPLEX Product Guide.
Mgmt Connectivity
-----------------
ok
failed
ok

VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-1 is
                         remote-cluster-isolated-or-dead (last state change: 0
                         days, 15 secs ago; last message from server: 0 days, 0
                         secs ago.)
id                       1
management-connectivity  ok
operational-state        remote-cluster-isolated-or-dead

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              unknown
diagnostic               WARNING: Cannot establish connectivity with cluster-2
                         to query diagnostic information.
id                       2
management-connectivity  failed
operational-state        -

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state is cluster-unreachable (last
                         state change: 0 days, 15 secs ago.) (last time of
                         communication with cluster-1: 0 days, 0 secs ago.)
                         (last time of communication with cluster-2: 0 days,
                         34 secs ago.)
id
management-connectivity  ok
operational-state        cluster-unreachable
Installation considerations
It is important to deploy the VPLEX Witness Server VM in a failure domain separate from
either cluster.
A failure domain is a set of entities affected by the same set of faults. The scope of the
failure domain depends on the set of fault scenarios that can be tolerated in a given
environment. For example:
If the two clusters are deployed on different floors of the same data center, deploy the
VPLEX Witness Server VM on a separate floor.
If the two clusters are deployed in two different data centers, deploy the VPLEX
Witness Server VM in the third data center.
If the VPLEX Witness Server VM cannot be deployed in a failure domain independent of
either VPLEX cluster, VPLEX Witness should not be installed.
The following are additional recommendations for installing and deploying VPLEX Witness:
Connect the VPLEX Witness Server VM to a power source that is independent of power
sources supplying power to the clusters.
Enable the BIOS Virtualization Technology (VT) extension on the ESX host where the
VPLEX Witness Server VM is installed to ensure performance of the VPLEX Witness
Server VM.
The IP Management network (connecting the cluster management servers and the
VPLEX Witness Server VM) must be physically separate from the inter-cluster
networks.
Latency in the network between VPLEX Witness Server VM and the cluster
management servers should not exceed 1 second (round trip).
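One simple way to spot-check the round-trip latency requirement is from the management
server shell. This is a hedged illustration that assumes the standard Linux ping utility
and reuses the example VPLEX Witness public IP address shown later in this chapter:

service@ManagementServer:~> ping -c 5 10.31.25.45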
Complete instructions to install and deploy the VPLEX Witness Server VM are available in
the VPLEX generator.
CLI context
The VPLEX Witness software includes a client on each of the VPLEX clusters. VPLEX
Witness does not appear in the CLI until the client has been configured.
Complete instructions to configure the VPLEX Witness client are available in the VPLEX
generator.
Descriptions of the CLI commands to configure the VPLEX Witness client are available in
the VPLEX CLI Guide.
Once VPLEX Witness is installed and configured on the clusters, a CLI context called
cluster-witness appears in the CLI context tree:
VPlexcli:/cluster-witness> ls
Attributes:
Name                Value
------------------  -------------
admin-state         enabled
private-ip-address  128.221.254.3
public-ip-address   10.31.25.45

Contexts:
components
Disable the VPLEX Witness client on the clusters when either of the following occurs:
The VPLEX Witness Server VM fails, and is not expected to be restored quickly.
Connectivity between both clusters and the VPLEX Witness Server VM fails, and is not
expected to be restored quickly.
In these scenarios there is no impact on I/O to distributed volumes. Both clusters send a
call-home notification to indicate that they have lost connectivity with the VPLEX Witness
Server. However, an additional failure (cluster failure or inter-cluster link outage) may
result in data unavailability.
When you disable the VPLEX Witness client on the clusters, all distributed volumes in
consistency groups use their consistency group-level detach rules to determine which
cluster is preferred.
Complete instructions to enable/disable the VPLEX Witness client are available in the
VPLEX generator.
Descriptions of the CLI commands to enable/disable the VPLEX Witness client are
available in the VPLEX CLI Guide.
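As a hedged illustration only (the commands and any required confirmation arguments are
documented in the VPLEX CLI Guide), the client is disabled and re-enabled from the CLI
along these lines:

VPlexcli:/> cluster-witness disable
VPlexcli:/> cluster-witness enable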
Descriptions of the CLI commands to renew the VPLEX Witness security certificate are
available in the VPLEX CLI Guide.
Normal operation
Use the ls command in /cluster-witness context to display the state and IP addresses.
Use the ll components/ command in /cluster-witness context to display status of
components (VPLEX clusters and the VPLEX Witness Server).
Use the ll components/* command in /cluster-witness context to display detailed
information (including diagnostics).
Table 15 lists the fields in the output of these commands.
Table 15 VPLEX Witness display fields

Field                    Description
-----------------------  ------------------------------------------------------------------
Name                     Name of the component. For VPLEX clusters, the name assigned to
                         the cluster. For the VPLEX Witness server, server.
id                       ID of a VPLEX cluster. Always blank for the Witness server.
admin state              Whether VPLEX Witness is enabled on the component (values shown in
                         the output include enabled and unknown).
private-ip-address       Private IP address of the VPLEX Witness Server VM used for VPLEX
                         Witness-specific traffic.
public-ip-address        Public IP address of the VPLEX Witness Server VM used as an
                         endpoint of the IPsec tunnel.
diagnostic               String generated by the CLI based on the analysis of the data and
                         state information reported by the corresponding component:
                         WARNING: Cannot establish connectivity with VPLEX Witness Server
                         to query diagnostic information. - VPLEX Witness Server or one of
                         the clusters is unreachable.
                         Local cluster-x hasn't yet established connectivity with the
                         server - The cluster has never connected to VPLEX Witness Server.
                         Remote cluster-x hasn't yet established connectivity with the
                         server - The cluster has never connected to VPLEX Witness Server.
                         Cluster-x has been out of touch from the server for X days, Y secs
                         - VPLEX Witness Server has not received messages from a given
                         cluster for longer than 60 seconds.
                         VPLEX Witness server has been out of touch for X days, Y secs -
                         Either cluster has not received messages from VPLEX Witness Server
                         for longer than 60 seconds.
                         VPLEX Witness is not enabled on component-X, so no diagnostic
                         information is available - VPLEX Witness Server or either of the
                         clusters is disabled.
management connectivity  Reachability of the specified Witness component over the IP
                         management network from the management server where the CLI
                         command is run:
                         ok - The component is reachable.
                         failed - The component is not reachable.
Normal operation
During normal operation (no failures), the output of display commands from the
cluster-witness CLI context are as follows:
VPlexcli:/cluster-witness> ls
Attributes:
Name                Value
------------------  -------------
admin-state         enabled
private-ip-address  128.221.254.3
public-ip-address   10.31.25.45

Contexts:
components
VPlexcli:/cluster-witness> ll components/
/cluster-witness/components:
Name       ID  Admin State  Operational State    Mgmt Connectivity
---------  --  -----------  -------------------  -----------------
cluster-1  1   enabled      in-contact           ok
cluster-2  2   enabled      in-contact           ok
server         enabled      clusters-in-contact  ok
VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               INFO: Current state of cluster-1 is in-contact (last
                         state change: 0 days, 13056 secs ago; last message
                         from server: 0 days, 0 secs ago.)
id                       1
management-connectivity  ok
operational-state        in-contact

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               INFO: Current state of cluster-2 is in-contact (last
                         state change: 0 days, 13056 secs ago; last message
                         from server: 0 days, 0 secs ago.)
id                       2
management-connectivity  ok
operational-state        in-contact

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               INFO: Current state is clusters-in-contact (last state
                         change: 0 days, 13056 secs ago.) (last time of
                         communication with cluster-2: 0 days, 0 secs ago.)
                         (last time of communication with cluster-1: 0 days, 0
                         secs ago.)
id
management-connectivity  ok
operational-state        clusters-in-contact
Figure 22 VPLEX Witness Server and VPLEX Witness Server-to-clusters connectivity failures
The cluster that lost connectivity with VPLEX Witness Server sends a call-home 30 seconds
after connectivity is lost.
The following example shows the output of the ll commands when the VPLEX Witness
Server VM loses contact with cluster-2 (A2 in Figure 22):
VPlexcli:/cluster-witness> ll components/
/cluster-witness/components:
Name       ID  Admin State  Operational State    Mgmt Connectivity
---------  --  -----------  -------------------  -----------------
cluster-1  1   enabled      in-contact           ok
cluster-2  2   enabled      in-contact           ok
server         enabled      clusters-in-contact  ok
VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               INFO: Current state of cluster-1 is in-contact (last
                         state change: 0 days, 13439 secs ago; last message
                         from server: 0 days, 0 secs ago.)
id                       1
management-connectivity  ok
operational-state        in-contact

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               INFO: Current state of cluster-2 is in-contact (last
                         state change: 0 days, 13439 secs ago; last message
                         from server: 0 days, 2315 secs ago.)
id                       2
management-connectivity  ok
operational-state        in-contact

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               INFO: Current state is clusters-in-contact (last state
                         change: 0 days, 13439 secs ago.) (last time of
                         communication with cluster-1: 0 days, 0 secs ago.)
                         (last time of communication with cluster-2: 0 days,
                         2315 secs ago.)
id
management-connectivity  ok
operational-state        clusters-in-contact
Both clusters send a call-home indicating that they have lost connectivity with the VPLEX
Witness Server.
Note: If an additional inter-cluster link or cluster failure occurs, the system is at risk of data
unavailability unless manual action is taken to disable VPLEX Witness.
The following example shows the output of the ll commands when the VPLEX Witness
Server VM loses contact with both clusters:
Note: The cluster-witness CLI context on cluster-2 shows the same loss of connectivity to
the VPLEX Witness Server.
VPlexcli:/cluster-witness> ll components/
/cluster-witness/components:
Name       ID  Admin State  Operational State  Mgmt Connectivity
---------  --  -----------  -----------------  -----------------
cluster-1  1   enabled      in-contact         ok
cluster-2  2   enabled      in-contact         ok
server         unknown      -                  failed
VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-1 is in-contact
                         (last state change: 0 days, 94 secs ago; last message
                         from server: 0 days, 34 secs ago.)
id                       1
management-connectivity  ok
operational-state        in-contact

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-2 is in-contact
                         (last state change: 0 days, 94 secs ago; last message
                         from server: 0 days, 34 secs ago.)
id                       2
management-connectivity  ok
operational-state        in-contact

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              unknown
diagnostic               WARNING: Cannot establish connectivity with Cluster
                         Witness Server to query diagnostic information.
id
management-connectivity  failed
operational-state        -
Figure 24 Single failures; inter-cluster link or one cluster resulting in data unavailability
When a single failure occurs, each cluster receives guidance from the VPLEX Witness
Server, changes its internal state accordingly, and sends a call-home notification
indicating the guidance received.
The following example shows the output of the ll commands following an inter-cluster
link outage (cluster partition):

Mgmt Connectivity
-----------------
ok
ok
ok
VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-1 is
                         cluster-partition (last state change: 0 days, 56 secs
                         ago; last message from server: 0 days, 0 secs ago.)
id                       1
management-connectivity  ok
operational-state        cluster-partition

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-2 is
                         cluster-partition (last state change: 0 days, 57 secs
                         ago; last message from server: 0 days, 0 secs ago.)
id                       2
management-connectivity  ok
operational-state        cluster-partition

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state is cluster-partition (last
                         state change: 0 days, 57 secs ago.) (last time of
                         communication with cluster-1: 0 days, 0 secs ago.)
                         (last time of communication with cluster-2: 0 days, 0
                         secs ago.)
id
management-connectivity  ok
operational-state        cluster-partition
cluster-1 failure
The following example shows the output of the ll commands following a failure of cluster-1
(B3 in Figure 24):
VPlexcli:/cluster-witness> ll components/
/cluster-witness/components:
Name       ID  Admin State  Operational State                Mgmt Connectivity
---------  --  -----------  -------------------------------  -----------------
cluster-1  1   unknown      -                                failed
cluster-2  2   enabled      remote-cluster-isolated-or-dead  ok
server         enabled      cluster-unreachable              ok
VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              unknown
diagnostic               WARNING: Cannot establish connectivity with cluster-1
                         to query diagnostic information.
id                       1
management-connectivity  failed
operational-state        -

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-2 is
                         remote-cluster-isolated-or-dead (last state change: 0
                         days, 49 secs ago; last message from server: 0 days, 0
                         secs ago.)
id                       2
management-connectivity  ok
operational-state        remote-cluster-isolated-or-dead

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state is cluster-unreachable (last
                         state change: 0 days, 49 secs ago.) (last time of
                         communication with cluster-2: 0 days, 0 secs ago.)
                         (last time of communication with cluster-1: 0 days, 59
                         secs ago.)
id
management-connectivity  ok
operational-state        cluster-unreachable
cluster-2 failure
The following example shows the output of the ll commands following a failure of cluster-2
(B2 in Figure 24):
VPlexcli:/cluster-witness> ll components/
/cluster-witness/components:
Name       ID  Admin State  Operational State                Mgmt Connectivity
---------  --  -----------  -------------------------------  -----------------
cluster-1  1   enabled      remote-cluster-isolated-or-dead  ok
cluster-2  2   unknown      -                                failed
server         enabled      cluster-unreachable              ok
VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-1 is
                         remote-cluster-isolated-or-dead (last state change: 0
                         days, 15 secs ago; last message from server: 0 days, 0
                         secs ago.)
id                       1
management-connectivity  ok
operational-state        remote-cluster-isolated-or-dead

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              unknown
diagnostic               WARNING: Cannot establish connectivity with cluster-2
                         to query diagnostic information.
id                       2
management-connectivity  failed
operational-state        -

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state is cluster-unreachable (last
                         state change: 0 days, 15 secs ago.) (last time of
                         communication with cluster-1: 0 days, 0 secs ago.)
                         (last time of communication with cluster-2: 0 days,
                         34 secs ago.)
id
management-connectivity  ok
operational-state        cluster-unreachable
Note: Status can be used to determine that cluster-2 has failed and that Consistency
Groups may need manual intervention.
VPLEX Witness server guides the cluster that is still connected to continue I/O.
In D1 in Figure 25, cluster-2 is isolated. VPLEX Witness Server guides cluster-1 to proceed
with I/O on all distributed volumes in all consistency groups regardless of preference.
Cluster-2 suspends I/O to all distributed volumes in all consistency groups regardless of
preference.
The following examples show the output of the ll commands:
Mgmt Connectivity
-----------------
ok
failed
ok

VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-1 is
                         remote-cluster-isolated-or-dead (last state change: 0
                         days, 35 secs ago; last message from server: 0 days, 0
                         secs ago.)
id                       1
management-connectivity  ok
operational-state        remote-cluster-isolated-or-dead

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Cannot establish connectivity with cluster-2
                         to query diagnostic information.
id                       2
management-connectivity  failed
operational-state        -

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state is cluster-unreachable (last
                         state change: 0 days, 35 secs ago.) (last time of
                         communication with cluster-1: 0 days, 0 secs ago.)
                         (last time of communication with cluster-2: 0 days,
                         103 secs ago.) Remote cluster has been out of touch
                         from the server for 0 days, 103 secs.
id
management-connectivity  ok
operational-state        cluster-unreachable
Mgmt Connectivity
-----------------
failed
failed
failed

VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              unknown
diagnostic               WARNING: Cannot establish connectivity with cluster-1
                         to query diagnostic information.
id                       1
management-connectivity  failed
operational-state        -

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              unknown
diagnostic               WARNING: Cannot establish connectivity with cluster-2
                         to query diagnostic information.
id                       2
management-connectivity  failed
operational-state        -

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              unknown
diagnostic               WARNING: Cannot establish connectivity with Cluster
                         Witness Server to query diagnostic information.
id
management-connectivity  failed
operational-state        -
Mgmt Connectivity
-----------------
ok
ok
ok

VPlexcli:/cluster-witness> ll components/*
/cluster-witness/components/cluster-1:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-1 is
                         remote-cluster-isolated-or-dead (last state change: 0
                         days, 357 secs ago; last message from server: 0 days,
                         0 secs ago.)
id                       1
management-connectivity  ok
operational-state        remote-cluster-isolated-or-dead

/cluster-witness/components/cluster-2:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state of cluster-2 is
                         local-cluster-isolated (last state change: 0 days, 9
                         secs ago; last message from server: 0 days, 0 secs
                         ago.)
id                       2
management-connectivity  ok
operational-state        local-cluster-isolated

/cluster-witness/components/server:
Name                     Value
-----------------------  ------------------------------------------------------
admin-state              enabled
diagnostic               WARNING: Current state is cluster-unreachable (last
                         state change: 0 days, 357 secs ago.) (last time of
                         communication with cluster-1: 0 days, 0 secs ago.)
                         (last time of communication with cluster-2: 0 days, 0
                         secs ago.)
id
management-connectivity  ok
operational-state        cluster-unreachable
CHAPTER 11
Cache vaults
This chapter describes cache vaulting on VPLEX and explains how to recover after a cache
vault.
Write-back cache mode ensures data durability by storing user data into the cache
memory of the director that received the I/O, then placing a protection copy of this
data on another director in the local cluster before acknowledging the write to the
host. This ensures the data is protected in two independent memories. The data is
later destaged to back-end storage arrays that provide the physical storage media.
If a power failure occurs on a VPLEX Geo cluster (which uses write-back cache mode), the
data in cache memory is at risk. When data is at risk from a power failure in a VPLEX Geo
configuration, each VPLEX director copies its dirty cache data to local solid state
storage devices (SSDs) using a process known as cache vaulting. Dirty cache pages are
pages in a director's memory that have not been written to back-end storage but have been
acknowledged to the host. Dirty cache pages also include the copies protected on a
second director in the cluster. After each director vaults its dirty cache pages, VPLEX
shuts down the directors' firmware.
Note: Although there is no dirty cache data in VPLEX Local or VPLEX Metro configurations,
vaulting is still necessary to quiesce all I/O when data is at risk due to power failure. This
is done to minimize the risk of metadata corruption.
Once power is restored, the VPLEX system startup program initializes the hardware and
the environmental system, checks the data validity of each vault, and unvaults the data.
The process of system recovery and unvault depends largely on the configuration of the
system:
In a VPLEX Local or VPLEX Metro configuration, the cluster unvaults without recovering
any vault data because there was no data to vault.
In a VPLEX Geo system, if the remote cluster proceeded on its own (either because it
was the only active cluster at the time or because of administrative action), VPLEX
discards the vault and does not restore memory. Additionally, the
auto-resume-at-loser parameter affects whether the recovering cluster starts
processing I/O. By default this parameter is set to false for asynchronous consistency
groups. This means that by default the recovering cluster discards its vault and then
suspends and waits for manual intervention.
In a VPLEX Geo system, if the remote cluster waited, the vault is recovered, the two
clusters get back in touch and continue processing I/O.
When you resume operation of the cluster, if any condition is not safe, the system does
not resume normal status and calls home for diagnosis and repair. This allows EMC
Customer Support to communicate with the VPLEX system and restore normal system
operations.
Vaulting can be used in two scenarios:
Data at risk due to power failure: VPLEX monitors all components that provide power
to the VPLEX cluster. If it detects an AC power loss that matches the conditions
described in Power failures that cause vault, and the loss persists for more than 30
seconds, VPLEX takes a conservative approach and initiates a cluster-wide vault to
avoid a possible data loss.
Manual vault: an administrator explicitly requests a vault (see the description of
manual vaulting later in this chapter).
When performing maintenance activities on a VPLEX Geo configuration, service personnel
must not remove the power from one or more engines unless both directors in those
engines have been shut down and are no longer monitoring power. Failure to do so leads
to data unavailability in the affected cluster. To avoid unintended vaults, always follow
official maintenance procedures.
Under normal conditions, the SPS batteries can support two consecutive vaults. This
ensures that the system can resume I/O immediately after the first power failure, and that
it can still vault if there is a second power failure.
For information on the redundant and backup power supplies in VPLEX, refer to the
Hardware Overview chapter of the EMC VPLEX Product Guide.
Related Information
For additional information on cache vaulting, refer to the related topics in this chapter
and to the command descriptions in the VPLEX CLI Guide.

Power failures that cause vault
On all configurations, vaulting is triggered if all the following conditions are present:
AC power is lost (due to power failure or faulty hardware) in power zone A of engine X.
AC power is lost (due to power failure or faulty hardware) in power zone B of engine Y.
(X and Y are the same in a single-engine configuration, but may or may not be the
same in dual- or quad-engine configurations.)
Both conditions persist for more than 30 seconds.
Release 5.1:
In a VPLEX Local or VPLEX Metro configuration, vaulting is triggered if all the following
conditions are present:
AC power is lost (due to power failure or faulty hardware), or becomes unknown,
in the minimum number of directors required for the cluster to be operational.
The condition persists for more than 30 seconds.
Figure: Vaulting process flow - power ride-out (ended by power restored, ride-out
expired, or a manual vault request), quiesce I/O, dirty cache pages frozen, write vault,
vault written, stop director firmware (VPLX-000472)
When a cluster detects a condition described in Power failures that cause vault, VPLEX
triggers the cluster to enter a 30 second ride-out phase. This delays the (irreversible)
decision to vault, allowing for a timely return of AC input to avoid vaulting altogether.
During the ride-out phase, all mirror rebuilds and migrations pause, and the cluster
disallows new configuration changes on the local cluster, to prepare for a possible vault.
If the power is restored prior to the 30 second ride-out, all mirror rebuilds and migrations
resume, and configuration changes are once again allowed.
Power ride-out is not necessary when a manual vault has been requested. However,
similar to the power ride-out phase, manual vaulting stops any mirror rebuilds and
migrations and disallows any configuration changes on the local cluster.
Once the cluster has decided to proceed with vaulting the dirty cache, the vaulting cluster
quiesces all I/O and disables the inter-cluster links to isolate itself from the remote
cluster. These steps are required to freeze the directors' dirty cache in preparation for
vaulting.
Once the dirty cache (if any) is frozen, each director in the vaulting cluster isolates itself
from the other directors and starts writing. When finished writing to its vault, the director
stops its firmware.
This entire process is completed within the time parameters supported by the stand-by
power supplies.
It is important to ensure that the cluster is shut down in an organized fashion and to save
any remaining battery charge so that recharge completes faster when the cluster is
restarted.
Refer to the procedures published in the EMC VPLEX generator at EMC Online Support to
safely shut down and restart a VPLEX cluster after a vault.
Figure: Unvault recovery flow - if no asynchronous consistency groups are present,
recovery is skipped (vault inactive); otherwise the directors read the vault (unvaulting),
wait for unvault quorum if it has not yet been gained (unvault quorum waiting, with an
override unvault quorum option), then recover the vault until recovery is complete and
the vault is inactive (VPLX-000471)
At the start of cluster recovery, VPLEX checks to see if there are any configured
asynchronous consistency groups. If there are none (as would be the case in VPLEX Local
and VPLEX Metro configurations), the entire unvault recovery process is skipped.
As the directors boot up, each director reads the vaulted dirty data from its respective
vault disk. Once the directors have completed reading the vault, each director evaluates if
the unvaulted data can be recovered.
The cluster then evaluates if it has gained unvault quorum. The unvault quorum is the set
of directors that vaulted their dirty cache data during the last successful cluster wide vault.
In order to recover their vaulted data, which is required to preserve cache coherency and
avoid data loss, these directors must boot and rejoin the cluster. If the unvault quorum is
achieved the cluster proceeds to recover the vault.
If the cluster determines that it has not gained unvault quorum, it waits indefinitely for the
required directors to boot up and join the cluster. During the waiting period, the cluster
remains in the unvault quorum wait state.
After 30 minutes in the unvault quorum wait state, the cluster generates a call home
indicating the current state of the cluster and indicating that manual intervention is
needed to allow the cluster to process I/O.
Once the cluster enters the unvault quorum wait state, it cannot proceed to the recovery
phase until one of the following events happens:
All of the directors required for unvault quorum boot and rejoin the cluster.
You issue the override unvault quorum command and agree to accept a possible data
loss.
Refer to the VPLEX generator troubleshooting procedures for cache vaulting for
instructions on how to recover in this scenario. See the VPLEX CLI Guide for details on the
use of the override unvault quorum command.
Successful Recovery
VPLEX Geo can handle one invalid or missing vault because each director has vaulted a
copy of each dirty cache page of its protection partner. The cache can be recovered as long
as the original dirty cache vault or its protected copy is available.
An invalid vault can have several causes.
If the cluster determines it has sufficient valid vaults, it proceeds with recovery of the
vaulted data into the distributed cache. In this scenario the unvaulted cluster looks like a
cluster that has recovered after an inter-cluster link outage as no data is lost on the
vaulting cluster. VPLEX behavior following this recovery process depends on how the
detach rules were configured for each asynchronous consistency group.
Refer to Chapter 9, Consistency Groups.
Unsuccessful Recovery
If the cluster determines that more than one invalid vault is present, the cluster discards
the vault and reports a data loss. In this scenario the unvaulted cluster looks like a cluster
that has recovered after a cluster failure. The cluster still waits for all configured directors
to boot and rejoin the cluster. The volumes are marked as recovery-error and refuse I/O. If
one volume of a consistency group is marked recovery-error, all other volumes of that
consistency group must also refuse I/O.
CHAPTER 12
RecoverPoint
This chapter provides an overview of the RecoverPoint product family and how
RecoverPoint can be deployed with VPLEX Local and Metro configurations.
IPv6 example:
VPlexcli:/> rp rpa-cluster add -o 3ffe:80c0:22c:803c:211:43ff:fede:234
-u admin -c cluster-1
Enter rpa-cluster administrative password: Admin-password
Enter rpa-cluster administrative password again for verification:
Admin-password
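For comparison, an IPv4 invocation uses the same options. The following is a hedged sketch
that reuses the RPA cluster address shown in the examples that follow:

VPlexcli:/> rp rpa-cluster add -o 10.108.65.218 -u admin -c cluster-1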
Once the RecoverPoint cluster is added, a CLI context called recoverpoint appears in the
CLI context tree:
VPlexcli:/> ll recoverpoint/
/recoverpoint:
Name          Description
------------  -----------------------------------------------------------------
rpa-clusters  Contains all the 'clusters' of RecoverPoint Appliances registered
              in the system.

VPlexcli:/> cd recoverpoint/rpa-clusters/
VPlexcli:/recoverpoint/rpa-clusters> ll
RPA Host       VPLEX Cluster  RPA Site         RPA ID  RPA Version
-------------  -------------  ---------------  ------  -------------
10.108.65.218  cluster-1      Belvedere_Right  RPA 1   4.0.SP2(m.24)
VPlexcli:/recoverpoint/rpa-clusters/10.108.65.218> ll
Attributes:
Name                    Value
----------------------  ----------------------------------------------
admin-username          admin
config-changes-allowed  true
rp-health-indications   [Problem detected with RecoverPoint splitters]
rp-health-status        error
rpa-host                10.108.65.218
rpa-id                  RPA 1
rpa-site                Belvedere_Right
rpa-version             4.0.SP2(m.24)
vplex-cluster           cluster-1

Contexts:
Name                Description
------------------  -----------------------------------------------------------
consistency-groups  Contains all the RecoverPoint consistency groups which
                    consist of copies local to this VPLEX cluster.
volumes             Contains all the distributed virtual volumes with a local
                    extent and the local virtual volumes which are used by this
                    RPA cluster for RecoverPoint repository and journal volumes
                    and replication volumes.
VPlexcli:/recoverpoint/rpa-clusters/10.108.65.218/volumes> ll
Name                      RPA Site         RP Type      RP Role     RP Group  VPLEX Group       Capacity
------------------------  ---------------  -----------  ----------  --------  ----------------  --------
Belvedere_Symm_FK_Raid1_  Belvedere_Right  Repository                         REPO_CG_RIGHT     5G
Prod_5G_vol_0006_vol
SINGLE_LEG_1x1_R0_10_vol  Belvedere_Right  Journal                  TEST      SV_Expansion_CG1  2G
SINGLE_LEG_1x1_R0_11_vol  Belvedere_Right  Journal                  TEST      SV_Expansion_CG1  2G
SINGLE_LEG_1x1_R0_12_vol  Belvedere_Right  Journal                  TEST      SV_Expansion_CG1  2G
SINGLE_LEG_1x1_Rc_1_vol   Belvedere_Right  Journal                  TEST      SV_Expansion_CG1  2G
SINGLE_LEG_1x1_Rc_2_vol   Belvedere_Right  Journal                  TEST      SV_Expansion_CG1  2G
vol_0295_vol              Belvedere_Right  Replication  Production  TEST      Test_Bel          1.5G
                                                        Source
.
.
VPlexcli:/recoverpoint/rpa-clusters/10.108.65.218/consistency-groups> ll
Name
----
TEST

VPlexcli:/recoverpoint/rpa-clusters/10.108.65.218/consistency-groups/TEST> ll
Attributes:
Name                   Value
---------------------  --------
distributed-group      false
enabled                true
preferred-primary-rpa  RPA4
production-copy        prod
uid                    40e0ae42

Contexts:
Name
----------------
copies
links
replication-sets

VPlexcli:/recoverpoint/rpa-clusters/10.108.65.218/consistency-groups/TEST/copies> ll
Name
-------
prod
replica

VPlexcli:/recoverpoint/rpa-clusters/10.108.65.218/consistency-groups/TEST/copies> ll prod/
/recoverpoint/rpa-clusters/10.108.65.218/consistency-groups/TEST/copies/prod:
Name                  Value
--------------------  ---------------------------------------------------------
enabled               true
image-access-enabled
journal-volumes       [SINGLE_LEG_1x1_R0_10_vol, SINGLE_LEG_1x1_R0_11_vol,
                      SINGLE_LEG_1x1_R0_12_vol, SINGLE_LEG_1x1_Rc_1_vol,
                      SINGLE_LEG_1x1_Rc_2_vol]
role                  Production Source

VPlexcli:/recoverpoint/rpa-clusters/10.108.65.218/consistency-groups/TEST> ll replication-sets/
/recoverpoint/rpa-clusters/10.108.65.218/consistency-groups/TEST/replication-sets:
Name
-----
RSet0

/recoverpoint/rpa-clusters/10.108.65.218/consistency-groups/TEST/replication-sets/RSet0:
Name          Value
------------  --------------------------------------
size          1.5G
user-volumes  [DC3_2GB_data_404 (403), vol_0295_vol]
Configuration/operation guidelines
In VPLEX Metro configurations, RecoverPoint Appliances (RPAs) can be configured at only
one VPLEX cluster. Data from the cluster where the RPAs are configured is replicated to the
peer VPLEX cluster (by VPLEX), and to a third site (by RecoverPoint).
Virtual image access is not supported.
Device migrations between two VPLEX clusters are not supported if one leg of the device is
replicated by RecoverPoint.
If a VPLEX director and a RecoverPoint Appliance are restarted at the same time, a full
sweep may occur.
An RPA cluster supports up to 5,000 volumes exported to it from all storage arrays that it
can see, and up to 280,000 ITL paths.
Repository volume
The RecoverPoint repository volume must be:
A local volume on the VPLEX cluster where the RPAs are configured
Not a distributed volume
When VPLEX is installed into an existing RecoverPoint configuration, the repository volume
is already configured. There is no procedure to move a non-VPLEX repository volume.
When installing VPLEX into an existing RecoverPoint configuration, continue using the
non-VPLEX repository volume. When the storage array that hosts the repository volume is
refreshed, move the repository volume to VPLEX-hosted storage.
Once the repository volume is in VPLEX, it can be moved as needed.
Replica volumes
Performance of replica volumes should be the same or faster than their associated
production volumes.
RecoverPoint CDP requires one replica volume on the production site for each
production volume.
RecoverPoint CRR requires one replica volume on the remote site for each production
volume.
RecoverPoint CLR requires one replica volume on production site AND one on the
remote site for each production volume.
Journal volumes
Performance is crucial for journal volumes.
Configure journal volumes on the fastest storage available.
RecoverPoint journal volumes must be local volumes on the VPLEX cluster where the RPAs
are configured.
RecoverPoint journal volumes cannot be distributed volumes.
Refer to the EMC RecoverPoint Administrators Guide for information about sizing journal
volumes.
VPLEX volumes
Only VPLEX volumes that are members of a VPLEX consistency group with the
recoverpoint-enabled attribute set to true can be replicated by RecoverPoint. See
Recoverpoint-enabled on page 142.
Only local and distributed volumes can be replicated using RecoverPoint - not remote
volumes.
The following volumes may not be added to a VPLEX storage view created to support
RecoverPoint:
Remote volumes
Volumes already in a different RecoverPoint storage view
Volumes in VPLEX consistency groups whose members are in a different
RecoverPoint storage view
Note: A RecoverPoint cluster may take up to 2 minutes to take note of changes to VPLEX
consistency groups.
Wait for 2 minutes after making the following changes before creating or changing a
RecoverPoint consistency group:
Only VPLEX volumes that are members of VPLEX consistency groups with the
recoverpoint-enabled attribute set to true can be replicated by RecoverPoint. See
Recoverpoint-enabled on page 142.
The cluster at which RPAs are configured must be the preferred cluster as designated
by the VPLEX consistency group detach rules. See Detach-rule on page 140.
Administrators and VPLEX Witness can override the preferred cluster designation
during inter-cluster link outages. See Failures in Metro Systems: With VPLEX Witness
on page 175.
For Production source volumes, this allows normal high availability behavior. Writes to
the preferred cluster are logged for the duration of the link outage. When the link is
restored, a log rebuild copies only the logged changes to the splitter.
No snapshots are kept during the rebuild.
All Production source volumes for a given RecoverPoint consistency group should be
in one VPLEX consistency group. All the Replica volumes for a given RecoverPoint
consistency group should be in another VPLEX consistency group (not the same VPLEX
consistency group as the Production source volumes).
If a RecoverPoint consistency group with a Production source volume and a Replica
volume fails over, the Replica becomes the Production source. Having the replica in a
separate VPLEX consistency group ensures that I/O from the hosts that the replica now
services (as the Production source volume) remains write-order consistent from a VPLEX
standpoint.
If a virtual volume is in an RP consistency group and has the same role as the other
virtual volumes in a VPLEX consistency group, add it to that VPLEX consistency group.
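Because only volumes in a recoverpoint-enabled VPLEX consistency group can be replicated,
the attribute is typically set on the group before the RecoverPoint configuration is built.
The following is a hedged sketch in which the group name rp_prod_cg is purely
illustrative:

VPlexcli:/clusters/cluster-1/consistency-groups/rp_prod_cg> set recoverpoint-enabled true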
VPLEX NDU
IMPORTANT
Confirm that the new version of VPLEX GeoSynchrony is compatible with the version of
RecoverPoint. Refer to the EMC VPLEX Release Notes for the GeoSynchrony release.
When a VPLEX cluster is being upgraded, it puts any RecoverPoint system it can talk to into
maintenance mode.
When the upgrade is complete, VPLEX takes the RecoverPoint system out of maintenance
mode.
When two VPLEX systems are connected to the same RecoverPoint cluster, the VPLEX
systems cannot be upgraded at the same time. Upgrade on the second VPLEX fails until
upgrade on the first VPLEX completes.
Note: All RecoverPoint consistency groups must be changed to asynchronous mode during
VPLEX NDU.
Do not upgrade VPLEX systems connected to the same RecoverPoint replication
environment at the same time.
Storage failures
If a storage volume at the VPLEX cluster where the RPAs are configured fails while the
host continues writing to the volume, those writes are not included in the snapshots of
the VPLEX consistency group and do not arrive at the replica until the rebuild occurs.
In the case of a conflicting detach, as long as there is an active splitter (that is,
replication is not disabled), its host cluster must be the "datawin" target if the
administrator chooses to resolve the conflict.
In the case of a storage failure of the mirror leg of a replica volume at the RecoverPoint
site, there must be a rebuild to correct the data on the replica. This puts the replica
volume into tracking mode (overriding fail-all), and rebuild I/O is allowed to proceed.
The rebuild stores the current state of the replica storage in the replica journal, at a
rate of 91 MB/sec.
Zoning
Refer to RecoverPoint Deploying with VPLEX Technical Notes for detailed information.
The following information is overview and guidance only.
Zoning is specific to site and topology. Best practice is to use the RPA Fibre Channel ports
as both initiators and targets to achieve maximum performance, redundancy, and optimal
use of resources.
If, due to Initiator-Target LUN (ITL) limitations or other non-RecoverPoint considerations,
you need to zone RPA Fibre Channel ports in either the initiator zone or the target zone,
but not both, there are minor differences in performance and availability.
Initiator-target separation is not supported when:
When using Fibre Channel ports that can be both initiators and targets, best practice is to
add all initiator and target ports to the same zone.
Each RPA must have at least two paths to the VPLEX cluster.
Each RecoverPoint appliance should have at least two physical connections to the
front-end fabric switch.
Each RPA should be zoned to provide paths to every virtual volume via at least two
directors.
Each director in the VPLEX cluster must have redundant I/O paths to every RPA.
Each director must have redundant physical connections to the back-end fabric
switches.
Note: Each director supports a maximum of four paths to any one RPA.
Each director must have redundant I/O paths to every back-end storage array.
Best practice is that each director has redundant physical connections to the back-end
storage fabrics.
Management tools
RecoverPoint can be managed using either a Command Line Interface (CLI) or Graphical
User Interface (GUI).
RecoverPoint CLI
Use the RecoverPoint CLI to manage and monitor activities interactively, or through
scripts.
There are two main modes of work in a CLI session:
CLI mode - Get help and interact with the system using CLI commands. CLI mode includes:
help mode - retrieve information regarding each CLI command, its parameters and
usage.
interactive mode - guide the user when running single commands, allowing them to
view each command parameter and its possible values while running the command.
expert mode - input multiple parameters and their values for a single command.
Script mode - Interact with the system using scripts containing CLI commands.
To open a session with a RecoverPoint cluster or RPA and communicate using the CLI,
create an SSH connection to either the site management IP (preferred, for a RecoverPoint
cluster) or IP address of a specific RPA configured during a RecoverPoint installation.
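For example, from a Linux host this is simply an SSH session. The address below is the RPA
cluster shown earlier in this chapter, and the admin user name is assumed for illustration:

ssh admin@10.108.65.218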
The tools to work with the RecoverPoint CLI vary depending on which mode is used to
access the CLI.
For CLI mode, download and install the free SSH connection utility PuTTY, or use the
SSH utility that comes with Linux or UNIX.
For Script mode, use the SSH utility that comes with the operating system to run
scripts and commands.
For more information about the command-line interface, see the EMC RecoverPoint
Command Line Interface Reference Guide.
RecoverPoint GUI
The RecoverPoint Graphical User Interface (GUI) is a java-based Web application. The GUI
enables the user to monitor and manage a single or multiple RecoverPoint cluster
connected to each other.
To open a session with a RecoverPoint cluster or RPA:
CHAPTER 13
Performance and Monitoring
This chapter describes RPO/RTO and the procedures to create and operate performance
monitors.
About performance
This chapter describes topics related to performance on VPLEX systems, including RPO/RTO
considerations, XCOPY statistics, and performance monitoring.
The maximum drain rate is the rate at which cache pages can be exchanged and
written to back-end storage; that is, the rate at which deltas can be protected. The
maximum drain rate is a function of the inter-cluster WAN speed and back-end
storage-array performance.
                                            Default     Maximum                Configurable?
------------------------------------------  ----------  ---------------------  -------------
Number of clusters /
Number of asynchronous consistency groups   0 - 16      16, 64                 Yes
Delta size                                  16 MB       16 MB                  No
Closeout time                               30 seconds  0 (no closeout time)   Yes
Note: Table 16 assumes the same number of engines configured in each cluster.
To calculate RPO, apply the drain rate. For example:
Assuming a drain rate of 100 MB/s:
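The following is a rough, hypothetical illustration rather than a formula from this guide;
it simply combines the 8.0 GB of outstanding data that Table 17 lists for a dual-engine
system at the default queue depth with the assumed 100 MB/s drain rate:

8.0 GB of dirty data   =  8192 MB
8192 MB / 100 MB/s     =  approximately 82 seconds to drain

The data at risk therefore corresponds to roughly 82 seconds of writes, plus the closeout
time (30 seconds by default), giving an RPO on the order of two minutes for this example.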
Table 17 shows the impact of increasing the maximum queue depth from the default value
of 6 to the maximum value of 64 for systems with the maximum number of asynchronous
consistency groups (16) configured:
Table 17 Maximum roll back loss for Geo with 16 asynchronous consistency groups

Engines/cluster  Total # directors  Max queue depth  Maximum roll back loss
---------------  -----------------  ---------------  ----------------------
1                4                  6 (default)      4.0 GB
1                4                  64               62.0 GB
2                8                  6 (default)      8.0 GB
2                8                  64               124.0 GB
4                16                 6 (default)      16.0 GB
4                16                 64               248.0 GB
2. Verify the status of the xcopy-enabled attribute by listing all attributes for all
storage-views as follows:
VPlexcli:/> ll /clusters/cluster-1/exports/storage-views/*
Caution: Changing the default template value of the XCOPY attribute changes the value of
the XCOPY attribute in all newly created storage-views. Consequently, this should be
done only in rare instances, usually after consultation with VPLEX Level-3 engineering.
Changing the default template value may have an adverse effect on VMware host I/O
performance.
1. To enable XCOPY by default, set the default-xcopy-template attribute to true as
follows:
VPlexcli:/> set /clusters/*::default-xcopy-template <true|false>
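For example, to enable XCOPY by default on all clusters (a direct substitution into the
syntax above):

VPlexcli:/> set /clusters/*::default-xcopy-template true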
The XCOPY performance statistics include the following (see the Online Help for
descriptions of each statistic):
fe-director.xcopy-avg-lat
fe-lu.xcopy-avg-lat
fe-lu.xcopy-ops
fe-prt.xcopy-avg-lat
fe-prt.xcopy-ops
Current load monitoring allows administrators to watch CPU load during upgrades, I/O
load across the inter-cluster WAN link, and front-end vs. back-end load during data
mining or back up.
Current load monitoring is supported in the GUI.
Long term load monitoring collects data for capacity planning and load balancing.
Long term load monitoring is supported by monitors created in the CLI and/or
perpetual monitors.
You can use the CLI to create custom monitors to collect and display selected statistics for
selected targets.
See Monitor performance using the CLI on page 225.
Preconfigured monitors
Use the report create-monitors command to create three pre-configured monitors for each
director.
See Pre-configured performance monitors on page 234.
Perpetual monitors
Starting in Release 5.0, GeoSynchrony includes perpetual monitors that gather a standard
set of performance statistics every 30 seconds.
Note: Perpetual monitors do not collect per volume statistics.
Perpetual monitor files are collected as part of collect-diagnostics. Collect-diagnostics is
per cluster, so in Metro or Geo configurations, you must run the command from both
VPLEX management servers.
Output of the perpetual monitors is captured in the file smsDump_<date>.zip inside the
base collect-diagnostics zip file.
Within the smsDump_<date>.zip file, monitor files are in clilogs/.
You can also copy the perpetual monitor files from the management server. They are
located in /var/log/VPlex/cli/. There is one perpetual monitor file per director,
identifiable by the keyword PERPETUAL.
For example: director-1-1-A_PERPETUAL_vplex_sys_perf_mon.log
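A quick way to locate these files from the management server shell (a hedged illustration
that assumes the standard Linux ls and grep utilities) is:

service@ManagementServer:~> ls /var/log/VPlex/cli/ | grep PERPETUAL
director-1-1-A_PERPETUAL_vplex_sys_perf_mon.log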
Performance information for the current 5-minute window is displayed as a set of charts,
including:
WAN Link Performance chart - Shows the WAN link performance for the cluster to
which you are connected. Use this chart to monitor link performance to help
determine the bandwidth requirements for your specific environment, gather
statistical data over time, monitor network traffic during peak periods, or to plan data
mobility jobs to avoid peak usage times.
WAN Link Usage chart - Shows the WAN link usage for the cluster to which you are
connected. Use this chart to monitor link usage to help determine the bandwidth
requirements for your specific environment, gather statistical data over time, monitor
network traffic during peak periods, or to plan data mobility jobs to avoid peak usage
times. The chart allows you to separately monitor the amount of bandwidth used for
normal system operations, writes to distributed volumes, and mobility jobs.
WAN Latency chart - Provides a time-based view of the WAN Latency. The categories
avg-lat/min-lat/max-lat each report values observed in the last 5 seconds or less.
WAN Message Size chart - Provides a time-based view of the WAN message sizes.
Average sent and received WAN message sizes are the average unit of data that VPLEX
communicates between directors.
WAN Packet Drops chart - Provides a time-based view of the WAN Packet Drops. WAN
Packet drops are presented as a percentage of total packets, specified by sent or
received, that were dropped by the director, or a director port.
Write Latency Delta chart - Provides the delta between front-end latency and back-end
latency per director. This is a key metric for Local/Metro: the amount of overhead
time VPLEX spends processing a write.
Back-end Errors chart - Displays the back-end I/O errors to and from the storage array.
There are three categories of back-end errors: Aborts, timeouts, and resets.
Back-end Throughput chart - Shows the back-end I/Os per second over time for
directors. Generally, throughput (more commonly referred to as IOPS) is associated
with small block I/O (4KB or 16KB I/O requests).
Back-End Bandwidth chart - Shows the quantity of back-end reads and writes per
second over time for directors. Generally bandwidth (measured in KB/s or MB/s) is
associated with large block I/O (64KB or greater I/O requests).
Back-end Latency chart - Provides details of the back-end latency statistics for your
VPLEX system in graphical form over time. The chart allows you to view current or
historical performance data that you can use to monitor peaks in workload, detect
performance issues, or view what was happening in the system when a specific
problem occurred.
Rebuild Status chart - Displays the status of any rebuilds or migration operations that
are running on your VPLEX system.
CPU Utilization chart - Provides a time-based view of the utilization load on the
primary director CPU on your VPLEX system. By default, the chart shows an averaged
view of the utilization loads of all the directors in your VPLEX system.
Heap Usage chart - Shows a percentage of the heap memory used by the firmware on a
director.
Front-end Aborts chart - Displays the number of aborts per second over time for
directors on your VPLEX system. By default, the chart shows averaged front-end
aborts for the VPLEX system.
Front-End Bandwidth chart - Displays the quantity of front-end reads and writes per
second over time for directors on your VPLEX system. By default, the chart shows the
total front-end bandwidth for the VPLEX system.
Front-end Latency chart - Provides details of the front-end latency statistics for your
VPLEX system in graphical form over time. The chart allows you to view current or
historical performance data that you can use to monitor peaks in workload, detect
performance issues, or view what was happening in the system when a specific
problem occurred.
Front-end Queue Depth chart - Provides the count of front-end operations per director.
It describes the number of concurrent outstanding operations active in the system.
Front-End Throughput chart - Displays the front-end I/Os per second over time for
directors on your VPLEX system. By default, the chart shows the total front-end
throughput for the VPLEX system.
Subpage Writes chart - Displays the percentage of subpage writes over time for
directors on your VPLEX system. By default, the chart shows an averaged subpage
writes chart for the VPLEX system.
Virtual Volume Throughput chart - Provides a time-based view of the total throughput
or IOPS for a virtual volume. Generally, throughput (more commonly referred to as
IOPS) is associated with small block I/O (512B to 16KB I/O requests).
Virtual Volume Latency chart - Provides a time-based view of the IO Latency for a
virtual volume broken down by read and write latency. Virtual volume latency is
defined as the amount of time an I/O spends within VPLEX for a given virtual volume.
Virtual Volume Bandwidth chart - Provides a time-based view of the total bandwidth
(in KB/s or MB/s) of reads and writes for a virtual volume. Generally, bandwidth is
associated with large block I/O (64KB or greater I/O requests).
monitors - Gather the specified statistic from the specified target at the specified
interval.
monitor sinks - Direct the output to the desired destination. Monitor sinks include the
console, a file, or a combination of the two.
Note: SNMP statistics do not require a monitor or monitor sink. Use the snmp-agent
configure command to configure and start the SNMP agent. Refer to Performance
statistics retrievable by SNMP on page 253.
Send output to a CSV file, open the file in Microsoft Excel, and create a chart.
Do NOT edit the CSV file in Microsoft Excel and then save the file. Excel removes the
seconds field, resulting in duplicate timestamps. Use Excel to look at the CSV files,
but don't save any edits.
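To look at a sink's CSV output without risking such edits, you can also view it read-only on the management server; the path shown is illustrative:
service@ManagementServer:~> head -5 /var/log/VPlex/cli/director_1_1_A.csv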
9. Use the monitor destroy command to remove the monitor.
Create a monitor
Use the monitor create command to create a monitor and specify the statistics collected
by the monitor.
The syntax for the command is:
monitor create
[-p|--period] collection-period
[-n|--name] monitor-name
[-d|--director] context-path,context-path
[-s|--stats] stat,stat,stat,stat
[-t|--targets] context-path,context-path
Required arguments
--name - Name of the monitor.
--stats stat,stat,stat - One or more statistics to monitor, separated by commas. Examples
of statistics (stats) include:
be aborts
be resets
be timeouts
See the Online Help for a complete list of available performance monitor statistics.
Optional arguments
--period - Frequency at which this monitor collects statistics. Valid arguments are an
integer followed by:
ms - milliseconds (period is truncated to the nearest second)
s - seconds (Default)
min - minutes
h - hours
0 - Disables automatic polling
The default period is 30 seconds.
--director context path, context path... - Context path(s) to one or more directors to display
statistics for. Separated by commas.
--targets context path, context path... - Context path(s) to one or more targets to display
statistics for, separated by commas. Applicable only to statistics that require a target.
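For example, a minimal sketch of creating a monitor that samples two director-level statistics every 10 seconds (the director context path and statistic names are illustrative; see the Online Help for the exact names on your system):
VPlexcli:/> monitor create --name TestMonitor --period 10s --director /engines/engine-1-1/directors/director-1-1-A --stats director.fe-ops,director.be-ops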
Add a console sink
Use the monitor add-console-sink command to add a console sink to the specified monitor.
The syntax for the command is:
add-console-sink
[-o|--format] {csv|table}
[-m|--monitor] monitor-name
--force
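For example, a sketch that adds a console sink with the default table format (assuming the command is invoked as monitor add-console-sink from the root context, with the monitor name shown in the listing below):
VPlexcli:/> monitor add-console-sink --monitor Director-2-1-B_TestMonitor --format table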
Navigate to the monitor context and use the ll console command to display the sink:
VPlexcli:/> cd monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_TestMonitor/sinks
VPlexcli:/monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_TestMonitor/sinks> ll
Name     Enabled  Format  Sink-To
-------  -------  ------  -------
console  true     table   console

VPlexcli:/monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_TestMonitor/sinks> ll console
/monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_TestMonitor/sinks/console:
Name     Value
-------  -------
enabled  true
format   table
sink-to  console
type     console
Add a file sink
Use the monitor add-file-sink command to add a file sink that sends output to the specified
directory on the management server.
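For example, a sketch that adds a file sink writing CSV output (the --monitor and --file arguments are assumptions; check the command help for the exact options):
VPlexcli:/> monitor add-file-sink --monitor director-1-1-A_stats --file /var/log/VPlex/cli/director_1_1_A.csv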
Navigate to the monitor sinks context and use the ll sink-name command to display the
sink:
VPlexcli:/> cd monitoring/directors/Director-1-1-A/monitors/director-1-1-A_stats/sinks
VPlexcli:/monitoring/directors/Director-1-1-A/monitors/director-1-1-A_stats/sinks> ll file
/monitoring/directors/Director-1-1-A/monitors/director-1-1-A_stats/sinks/file:
Name     Value
-------  -------------------------------------
enabled  true
format   csv
sink-to  /var/log/VPlex/cli/director_1_1_A.csv
type     file
Delete a monitor
Use the monitor destroy monitor command to delete a specified monitor.
For example:
VPlexcli:/monitoring/directors/director-1-1-B/monitors> monitor destroy
director-1-1-B_TestMonitor
WARNING: The following items will be destroyed:
Context
----------------------------------------------------------------------------
/monitoring/directors/director-1-1-B/monitors/director-1-1-B_TestMonitor
Do you wish to proceed? (Yes/No) y
Display monitors
Use the ls /monitoring/directors/*/monitors command to display the names of all
monitors configured on the system:
VPlexcli:/> ls /monitoring/directors/*/monitors
/monitoring/directors/director-1-1-A/monitors:
DEFAULT_director-1-1-A_PERPETUAL_vplex_sys_perf_mon_v8
director-1-1-A_Billy35_FE_A0-FC00_stats
director-1-1-A_director-fe-21112011
director-1-1-A_diskReportMonitor
.
.
.
/monitoring/directors/director-1-1-B/monitors:
DEFAULT_director-1-1-B_PERPETUAL_vplex_sys_perf_mon_v8
.
.
.
VPlexcli:/> ll /monitoring/directors/director-1-1-A/monitors
/monitoring/directors/director-1-1-A/monitors:
Name                           Ownership  Collecting  Period  Average  Idle  Bucket  Bucket  Bucket  Bucket
                                          Data                Period   For   Min     Max     Width   Count
-----------------------------  ---------  ----------  ------  -------  ----  ------  ------  ------  ------
director-1-1-A_FE_A0-FC00      false      false       5s      -        -     -       -       -       64
director-1-1-A_director-fe     false      false       5s      -        -     -       -       -       64
director-1-1-A_ipcom-21112011  false      false       5s      -        -     -       -       -       64
director-1-1-A_portReportMon   false      false       5s      -        -     -       -       -       64
.
.
.
VPlexcli:/> ll /monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_volumeReportMonitor

Attributes:
Name             Value
---------------  ---------------------------------------------------------------
average-period   -
bucket-count     64
bucket-max       -
bucket-min       -
bucket-width     -
collecting-data  true
firmware-id      9
idle-for         5.44days
ownership        true
period           0s
statistics       [virtual-volume.ops, virtual-volume.read, virtual-volume.write]
targets          DR1_C1-C2_1gb_dev10_vol, DR1_C1-C2_1gb_dev11_vol,
                 DR1_C1-C2_1gb_dev12_vol, DR1_C1-C2_1gb_dev13_vol,
                 DR1_C1-C2_1gb_dev14_vol, DR1_C1-C2_1gb_dev15_vol,
                 DR1_C1-C2_1gb_dev16_vol, DR1_C1-C2_1gb_dev17_vol, ...
Contexts:
Name   Description
-----  ------------------------------------------------------------------------
sinks  Contains all of the sinks set up to collect data from this performance
       monitor.
VPlexcli:/> ll /monitoring/directors/Director-2-1-B/monitors/Director-2-1-B_volumeReportMonitor/sinks
/monitoring/directors/bob70/monitors/bob70_volumeReportMonitor/sinks:
Name  Enabled  Format  Sink-To
----  -------  ------  ---------------------------------------------------------
file  true     csv     /var/log/VPlex/cli/reports/volumeReportMonitor_bob70.csv
Table 19 Monitor and sink field descriptions

Field            Description
---------------  -----------------------------------------------------------------
average-period
bucket-count
bucket-max
bucket-min
bucket-width     The width of the range of values that a given bucket represents.
                 Use this to adjust the upper bound of monitoring.
collecting-data  Whether or not this performance monitor is collecting data. A
                 monitor collects data if it has at least one enabled sink.
firmware-id
idle-for         The elapsed time since this performance monitor was accessed in
                 the firmware.
name
ownership        Whether or not this monitor was created in this instance of VPlex
                 Management Console.
period
statistics
targets          List of targets that apply to the monitored performance
                 statistics. A target can be a port, storage-volume, or virtual
                 volume.
                 Note: Not all statistics require targets.

Field            Description
---------------  -----------------------------------------------------------------
Name             For file sinks, the name of the created sink context. Default is
                 'file'.
Enabled
Format
Sink-To
Enable/disable/change polling
Polling (collection of the specified statistics) begins when the first sink is added to a
monitor. Polling occurs automatically at the interval specified by the monitor's period
attribute.
Use the set command to change the polling period.
Use the monitor collect command to run a collection immediately, before its defined
polling interval.
Use the set command to disable, or modify automatic polling for a monitor.
In the following example, the set command changes the period attribute to 0, disabling
automatic polling, and the ll command confirms the change:

Name             Value
---------------  -------
bucket-count     64
collecting-data  false
firmware-id      4
idle-for         5.78min
ownership        true
period           0s

To re-enable polling, use the set command to change the period attribute to a non-zero
value.
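A brief sketch of these operations on a hypothetical monitor (the context path is illustrative, and monitor collect is assumed to accept the monitor's context path as its argument):
VPlexcli:/> cd /monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats
VPlexcli:/monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats> set period 0
VPlexcli:/monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats> monitor collect /monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats
VPlexcli:/monitoring/directors/director-1-1-A/monitors/director-1-1-A_stats> set period 30s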
Enable/disable sinks
Use the set command to enable/disable a monitor sink.
To disable a monitor sink:
VPlexcli:/monitoring/directors/director-2-1-B/monitors/director-2-1-B_TestMonitor/sinks/console> set enabled false
VPlexcli:/monitoring/directors/director-2-1-B/monitors/director-2-1-B_TestMonitor/sinks/console> ll
Name     Value
-------  -------
enabled  false
format   table
sink-to  console
type     console
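To re-enable the sink, set the attribute back to true:
VPlexcli:/monitoring/directors/director-2-1-B/monitors/director-2-1-B_TestMonitor/sinks/console> set enabled true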
Cluster_n_Dir_nn_diskReportMonitor
Cluster_n_Dir_nn_portReportMonitor
Cluster_n_Dir_nn_volumeReportMonitor
The period attribute for the new monitors is set to 0 (automatic polling is disabled).
The report poll-monitors command is used to force a poll.
Each monitor has one file sink. The file sinks are enabled.
By default, output files are located in /var/log/VPlex/cli/reports/ on the management
server. Output filenames are in the following format:
<Monitor name>_Dir_nn.csv
/monitoring/directors/Cluster_1_Dir1B/monitors:
Name                                 Ownership  Collecting  Period  Average  Idle     Bucket  Bucket   Bucket  Bucket
                                                Data                Period   For      Min     Max      Width   Count
-----------------------------------  ---------  ----------  ------  -------  -------  ------  -------  ------  ------
Cluster_1_Dir1B_diskReportMonitor    true       true        0s      -        6.88min  100     1600100  25000   64
Cluster_1_Dir1B_portReportMonitor    true       true        0s      -        6.68min  -       -        -       64
Cluster_1_Dir1B_volumeReportMonitor  true       true        0s      -        6.7min   -       -        -       64
.
.
.
An empty .csv file is created on the management server for each of the monitors:
service@ManagementServer:/var/log/VPlex/cli/reports> ll
total 36
.
.
.
-rw-r--r-- 1 service users 0 2010-08-19 13:55 diskReportMonitor_Cluster_1_Dir1A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 diskReportMonitor_Cluster_1_Dir1B.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:56 diskReportMonitor_Cluster_2_Dir_1A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 diskReportMonitor_Cluster_2_Dir_1B.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 diskReportMonitor_Cluster_2_Dir_2A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:56 diskReportMonitor_Cluster_2_Dir_2B.csv
-rw-r--r-- 1 service users 5 2010-08-13 15:04 portPerformance_cluster-1.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 portReportMonitor_Cluster_1_Dir1A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 portReportMonitor_Cluster_1_Dir1B.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:56 portReportMonitor_Cluster_2_Dir_1A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 portReportMonitor_Cluster_2_Dir_1B.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 portReportMonitor_Cluster_2_Dir_2A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:56 portReportMonitor_Cluster_2_Dir_2B.csv
-rw-r--r-- 1 service users 5 2010-08-13 15:04 volumePerformance_cluster-1.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 volumeReportMonitor_Cluster_1_Dir1A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 volumeReportMonitor_Cluster_1_Dir1B.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:56 volumeReportMonitor_Cluster_2_Dir_1A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 volumeReportMonitor_Cluster_2_Dir_1B.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:55 volumeReportMonitor_Cluster_2_Dir_2A.csv
-rw-r--r-- 1 service users 0 2010-08-19 13:56 volumeReportMonitor_Cluster_2_Dir_2B.csv
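To populate these files, force an immediate poll and then re-list the directory; a minimal sketch (assuming report poll-monitors requires no additional arguments):
VPlexcli:/> report poll-monitors
service@ManagementServer:/var/log/VPlex/cli/reports> ll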
Consistency groups
These statistics apply only to consistency groups with cache mode set to asynchronous.
Because these statistics are per-consistency-group, a valid consistency-group must be
specified as a target when creating a monitor of this type.
If the target consistency-group is synchronous (either when the monitor is created or if its
cache mode is subsequently changed to synchronous), all statistics in this group will read
"no data".
If the cache-mode is changed (back) to asynchronous, the monitor will behave as
expected and the displayed values will reflect the consistency-group's performance.
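For example, a hedged sketch of a monitor that targets an asynchronous consistency group (the director and consistency-group context paths are illustrative):
VPlexcli:/> monitor create --name cgMon --director /engines/engine-1-1/directors/director-1-1-A --stats cg.write-lat,cg.pipe-util --targets /clusters/cluster-1/consistency-groups/async_cg1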
SNMP
The VPLEX SNMP agent:
Runs on the management server and fetches performance-related data from individual
directors using a firmware-specific interface.
Provides SNMP MIB data for directors for the local cluster only.
Runs on port 161 of the management server and uses the UDP protocol.
VPLEX MIBs are located on the management server in the /opt/emc/VPlex/mibs directory:
service@ManagementServer:/opt/emc/VPlex/mibs> ls
VPlex.mib VPLEX-MIB.mib
vplex MODULE-IDENTITY
    LAST-UPDATED "201008250601Z"    -- Aug 25, 2010 6:01:00 AM
    ORGANIZATION "EMC Corporation"
    CONTACT-INFO
        "EMC Corporation
         176 South Street
         Hopkinton, MA 01748 USA
         Phone: 1-800-424-EMC2
         Web  : www.emc.com
         email: [email protected]"
    DESCRIPTION
        "EMC VPLEX MIB Tree."
    REVISION "201008250601Z"    -- Aug 25, 2010 6:01:00 AM
    DESCRIPTION
        "Initial version."
    -- 1.3.6.1.4.1.1139.21
    ::= { enterprises 1139 21 }
To use SNMP, the SNMP agent must be configured and started on the VPLEX cluster.
See Performance statistics retrievable by SNMP on page 253 for the VPLEX statistics
that can be monitored using SNMP.
Configure SNMP
Use the snmp-agent configure command to configure and start the SNMP agent:
VPlexcli:/> snmp-agent configure
The community string is already configured to be: private.
Choosing to continue will change the existing community string.
Do you want to continue? (yes/no)yes
What community string should the agent use? [private]: public
Use the snmp-agent status command to verify that the SNMP agent is running:
VPlexcli:/> snmp-agent status
SNMP Agent Service status is: Running
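Once the agent is running, the VPLEX enterprise subtree shown in the MIB excerpt above (1.3.6.1.4.1.1139.21) can be polled from any SNMP management station. A minimal sketch using the standard net-snmp tools (the address and community string are illustrative):
snmpwalk -v 2c -c public <management-server-address>:161 1.3.6.1.4.1.1139.21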
The following example configures and starts an SNMP trap in the call-home context:
VPlexcli:/notifications/call-home/snmp-traps/myTrap> set community-string myVplex
VPlexcli:/notifications/call-home/snmp-traps/myTrap> set remote-host 1.1.1.1
VPlexcli:/notifications/call-home/snmp-traps/myTrap> set started true
VPlexcli:/notifications/call-home/snmp-traps/myTrap> ls
Name              Value
----------------  -------
community-string  myVplex
remote-host       1.1.1.1
remote-port       162
started           true
Stop/start SNMP
Use the snmp-agent stop command to stop the SNMP agent without removing it from
VPLEX:
VPlexcli:/> snmp-agent stop
SNMP agent has been stopped.
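To start the agent again without reconfiguring it, the matching start command can be used (shown here as an assumption based on the stop command above):
VPlexcli:/> snmp-agent start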
Unconfigure SNMP
Use the snmp-agent unconfigure command to stop the SNMP agent and unconfigure it:
VPlexcli:/> snmp-agent unconfigure
SNMP agent has been unconfigured.
Statistics
VPLEX collects and reports three types of statistics:
period-average - Average of a series calculated over the last sample period. If
current_reading_sum and current_reading_count are the sum and count of all readings for
the particular statistic since the monitor's creation, taken at the current sample, and
previous_reading_sum and previous_reading_count are the corresponding sum and count
taken at the previous sample, then:
period-average =
(current_reading_sum - previous_reading_sum)/
(current_reading_count - previous_reading_count)
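For example, with hypothetical values: if the cumulative sum of readings grows from 400 to 1000 between two samples and the cumulative count grows from 20 to 50, then period-average = (1000 - 400) / (50 - 20) = 20.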
Many statistics require a target port or volume to be specified. Output of the monitor
stat-list command identifies which statistics need a target defined, and the type of target
required when a monitor is created.
For example, a portion of the monitor stat-list output:

Name              Target         Type     Units
----------------  -------------  -------  --------
fe-prt.ops        frontend-port  counter  counts/s
fe-prt.read       frontend-port  counter  KB/s
fe-prt.read-lat   frontend-port  bucket   us
fe-prt.write      frontend-port  counter  KB/s
fe-prt.write-lat  frontend-port  bucket   us

Statistics such as these, which list a target type, require a target. Statistics that list no
target type (n/a), such as the director statistics shown below, require no target.
Use the --categories categories option to display the statistics available in the specified
category. For example:
VPlexcli:/monitoring> monitor stat-list --categories director
Name                   Target  Type     Units
---------------------  ------  -------  --------
director.be-aborts     n/a     counter  counts/s
director.be-ops        n/a     counter  counts/s
director.be-ops-read   n/a     counter  counts/s
director.be-ops-write  n/a     counter  counts/s
director.be-read       n/a     counter  KB/s
.
.
.
cg.input-ops          consistency-group  counter  counts/s
cg.inter-closure      consistency-group  bucket   us
cg.outOfDate-counter  consistency-group  counter  counts/s
cg.pipe-util          consistency-group  reading  %
cg.write-bytes        consistency-group  counter  KB/s
cg.write-lat          consistency-group  bucket   us
cg.write-pages        consistency-group  counter  counts/s
.
.
.
Statistics tables
The following tables list the statistics in each category:
IP WAN COM (ip-com-port) statistics on page 252 - Monitors IP ports (any port with
GE or XG in the port name).
Fibre Channel WAN COM (fc-com-port) statistics on page 253 - Monitors only those
Fibre Channel ports with role set to wan-com.
Statistic     Type                                                   Description
------------  -----------------------------------------------------  --------------
be-prt.read   type: counter, units: bytes/second, arguments: port#
be-prt.write  type: counter, units: bytes/second, arguments: port#

Statistic     Type                                                   Description
------------  -----------------------------------------------------  --------------
cache.dirty   type: reading, units: bytes, arguments: none           Cache dirty
cache.miss    type: counter, units: counts/second, arguments: none   Cache miss
cache.rhit    type: counter, units: counts/second, arguments: none
cache.subpg   type: counter, units: counts/second, arguments: none   Cache subpage
Statistic                  Type                                                   Description
-------------------------  -----------------------------------------------------  ---------------------------
director.async-write       type: counter, units: bytes/second, arguments: none    Asynchronous write
director.be-aborts         type: counter, units: counts/second, arguments: none   Back-end operations
director.be-ops            type: counter, units: counts/second, arguments: none   Back-end operations
director.be-ops-read       type: counter, units: counts/second, arguments: none   Back-end reads
director.be-ops-write      type: counter, units: counts/second, arguments: none   Back-end writes
director.be-read           type: counter, units: bytes/second, arguments: none    Back-end reads
director.be-write          type: counter, units: bytes/second, arguments: none    Back-end writes
director.busy              type: reading, units: percentage, arguments: none      CPU
director.dr1-rbld-recv     type: counter, units: bytes/second, arguments: none
director.dr1-rbld-sent     type: counter, units: bytes/second, arguments: none
director.fe-ops            type: counter, units: counts/second, arguments: none   Front-end operations
director.fe-ops-act        type: reading, units: counts, arguments: none          Front-end operations active
director.fe-ops-q          type: reading, units: counts, arguments: none          Front-end operations queued
director.fe-ops-read       type: counter, units: counts/second, arguments: none   Front-end reads
director.fe-ops-write      type: counter, units: counts/second, arguments: none   Front-end writes
director.fe-read           type: counter, units: bytes/second, arguments: none    Front-end reads
director.fe-write          type: counter, units: bytes/second, arguments: none    Front-end writes
director.heap-used         type: reading, units: percentage, arguments: none      Memory
director.per-cpu-busy      type: reading, units: percentage, arguments: none      CPU busy
director.tcp-recv          type: counter, units: bytes/second, arguments: none
director.tcp-send          type: counter, units: bytes/second, arguments: none
director.udt-conn-drop     type: counter, units: counts/second, arguments: none   UDT connections dropped
director.udt-pckt-retrans  type: counter, units: counts/second, arguments: none   Packets resent
director.udt-recv-bytes    type: counter, units: bytes/second, arguments: none
director.udt-recv-drops    type: counter, units: counts/second, arguments: none
director.udt-recv-pckts    type: counter, units: counts/second, arguments: none
director.udt-send-bytes    type: counter, units: bytes/second, arguments: none
director.udt-send-drops    type: counter, units: count/second, arguments: none
director.udt-send-pckts    type: counter, units: count/second, arguments: none
Statistic            Description
-------------------  ---------------
directory.ch-remote  Cache coherence
directory.chk-total  Cache coherence
directory.dir-total  Cache coherence
directory.dr-remote  Cache coherence
directory.ops-local  Cache coherence
directory.ops-rem    Cache coherence
Statistic              Type                                                   Description
---------------------  -----------------------------------------------------  -------------------------------------------
fe-director.aborts     type: counter, units: counts/second, arguments: none   Front-end operations
fe-director.caw-lat    type: bucket, units: microsecond, arguments: none      CompareAndWrite operations latency.
                                                                              CompareAndWrite latency in microseconds on
                                                                              the specified director's front-end ports.
                                                                              The latency bucket is reduced to three
                                                                              buckets from 0 to maximum instead of the 64
                                                                              latency buckets collected within the VPLEX
                                                                              firmware.
fe-director.read-lat   type: bucket, units: microsecond, arguments: none
fe-director.write-lat  type: bucket, units: microsecond, arguments: none
Statistic        Type                                                        Description
---------------  ----------------------------------------------------------  ------------------------------------------
fe-lu.caw-lat    type: bucket, units: microsecond, arguments: volume-id      CompareAndWrite operations latency.
                                                                             CompareAndWrite latency in microseconds on
                                                                             the specified front-end volume.
fe-lu.caw-mis    type: counter, units: counts/second, arguments: volume-id   CompareAndWrite miscompares. Number of
                                                                             CompareAndWrite miscompares on the
                                                                             specified front-end volume.
fe-lu.caw-ops    type: counter, units: counts/second, arguments: volume-id   CompareAndWrite operations
fe-lu.ops        type: counter, units: counts/second, arguments: volume-id   Front-end volume operations
fe-lu.read       type: counter, units: bytes/second, arguments: volume-id    Front-end volume reads
fe-lu.read-lat   type: bucket, units: microsecond, arguments: volume-id
fe-lu.write      type: counter, units: bytes/second, arguments: volume-id    Front-end volume writes
fe-lu.write-lat  type: bucket, units: microsecond, arguments: volume-id
Statistic         Type                                                   Description
----------------  -----------------------------------------------------  -----------------------------------------
fe-prt.caw-lat    type: bucket, units: microsecond, arguments: port#     CompareAndWrite operations latency.
                                                                         CompareAndWrite latency in microseconds
                                                                         on the specified front-end port.
fe-prt.caw-mis    type: counter, units: counts/sec, arguments: port#     CompareAndWrite miscompares. Number of
                                                                         CompareAndWrite miscompares on the
                                                                         specified front-end port.
fe-prt.caw-ops    type: counter, units: counts/sec, arguments: port#     CompareAndWrite operations
fe-prt.ops        type: counter, units: counts/sec, arguments: port#     Front-end port operations
fe-prt.read       type: counter, units: bytes/sec, arguments: port#
fe-prt.read-lat   type: bucket, units: microsecond, arguments: port#
fe-prt.write      type: counter, units: bytes/second, arguments: port#
fe-prt.write-lat  type: bucket, units: microsecond, arguments: port#
Statistic    Type                                                   Description
-----------  -----------------------------------------------------  ------------------------------------------
ramf.cur-op  type: reading, units: counts, arguments: none          Current op count. Instantaneous count of
                                                                    remote RAID operations.
ramf.exp-op  type: counter, units: counts/second, arguments: none   Remote operations
ramf.exp-rd  type: counter, units: bytes/second, arguments: none    Remote reads
ramf.exp-wr  type: counter, units: bytes/second, arguments: none    Remote writes
ramf.imp-op  type: counter, units: counts/second, arguments: none   Imported ops
ramf.imp-rd  type: counter, units: bytes/second, arguments: none    Imported reads
ramf.imp-wr  type: counter, units: bytes/second, arguments: none    Imported writes
Statistic     Type                                                   Description
------------  -----------------------------------------------------  ------------
rdma.cur-ops  type: reading, units: counts, arguments: none          RDMA ops
rdma.read     type: counter, units: bytes/second, arguments: none    RDMA reads
rdma.write    type: counter, units: bytes/second, arguments: none    RDMA writes
Statistic                                       Type
----------------------------------------------  --------------------------------------------------------
storage-volume.per-storage-volume-read-latency  type: bucket, units: microsecond, arguments: volume-id
storage-volume.read-latency                     type: bucket, units: microsecond, arguments: none
storage-volume.write-latency                    type: bucket, units: microsecond, arguments: none
Statistic             Type                                                        Description
--------------------  ----------------------------------------------------------  -----------------
virtual-volume.dirty  type: reading, units: counts, arguments: volume-id          Volume dirty
virtual-volume.ops    type: counter, units: counts/second, arguments: volume-id   Volume operations
virtual-volume.read   type: counter, units: bytes/second, arguments: volume-id    Volume reads
virtual-volume.write  type: counter, units: bytes/second, arguments: volume-id    Volume writes
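For example, a hedged sketch of a monitor that collects these virtual-volume statistics for a single volume (the director context path and volume name are illustrative):
VPlexcli:/> monitor create --name vvMon --director /engines/engine-1-1/directors/director-1-1-A --stats virtual-volume.ops,virtual-volume.read,virtual-volume.write --targets DR1_C1-C2_1gb_dev10_vol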
Statistic                 Type                                  Description
------------------------  ------------------------------------  ------------------------------------------------------------
wrt-pacing.avg-delay      type: reading, units: millisecond,    The average delay incurred on the throttled I/Os since the
                          arguments: consistency-group          last iostat report.
wrt-pacing.avg-pdrain     type: reading, units: bytes/second,   Average drain rate of the consistency group pipe since the
                          arguments: consistency-group          last iostat report.
wrt-pacing.avg-pinput     type: reading, units: bytes/second,   Average input rate of the consistency group pipe since the
                          arguments: consistency-group          last iostat report.
wrt-pacing.is-throttling  type: reading, units: counts,         Has any I/O for this consistency group been throttled
                          arguments: consistency-group          (delayed) since the last iostat report?
                                                                1 - True. I/O has been throttled at least once since the
                                                                last iostat report.
                                                                0 - False. I/O has never been throttled.
wrt-pacing.peak-putil     type: reading, units: percent,
                          arguments: consistency-group
Statistic             Type                                  Description
--------------------  ------------------------------------  --------------------------------------------------------------
cg.closure            type: bucket, units: microsecond,
                      arguments: consistency-group
cg.delta-util         type: reading, units: percentage,     The average capacity utilization of closed deltas (how full
                      arguments: consistency-group          the deltas were when a closure was requested) over the last
                                                            sampling period.
cg.drain-lat          type: bucket, units: microsecond,
                      arguments: consistency-group
cg.exch-bytes         type: counter, units: bytes/second,   The number of bytes received from other cluster(s) at this
                      arguments: consistency-group          director over the last sampling period.
cg.exch-lat           type: bucket, units: microsecond,
                      arguments: consistency-group
cg.exch-pages         type: counter, units: counts/second,  The number of pages received from other cluster(s) at this
                      arguments: consistency-group          director over the last sampling period.
cg.input-bytes        type: counter, units: bytes/second,
                      arguments: consistency-group
cg.input-ops          type: counter, units: counts/second,
                      arguments: consistency-group
cg.inter-closure      type: bucket, units: microsecond,
                      arguments: consistency-group
cg.outOfDate-counter  type: counter, units: counts/second,  The number of writes over the last sampling period for which
                      arguments: consistency-group          an underlying DR1 leg was out of date.
cg.pipe-util          type: reading, units: percentage,
                      arguments: consistency-group
cg.write-bytes        type: counter, units: bytes/second,   The number of bytes written to the back end at this director
                      arguments: consistency-group          over the last sampling period.
cg.write-lat          type: bucket, units: microsecond,
                      arguments: consistency-group
cg.write-pages        type: counter, units: counts/second,  The number of pages written to the back end at this director
                      arguments: consistency-group          over the last sampling period.
Statistic                Type                                  Description
-----------------------  ------------------------------------  -----------------------------------------------------------
rp-spl-vol.write-active  type: reading, units: counts,
                         arguments: vol-id
rp-spl-vol.write-ops     type: counter, units: counts/second,  Number of writes that have been processed by the splitter
                         arguments: vol-id                     for a specific volume.
rp-spl-vol.write         type: counter, units: bytes/second,   The quantity of write data that has been split for a
                         arguments: vol-id                     specific volume.
rp-spl-vol.write-lat     type: bucket, units: microsecond,     Latency from when a write is sent to the splitter to when
                         arguments: vol-id                     it is sent down the normal BE I/O stack. Measures SCSI
                                                               round-trip time to get the write data into the memory of
                                                               the RPA.
Statistic                 Type                                  Description
------------------------  ------------------------------------  ----------------------------------------------------------
rp-spl-node.write-active  type: reading, units: counts,
                          arguments: none
rp-spl-node.write-ops     type: counter, units: counts/second,
                          arguments: none
rp-spl-node.write         type: counter, units: bytes/second,
                          arguments: none
rp-spl-node.write-lat     type: bucket, units: microsecond,     Latency from when a write is sent to the splitter to when
                          arguments: none                       it is sent down the normal BE I/O stack. Measures SCSI
                                                                round-trip time to get the write data into the memory of
                                                                the RPA.
Statistic                 Type                                                        Description
------------------------  ----------------------------------------------------------  -----------
ip-com-port.avg-latency   type: reading, units: microsecond, arguments: port-name
ip-com-port.max-latency   type: reading, units: microsecond, arguments: port-name
ip-com-port.min-latency   type: reading, units: microsecond, arguments: port-name
ip-com-port.pckt-retrans  type: counter, units: counts/second, arguments: port-name
ip-com-port.recv-bytes    type: counter, units: bytes/second, arguments: port-name
ip-com-port.recv-drops    type: counter, units: counts/second, arguments: port-name
ip-com-port.recv-pckts    type: counter, units: counts/second, arguments: port-name
ip-com-port.send-bytes    type: counter, units: bytes/second, arguments: port-name
ip-com-port.send-drops    type: counter, units: counts/second, arguments: port-name
ip-com-port.send-pckts    type: counter, units: counts/second, arguments: port-name
Statistic               Type                                                                Description
----------------------  ------------------------------------------------------------------  ------------------------
fc-com-port.recv-bytes  type: counter, units: bytes/second, arguments: fibrechannel-port    FC-COM bytes received
fc-com-port.recv-pckts  type: counter, units: counts/second, arguments: fibrechannel-port   FC-COM packets received
fc-com-port.send-bytes  type: counter, units: bytes/second, arguments: fibrechannel-port
fc-com-port.send-pckts  type: counter, units: counts/second, arguments: fibrechannel-port   FC-COM packets sent
Statistic                Type                                                         Description
-----------------------  -----------------------------------------------------------  -----------
com-cluster-io.avg-lat   type: reading, units: microseconds, arguments: cluster-id
com-cluster-io.max-lat   type: reading, units: microseconds, arguments: cluster-id
com-cluster-io.min-lat   type: reading, units: microseconds, arguments: cluster-id
com-cluster-io.send-ops  type: reading, units: none, arguments: cluster-id
Statistic                          Table
---------------------------------  -------------------------------
vplexDirectorCpuIdle               vplexDirectorProcTable
vplexDirectorHeapUsed              vplexDirectorMemTable
vplexDirectorFEOpsRead             vplexDirectorFETable
vplexDirectorFEOpsWrite            vplexDirectorFETable
vplexDirectorFEOpsQueued           vplexDirectorFETable
vplexDirectorFEOpsActive           vplexDirectorFETable
vplexDirectorFEOpsAvgReadLatency   vplexDirectorFETable
vplexDirectorFEOpsAvgWriteLatency  vplexDirectorFETable
vplexDirectorFEBytesRead           vplexDirectorFETable
vplexDirectorFEBytesWrite          vplexDirectorFETable
vplexDirectorBEOpsRead             vplexDirectorBETable
vplexDirectorBEOpsWrite            vplexDirectorBETable
vplexDirectorBEOpsAvgReadLatency   vplexDirectorBETable
vplexDirectorBEOpsAvgWriteLatency  vplexDirectorBETable
vplexDirectorBEBytesRead           vplexDirectorBETable
vplexDirectorBEBytesWrite          vplexDirectorBETable
Statistic                      Table
-----------------------------  -------------------------
vplexDirectorFEPortBytesRead   vplexDirectorFEPortTable
vplexDirectorFEPortBytesWrite  vplexDirectorFEPortTable
vplexDirectorBEPortBytesRead   vplexDirectorBEPortTable
vplexDirectorBEPortBytesWrite  vplexDirectorBEPortTable
Statistic                                  Table
-----------------------------------------  -------------------------------
vplexDirectorVirtualVolumeName             vplexDirectorVirtualVolumeTable
vplexDirectorVirtualVolumeUuid             vplexDirectorVirtualVolumeTable
vplexDirectorVirtualVolumeOps              vplexDirectorVirtualVolumeTable
vplexDirectorVirtualVolumeRead             vplexDirectorVirtualVolumeTable
vplexDirectorVirtualVolumeWrite            vplexDirectorVirtualVolumeTable
vplexDirectorVirtualVolumeReadAvgLatency   vplexDirectorVirtualVolumeTable
vplexDirectorVirtualVolumeWriteAvgLatency  vplexDirectorVirtualVolumeTable
GLOSSARY
This glossary contains terms related to VPLEX federated storage systems. Many of these
terms are used in this manual.
A
AccessAnywhere
active/active
Active Directory
active mirror
active/passive
array
asynchronous
B
bandwidth
backend port
bias
bit
block
The smallest amount of data that can be transferred following SCSI standards, which is
traditionally 512 bytes. Virtual volumes are presented to users as a contiguous list of
blocks.
block size
The actual size of a block on a device.
byte
Memory space used to store eight bits of data.
C
cache
Temporary storage for recent writes and recently accessed data. Disk data is read
through the cache so that subsequent read references are found in the cache.
cache coherency
Managing the cache so data is not lost, corrupted, or overwritten. With multiple
processors, data blocks may have several copies, one in the main memory and one in
each of the cache memories. Cache coherency propagates the blocks of multiple users
throughout the system in a timely fashion, ensuring the data blocks do not have
inconsistent versions in the different processors' caches.
cluster
Two or more VPLEX directors forming a single fault-tolerant cluster, deployed as one
to four engines.
cluster ID
cluster deployment ID
cluster IP seed
clustering
Using two or more computers to function together as a single entity. Benefits include
fault tolerance and load balancing, which increases reliability and up time.
COM
The intra-cluster communication (Fibre Channel). The communication used for cache
coherency and replication traffic.
consistency group
A VPLEX structure that groups together virtual volumes and applies the same detach
and failover rules to all member volumes. Consistency groups ensure the common
application of a set of properties to the entire group. Create consistency groups for
sets of volumes that require the same I/O behavior in the event of a link failure. There
are two types of consistency groups:
258
Glossary
continuity of operations
(COOP)
controller
D
data sharing
The ability to share access to the same data with multiple servers regardless of time
and location.
detach rule
Predefined rules that determine which cluster continues I/O when connectivity
between clusters is lost. A cluster loses connectivity to its peer cluster due to cluster
partition or cluster failure.
Detach rules are applied at two levels: to individual volumes, and to consistency
groups. If a volume is a member of a consistency group, the group detach rule
overrides the rule set for the individual volumes. Note that all detach rules may be
overridden by VPLEX Witness, if VPLEX Witness is deployed.
device
A combination of one or more extents to which you add specific RAID properties. Local
devices use storage from only one cluster. In VPLEX Metro and Geo configurations,
distributed devices use storage from both clusters.
director
A CPU module that runs GeoSynchrony, the core VPLEX software. There are two
directors (A and B) in each engine, and each has dedicated resources and is capable of
functioning independently.
dirty data
The write-specific data stored in the cache memory that has yet to be written to disk.
disaster recovery (DR)
The ability to restart system operations after an error, preventing data loss.
discovered array
An array that is connected to the SAN and discovered by VPLEX.
disk cache
A section of RAM that provides cache between the disk and the CPU. RAM's access
time is significantly faster than disk access time; therefore, a disk-caching program
enables the computer to operate faster by placing recently accessed data in the disk
cache.
distributed device
A RAID 1 device whose mirrors are in different VPLEX clusters.
distributed file system
Supports the sharing of files and resources in the form of persistent storage over a
network.
E
engine
Consists of two directors, management modules, and redundant power. Unit of scale
for VPLEX configurations. Single = 1 engine, dual = 2 engines, Quad = 4 engines per
cluster.
Ethernet
A Local Area Network (LAN) protocol. Ethernet uses a bus topology, meaning all
devices are connected to a central cable, and supports data transfer rates of between
10 megabits per second and 10 gigabits per second. For example, 100 Base-T supports
data transfer rates of 100 Mb/s.
event
A log message that results from a significant action initiated by a user or the system.
extent
All or a portion (range of blocks) of a storage volume.
F
failover
fault domain
A set of components that share a single point of failure. For VPLEX, the concept that
every component of a Highly Available system is separated, so that if a fault occurs in
one domain, it will not result in failure in other domains to which it is connected.
fault tolerance
firmware
Software that is loaded on and runs from the flash ROM on the VPLEX directors.
front end port
VPLEX director port connected to host initiators (acts as a target).
G
geographically distributed system
gigabit Ethernet
The version of Ethernet that supports data transfer rates of 1 Gigabit per second.
gigabyte (GB)
1,073,741,824 (2^30) bytes. Often rounded to 10^9.
H
hold provisioning
An attribute of a registered array that allows you to set the array as unavailable for
further provisioning of new storage.
host bus adapter (HBA)
An I/O adapter that manages the transfer of information between the host computer's
bus and memory system. The adapter performs many low-level interface functions
automatically or with minimal processor involvement to minimize the impact on the
host processor's performance.
I
input/output (I/O)
internet Fibre Channel
protocol (iFCP)
intranet
K
kilobit (Kb)
kilobyte (K or KB)
L
latency
LDAP
load balancing
local device
A combination of one or more extents to which you add specific RAID properties.
Local devices use storage from only one cluster.
logical unit number (LUN)
Virtual storage to which a given server with a physical connection to the underlying
storage device may be granted or denied access. LUNs are used to identify SCSI
devices, such as external hard drives, connected to a computer. Each device is
assigned a LUN number which serves as the device's unique address.
M
megabit (Mb)
megabyte (MB)
metadata
metavolume
Metro-Plex
mirroring
mirroring services
miss
N
namespace
A set of names recognized by a file system in which all names are unique.
network
network architecture
network-attached
storage (NAS)
network partition
O
Open LDAP
P
parity checking
Checking for errors in binary data. Depending on whether the byte has an even or
odd number of 1 bits, an extra 0 or 1 bit, called a parity bit, is added to each byte in a
transmission. The sender and receiver agree on odd parity, even parity, or no parity. If
they agree on even parity, a parity bit is added that makes each byte even. If they
agree on odd parity, a parity bit is added that makes each byte odd. If the data is
transmitted incorrectly, the change in parity will reveal the error.
partition
A subdivision of a physical or virtual disk, which is a logical entity only visible to the
end user, not any of the devices.
R
RAID
The use of two or more storage volumes to provide better performance, error
recovery, and fault tolerance.
RAID 0
RAID 1
Also called mirroring, this has been used longer than any other form of RAID. It
remains popular because of simplicity and a high level of data availability. A mirrored
array consists of two or more disks. Each disk in a mirrored array holds an identical
image of the user data. RAID 1 has no striping. Read performance is improved since
either disk can be read at the same time. Write performance is lower than single disk
storage. Writes must be performed on all disks, or mirrors, in the RAID 1. RAID 1
provides very good data reliability for read-intensive applications.
RAID leg
rebuild
RecoverPoint Appliance
(RPA)
RecoverPoint cluster
RecoverPoint site
redundancy
registered array
An array that is registered with VPLEX. Registration is required to make the array
available for services-based provisioning. Registration includes connecting to and
creating awareness of the array's intelligent features. Only VMAX and VNX arrays
can be registered.
reliability
remote direct memory access (RDMA)
Allows computers within a network to exchange data using their main memories and
without using the processor, cache, or operating system of either computer.
Replication set
When RecoverPoint is deployed, a production source volume and one or more replica
volume(s) to which it replicates.
restore source
This operation restores the source consistency group from data on the copy target.
RPO
Recovery Point Objective. The time interval between the point of failure of a storage
system and the expected point in the past, to which the storage system is capable of
recovering customer data. Informally, RPO is a maximum amount of data loss that can
be tolerated by the application after a failure. The value of the RPO is highly
dependent upon the recovery technique used. For example, RPO for backups is
typically days; for asynchronous replication minutes; and for mirroring or
synchronous replication seconds or instantaneous.
RTO
Recovery Time Objective. Not to be confused with RPO, RTO is the time duration
within which a storage solution is expected to recover from failure and begin
servicing application requests. Informally, RTO is the longest tolerable application
outage due to a failure of a storage system. RTO is a function of the storage
technology. It may be measured in hours for backup systems, minutes for remote
replication, and seconds (or less) for mirroring.
S
scalability
services-based
provisioning
simple network
management protocol
(SNMP)
site ID
SLES
SUSE Linux Enterprise Server is a Linux distribution supplied by SUSE and targeted
at the business market.
small computer system interface (SCSI)
A set of evolving ANSI standard electronic interfaces that allow personal computers
to communicate faster and more flexibly than previous interfaces with peripheral
hardware such as disk drives, tape drives, CD-ROM drives, printers, and scanners.
splitter
storage area network
(SAN)
storage view
storage volume
stripe depth
The number of blocks of data stored contiguously on each storage volume in a RAID 0
device.
striping
A technique for spreading data over multiple disk drives. Disk striping can speed up
operations that retrieve data from disk storage. Data is divided into units and
distributed across the available disks. RAID 0 provides disk striping.
synchronous
Describes objects or events that are coordinated in time. A process is initiated and
must be completed before another task is allowed to begin.
For example, in banking two withdrawals from a checking account that are started at
the same time must not overlap; therefore, they are processed synchronously. See also
asynchronous.
T
throughput
A measure of the amount of work performed by a system over a period of time. For
example, the number of I/Os per day.
tool command language
(TCL)
transfer size
A scripting language often used for rapid prototypes and scripted applications.
The size of the region in cache used to service data migration. The area is globally
locked, read at the source, and written at the target. Transfer-size can be as small as 40
K, as large as 128 M, and must be a multiple of 4 K. The default value is 128 K.
A larger transfer-size results in higher performance for the migration, but may
negatively impact front-end I/O. This is especially true for VPLEX Metro migrations.
Set a large transfer-size for migrations when the priority is data protection or
migration performance.
A smaller transfer-size results in lower performance for the migration, but creates less
impact on front-end I/O and response times for hosts. Set a smaller transfer-size for
migrations when the priority is front-end storage response time.
transmission control
protocol/Internet
protocol (TCP/IP)
The basic communication language or protocol used for traffic on a private network
and the Internet.
U
uninterruptible power
supply (UPS)
A power supply that includes a battery to maintain power in the event of a power
failure.
universal unique
identifier (UUID)
A 64-bit number used to uniquely identify each VPLEX director. This number is based
on the hardware serial number assigned to each director.
V
virtualization
virtual volume
Unit of storage presented by the VPLEX front end ports to hosts. A virtual volume
looks like a contiguous volume, but can be distributed over two or more storage
volumes.
W
wide area network (WAN)
worldwide name (WWN)
A specific Fibre Channel Name Identifier that is unique worldwide and represented
by a 64-bit unsigned binary value.
write-back mode
write-through mode
INDEX
A
About
meta-volumes 15
transfer-size 107
adding
to a rule set 48
addvirtualvolume, export view 62
attaching
rule sets, 50
rule sets, local RAIDs 51
auto mirror isolation 69
automatic rebuilds 66
B
batch migrations 104
cancel 108
check migration plan 106
clean 110
create migration plan 105
modify batch migration plan 106
monitor 108
pause/resume 108
prerequisites 105
remove 111
start 107
status 109
transfer-size 107
battery conditioning 27
typical monthly calendar 27
when to stop a cycle 28
C
cache vaulting
about 197
recovery after vault 202
successful recovery 204
unsuccessful recovery 204
vaulting process 201
call-home notifications
about 30
customized call-home events 30
event severity 30
CAW
CompareAndWrite 34
display storage view setting 35
display system default setting 35
enable/disable as default 36
enable/disable for storage view 36
statistics 36
CLI
setting logging threshold 10
CLI workspace
console logging 10
D
data
migration, batching 104
migration, multiple RAIDs 104
Data loss failure mode 179
data migrations
about 97
batch migrations 97
general steps 98
one time migrations 97
prerequisites 98
disallowing RAID rebuilds 66
Display
consistency-groups
advanced properties 158
names 156
operational status 158
monitors 231
distributed devices 54
add local mirror 63
create from exported volume 65
create virtual volume 60
enable/disable auto-resume 67
enable/disable device rebuilds 66
how to create 54
remove local mirror 64
resume I/O conflicting detach 69
resume I/O during outage 68
E
export
view addvirtualvolume 62
F
file rotation 225
L
link failure recovery
rule sets, adding 48
logging threshold, setting 10
logging volumes
add a mirror 44
create 43
delete 44
size 43
M
meta-volumes
about 15
backup
VPLEX Local 18
VPLEX Metro or Geo 19
change name 22
create 16
delete 22
display 23
display fields 24
move 21
performance/availability requirements 16
migrating data
multiple RAIDs 104
Mirror isolation 72
Mirror un-isolation 73
monitor sink
delete 230
monitor sinks 229
O
one-time migration
cancel 103
clean 104
commit 103
monitor 101
pause/resume 102
remove 104
start 100
option-sets context 117
P
performance monitor
create 225
Performance monitoring
Examples
10 seconds, directors 228
all statistics for all volumes 228
default period, no targets 228
fe statistics for a specified fe port 228
Local COM latency 228
port-level WAN statistics 228
Remote cluster latency 228
Send CAW statistics to the management server 229
udt port statistics for a specified director 228
Performance monitoring
about 221
add console sink 229
add file sink 230
add sinks 229
configure SNMP 239
consistency groups 237
create a monitor using the CLI 226
create monitor 227
custom monitors 222
delete monitor sink 230
display monitors 231
display statistics 241
file rotation 225
force an immediate poll 234
manage sinks 233
perpetual monitors 222
polling 233
preconfigured monitors 222
procedure 226
SNMP 237
statistics 240
using the VPLEX CLI 225
VPLEX GUI 223
Perpetual monitors 222
port groups 113
port-group
change MTU size 122
port-groups context 116
R
RAIDs
attaching rule sets 51
automatic rebuilds, allowing/disallowing 66
S
setting
logging threshold for the CLI 10
SNMP 253
start snmp agent 253
statistics 241, 253
back-end fibre channel port 232, 243
cache 243
consistency group (wof-throttle) 249, 250, 251
director 243
directory 245
Fibre Channel WAN COM (fc-com-port) 253
front-end director 246
front-end LU 246
front-end port 247
IP WAN COM (ip-com-port) 252
remote data memory access 248
T
Thin provisioning 99
U
User accounts
add 11
change password 12
delete 12
reset 13
V
verification
rule set configurations 52
view
addvirtualvolume 62
virtual volume
create 60
export to hosts 61
export to remote host 62
Volume Expansion
Determining volume expansion-method 87
Using CLI 88
Using GUI 88
Expanding virtual volumes
Concatenation expansion method 95
Storage-volume expansion method 89
Limitations 87
Overview 87
Volume expansion
Expanding virtual volumes 88
volume-set add-virtual-volumes 145
VPLEX Witness
deployment 171
display status
cluster isolation 193
cluster link restored 195
cluster-1 failure 191
cluster-2 failure 192
dual failure with DU 192
inter-cluster link failure 190
enable/disable 182
failures 175
Metro Systems without Witness 173
VPLEX Geo
with VPLEX Witness 180
W
WAN ports 113
change a port group configuration 119
change a port's IP address 122
change port-groups MTU 122
CLI contexts 114
Geo configuration rules 113
option-sets context 117
port-groups context 116
subnets context 115
WriteSame
Display setting 37
display storage view setting 37
Enable/disable as system default 38
Enable/disable for a storage view 38
Enabling/disabling 37
statistics 39
WriteSame (16)
display default setting 38