Dell EMC PowerMax Family Product Guide
PowerMaxOS
May 2021
Rev. 10
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid
the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2018 - 2021 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
Other trademarks may be trademarks of their respective owners.
Contents
Figures..........................................................................................................................................8
Tables........................................................................................................................................... 9
Preface....................................................................................................................................................................................... 10
PowerMaxOS support for open systems.....................................................................................................................44
PowerPath.......................................................................................................................................................................... 44
Operational overview..................................................................................................................................................44
Host registration..........................................................................................................................................................45
Device status................................................................................................................................................................45
Automatic creation of Initiator Groups.................................................................................................................. 45
Management.................................................................................................................................................................45
More information......................................................................................................................................................... 46
Backup and restore using PowerProtect Storage Direct and Data Domain....................................................... 46
Backup........................................................................................................................................................................... 46
Restore...........................................................................................................................................................................47
Storage Direct agents................................................................................................................................................ 47
Features used for Storage Direct backup and restore.......................................................................................48
Storage Direct and traditional backup....................................................................................................................48
More information......................................................................................................................................................... 48
VMware Virtual Volumes................................................................................................................................................. 48
vVol components.........................................................................................................................................................49
vVol scalability..............................................................................................................................................................49
vVol workflow...............................................................................................................................................................49
Chapter 5: Provisioning...............................................................................................................56
Thin provisioning............................................................................................................................................................... 56
Pre-configuration for thin provisioning.................................................................................................................. 57
Thin devices (TDEVs).................................................................................................................................................57
Thin device oversubscription....................................................................................................................................58
Internal memory usage...............................................................................................................................................58
Open Systems-specific provisioning.......................................................................................................................58
Multi-array provisioning................................................................................................................................................... 60
Chapter 7: Automated data placement.........................................................................................66
Environment....................................................................................................................................................................... 66
Operation.............................................................................................................................................................................66
Service level biasing......................................................................................................................................................... 66
Compression and deduplication..................................................................................................................................... 66
Availability........................................................................................................................................................................... 66
SRDF/AR 3-site configurations............................................................................................................................... 95
TimeFinder and SRDF/A................................................................................................................................................. 95
TimeFinder and SRDF/S..................................................................................................................................................96
eLicensing.......................................................................................................................................................................... 126
Capacity measurements........................................................................................................................................... 127
Open systems licenses................................................................................................................................................... 128
License packages....................................................................................................................................................... 128
Individual licenses.......................................................................................................................................................130
Ecosystem licenses................................................................................................................................................... 130
PowerMax Mainframe software packaging options................................................................................................130
Index..........................................................................................................................................132
Figures
1 D@RE architecture..................................................................................................................................................28
2 EEEE components................................................................................................................................................... 30
3 Inline compression and over-subscription..........................................................................................................33
4 Data flow during a backup operation to Data Domain.................................................................................... 47
5 Two-site Global Mirror........................................................................................................................................... 52
6 TCT environment with PowerMax and DLm.....................................................................................................54
7 Auto-provisioning groups.......................................................................................................................................59
8 SnapVX targetless snapshots...............................................................................................................................69
9 SnapVX cascaded snapshots................................................................................................................................ 70
10 zDP operation............................................................................................................................................................ 71
11 R1 and R2 devices....................................................................................................................................................78
12 R11 device in concurrent SRDF.............................................................................................................................79
13 R21 device in cascaded SRDF.............................................................................................................................. 80
14 R22 devices in cascaded and concurrent SRDF/Star.................................................................................... 80
15 Migrating data and removing a secondary (R2) array.................................................................................... 84
16 SRDF/Metro............................................................................................................................................................. 85
17 SRDF/Metro Smart DR.......................................................................................................................................... 87
18 Disaster recovery for SRDF/Metro.....................................................................................................................88
19 Cloud Mobility for PowerMax - high level architecture.................................................................................. 91
20 SRDF/AR 2-site solution........................................................................................................................................94
21 SRDF/AR 3-site solution....................................................................................................................................... 95
22 Configuration of a VMAX3, VMAX All Flash or PowerMax migration.........................................................99
23 Configuration of a VMAX migration................................................................................................................... 101
24 Open Replicator hot (or live) pull.......................................................................................................................106
25 Open Replicator cold (or point-in-time) pull................................................................................................... 106
26 z/OS volume migration.........................................................................................................................................108
27 z/OS Migrator dataset migration.......................................................................................................................108
28 Expand Volume dialog in Unisphere....................................................................................................................113
29 z/OS IEA480E acute alert error message format (call home failure)....................................................... 124
30 z/OS IEA480E service alert error message format (Disk Adapter failure)..............................................125
31 z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented against unrelated resource)....................125
32 z/OS IEA480E service alert error message format (mirror-2 resynchronization).................................125
33 z/OS IEA480E service alert error message format (mirror-1 resynchronization)..................................125
34 eLicensing process.................................................................................................................................................126
Tables
Preface
As part of an effort to improve its product lines, Dell EMC periodically releases revisions of its software and hardware. Functions
that are described in this document may not be supported by all versions of the software or hardware. The product release
notes provide the most up-to-date information about product features.
Contact your Dell EMC representative if a product does not function properly or does not function as described in this
document.
NOTE: This document was accurate at publication time. New versions of this document might be released on Dell EMC
Online Support (https://www.dell.com/support/home). Check to ensure that you are using the latest version of this
document.
Purpose
This document introduces the features of the Dell EMC PowerMax arrays running PowerMaxOS 5978. The descriptions of the
software capabilities also apply to VMAX All Flash arrays running PowerMaxOS 5978, except where noted.
Audience
This document is intended for use by customers and Dell EMC representatives.
Related documentation
The following documentation portfolios contain documents related to the hardware platform and manuals needed to manage
your software and storage system configuration. Also listed are documents for external components that interact with the
PowerMax array.
Hardware platform documents:
Dell EMC PowerMax Family Site Planning Guide: Provides planning information regarding the purchase and installation of a PowerMax 2000, 8000 with PowerMaxOS.
Dell EMC Best Practices Guide for AC Power Connections for PowerMax 2000, 8000 with PowerMaxOS: Describes the best practices to assure fault-tolerant power to a PowerMax 2000 or PowerMax 8000 array.
Dell EMC PowerMax Family Security Configuration Guide: Shows how to securely deploy PowerMax arrays running PowerMaxOS.
Unisphere documents:
Dell EMC Unisphere for PowerMax Release Notes: Describes new features and any known limitations for Unisphere for PowerMax.
Dell EMC Unisphere for PowerMax Installation Guide: Provides installation instructions for Unisphere for PowerMax.
Dell EMC Unisphere for PowerMax Online Help: Describes the Unisphere for PowerMax concepts and functions.
Dell EMC Unisphere for PowerMax REST API Concepts and Programmer's Guide: Describes the Unisphere for PowerMax REST API concepts and functions.
Dell EMC Unisphere 360 Release Notes: Describes new features and any known limitations for Unisphere 360.
Dell EMC Unisphere 360 Installation Guide: Provides installation instructions for Unisphere 360.
Dell EMC Unisphere 360 Online Help: Describes the Unisphere 360 concepts and functions.
Solutions Enabler documents:
Dell EMC Solutions Enabler SRDF Family State Tables Guide: Describes the applicable pair states for various SRDF operations.
SRDF Interfamily Connectivity Information: Defines the versions of PowerMaxOS, HYPERMAX OS, and Enginuity that can make up valid SRDF replication and SRDF/Metro configurations, and can participate in Non-Disruptive Migration (NDM).
Dell EMC SRDF Introduction: Provides an overview of SRDF, its uses, configurations, and terminology.
Dell EMC Solutions Enabler TimeFinder SnapVX CLI User Guide: Describes how to configure and manage TimeFinder SnapVX environments using SYMCLI commands.
Dell EMC Solutions Enabler TimeFinder Family (Mirror, Clone, Snap, VP Snap) Version 8.2 and higher CLI User Guide: Describes how to configure and manage TimeFinder Mirror, Clone, Snap, and VP Snap environments for Enginuity and HYPERMAX OS using SYMCLI commands.
Dell EMC Solutions Enabler SRM CLI User Guide: Provides Storage Resource Management (SRM) information that is related to various data objects and data handling facilities.
Dell EMC SRDF/Metro vWitness Configuration Guide: Describes how to install, configure, and manage SRDF/Metro using vWitness.
Dell EMC Events and Alerts for PowerMax and VMAX User Guide: Documents the SYMAPI daemon messages, asynchronous errors and message events, SYMCLI return codes, and how to configure event logging.
PowerPath documents:
PowerPath/VE for VMware vSphere Release Notes: Describes any new or modified features and any known limitations.
PowerPath/VE for VMware vSphere Installation and Administration Guide: Shows how to install, configure, and manage PowerPath/VE.
PowerPath Family CLI and System Messages Reference: Documents the PowerPath CLI commands and system messages.
PowerPath Family Product Guide: Provides a description of the products in the PowerPath family.
PowerPath Management Appliance Installation and Configuration Guide: Shows how to install and configure the PowerPath Management Appliance.
PowerPath Management Appliance Release Notes: Describes new features and any known limitations.
PowerPath Migration Enabler User Guide: Shows how to carry out data migration using the PowerPath Migration Enabler.
eNAS documents:
Dell EMC PowerMax eNAS Release Notes: Describes the new features and identifies any known functionality restrictions and performance issues that may exist in the current version.
Dell EMC PowerMax eNAS Quick Start Guide: Describes how to configure eNAS on a PowerMax storage system.
Dell EMC PowerMax eNAS File Auto Recovery with SRDF/S: Shows how to install and use File Auto Recovery with SRDF/S.
Dell EMC PowerMax eNAS CLI Reference Guide: A reference for command-line users and script programmers that provides the syntax, error codes, and parameters of all eNAS commands.
Storage Direct documents:
Dell EMC PowerProtect Storage Direct Solutions Guide: Provides Storage Direct information that is related to various data objects and data handling facilities.
Dell EMC File System Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct File System Agent.
Dell EMC Database Application Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct Database Application Agent.
Dell EMC Microsoft Application Agent Installation and Administration Guide: Shows how to install, configure, and manage the Storage Direct Microsoft Application Agent.
NOTE: ProtectPoint has been renamed to Storage Direct and it is included in PowerProtect, Data Protection Suite for
Apps, or Data Protection Suite Enterprise Software Edition.
Mainframe Enablers documents:
Dell EMC Mainframe Enablers Installation and Customization Guide: Describes how to install and configure Mainframe Enablers software.
Dell EMC Mainframe Enablers Release Notes: Describes new features and any known limitations.
Dell EMC Mainframe Enablers Message Guide: Describes the status, warning, and error messages generated by Mainframe Enablers software.
Dell EMC Mainframe Enablers ResourcePak Base for z/OS Product Guide: Describes how to configure VMAX system control and management using the EMC Symmetrix Control Facility (EMCSCF).
Dell EMC Mainframe Enablers AutoSwap for z/OS Product Guide: Describes how to use AutoSwap to perform automatic workload swaps between VMAX systems when the software detects a planned or unplanned outage.
Dell EMC Mainframe Enablers Consistency Groups for z/OS Product Guide: Describes how to use Consistency Groups for z/OS (ConGroup) to ensure the consistency of data remotely copied by SRDF in the event of a rolling disaster.
Dell EMC Mainframe Enablers SRDF Host Component for z/OS Product Guide: Describes how to use SRDF Host Component to control and monitor remote data replication processes.
Dell EMC Mainframe Enablers TimeFinder SnapVX and zDP Product Guide: Describes how to use TimeFinder SnapVX and zDP to create and manage space-efficient targetless snaps.
Dell EMC Mainframe Enablers TimeFinder/Clone Mainframe Snap Facility Product Guide: Describes how to use TimeFinder/Clone, TimeFinder/Snap, and TimeFinder/CG to control and monitor local data replication processes.
Dell EMC Mainframe Enablers TimeFinder/Mirror for z/OS Product Guide: Describes how to use TimeFinder/Mirror to create Business Continuance Volumes (BCVs) which can then be established, split, reestablished, and restored from the source logical volumes for backup, restore, decision support, or application testing.
Dell EMC Mainframe Enablers TimeFinder Utility for z/OS Product Guide: Describes how to use the TimeFinder Utility to condition volumes and devices.
Dell EMC GDDR for SRDF/S with ConGroup Product Guide: Describes how to use Geographically Dispersed Disaster Restart (GDDR) to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/S with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/Star Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/Star with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/SQAR with AutoSwap Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR for SRDF/A Product Guide: Describes how to use GDDR to automate business recovery following both planned outages and disaster situations.
Dell EMC GDDR Message Guide: Describes the status, warning, and error messages generated by GDDR.
Dell EMC GDDR Release Notes: Describes new features and any known limitations.
Dell EMC GDDR for Star-A Product Guide: Describes the basic concepts of Dell EMC Geographically Dispersed Disaster Restart (GDDR), how to install it, and how to implement its major features and facilities.
Dell EMC z/OS Migrator Product Guide: Describes how to use z/OS Migrator to perform volume mirror and migrator functions as well as logical migration functions.
Dell EMC z/OS Migrator Message Guide: Describes the status, warning, and error messages generated by z/OS Migrator.
Dell EMC z/OS Migrator Release Notes: Describes new features and any known limitations.
z/TPF documents:
Dell EMC ResourcePak for z/TPF Product Guide: Describes how to configure VMAX system control and management in the z/TPF operating environment.
Dell EMC SRDF Controls for z/TPF Product Guide: Describes how to perform remote replication operations in the z/TPF operating environment.
Dell EMC TimeFinder Controls for z/TPF Product Guide: Describes how to perform local replication operations in the z/TPF operating environment.
Dell EMC z/TPF Suite Release Notes: Describes new features and any known limitations.
Typographical conventions
Dell EMC uses the following type style conventions in this document:
Where to get help
Product information: Dell EMC technical support, documentation, release notes, software updates, or information about Dell EMC products can be obtained at https://www.dell.com/support/home (registration required) or https://www.dellemc.com/en-us/documentation/vmax-all-flash-family.htm.
Technical support: To open a service request through the Dell EMC Online Support (https://www.dell.com/support/home) site, you must have a valid support agreement. Contact your Dell EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.
Additional support options:
● Support by Product: Dell EMC offers consolidated, product-specific information on the Web at https://support.EMC.com/products. The Support by Product web pages offer quick links to Documentation, White Papers, Advisories (such as frequently used Knowledgebase articles), and Downloads, as well as more dynamic content, such as presentations, discussions, relevant Customer Support Forum entries, and a link to Dell EMC Live Chat.
● Dell EMC Live Chat: Open a Chat or instant message session with a Dell EMC Support Engineer.
e-Licensing support: To activate your entitlements and obtain your license files, go to the Service Center on Dell EMC Online Support (https://www.dell.com/support/home). Follow the directions on your License Authorization Code (LAC) letter that is emailed to you.
● Expected functionality may be unavailable because it is not licensed. For help with missing or incorrect entitlements after activation, contact your Dell EMC Account Representative or Authorized Reseller.
● For help with any errors applying license files through Solutions Enabler, contact the Dell EMC Customer Support Center.
● Contact the Dell EMC worldwide Licensing team if you are missing a LAC letter or require further instructions on activating your licenses through the Online Support site:
○ [email protected]
○ North America, Latin America, APJK, Australia, New Zealand: SVC4EMC (800-782-4362) and follow the voice prompts.
○ EMEA: +353 (0) 21 4879862 and follow the voice prompts.
Your comments
Your suggestions help improve the accuracy, organization, and overall quality of the documentation. Send your comments and
feedback to: [email protected]
1
PowerMax with PowerMaxOS
This chapter introduces PowerMax systems and the PowerMaxOS operating environment.
Topics:
• Introduction to PowerMax with PowerMaxOS
• Software packages
• PowerMaxOS
Hardware expansion
Customers can increase the initial storage capacity in 13 TBu units, each known as a Flash Capacity Pack (in an open systems
environment) or a zFlash Capacity Pack (in a mainframe environment). The addition of Flash Capacity Packs or zFlash Capacity
Packs to an array is known as scaling up.
Also, customers can add further PowerMax Bricks or PowerMax zBricks to increase the capacity and capability of the system.
A PowerMax 2000 array can have a maximum of two PowerMax Bricks. A PowerMax 8000 can have a maximum of eight
PowerMax Bricks or PowerMax zBricks. The addition of bricks to an array is known as scaling out.
Finally, customers can increase the internal memory of the system. A PowerMax 2000 system can have 512 GB, 1 TB, or 2 TB of
memory on each engine. A PowerMax 8000 system can have 1 TB or 2 TB of memory on each engine.
Storage devices
Starting with PowerMaxOS 5978.444.444, there are two types of storage devices available for a PowerMax array:
● NVMe flash drive
● NVMe SCM (storage class memory) drive
System specifications
Detailed specifications of the PowerMax arrays are available from the Dell EMC website.
Software packages
There are four software packages for PowerMax arrays. The Essentials and Pro software packages are for open systems arrays,
while the zEssentials and zPro software packages are for mainframe arrays.
Standard features
The standard features in the Essentials software package are:
Standard features
The Pro software package contains all the standard features of the Essentials software package plus:
a. The Pro software package contains 75 PowerPath licenses. Extra licenses are available separately.
Optional features
The optional features of the Pro software package are:
Standard features
The standard features in the zEssentials software package are:
Optional Features
The optional features in the zEssentials software package are:
Standard features
The zPro software package contains all the standard features of the zEssentials software package plus:
Optional features
The optional features in the zPro software package are:
Package availability
The availability of the PowerMaxOS software packages on the PowerMax platforms is:
PowerMaxOS
This section summarizes the main features of PowerMaxOS.
PowerMaxOS emulations
PowerMaxOS provides emulations (executables) that perform specific data service and control functions in the PowerMaxOS
environment. The available emulations are:
a. The 16 Gb/s module autonegotiates to 16/8/4 Gb/s using optical SFP and OM2/OM3/OM4 cabling.
b. Only on PowerMax 8000 arrays.
c. Available on PowerMax arrays only.
d. The 32 Gb/s module autonegotiates to 32/16/8 Gb/s.
Embedded Management
The eManagement container application embeds management software (Solutions Enabler, SMI-S, Unisphere for PowerMax) on
the storage array, enabling you to manage the array without requiring a dedicated management host.
With eManagement, you can manage a single storage array and any SRDF attached arrays. To manage multiple storage arrays
with a single control pane, use the traditional host-based management interfaces: Unisphere and Solutions Enabler. To this end,
eManagement allows you to link-and-launch a host-based instance of Unisphere.
eManagement is typically preconfigured and enabled at the factory. However, eManagement can be added to arrays in the field.
Contact your support representative for more information.
Embedded applications require system memory. The following table lists the amount of memory unavailable to other data
services.
a. Data Movers are added in pairs and must have the same configuration.
b. The PowerMax 8000 can be configured through Sizer with a maximum of four Data Movers. However, six and eight Data
Movers can be ordered by RPQ. As the number of Data Movers increases, the maximum number of I/O cards, logical
cores, memory, and maximum capacity also increase.
c. For 2, 4, 6, and 8 Data Movers, respectively.
d. A single 2-port 10GbE Optical I/O module is required by each Data Mover for initial PowerMax configurations. However,
that I/O module can be replaced with a different I/O module (such as a 4-port 1GbE or 2-port 10GbE copper) using the
normal replacement capability that exists with any eNAS Data Mover I/O module. Also, additional I/O modules can be
configured through an I/O module upgrade/add as long as standard rules are followed (no more than three I/O modules per
Data Mover, all I/O modules must occupy the same slot on each director on which a Data Mover resides).
RAID levels
PowerMax arrays can use the following RAID levels:
● PowerMax 2000: RAID 5 (7+1) (Default), RAID 5 (3+1), RAID 6 (6+2), and RAID 1
● PowerMax 8000: RAID 5 (7+1), RAID 6 (6+2), and RAID 1
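As a rough guide to the protection overhead of the RAID levels listed above, the data-to-parity ratios translate into usable capacity as sketched below. This is illustrative arithmetic only; it assumes uniform drive sizes and ignores spares, vault space, and other system overhead that the sizing tools account for.

```python
# Approximate usable capacity for the RAID levels listed above.
# Illustrative only: ignores spares, vault space, and system overhead.

RAID_DATA_FRACTION = {
    "RAID 1": 1 / 2,        # mirrored pair: half the raw capacity is usable
    "RAID 5 (3+1)": 3 / 4,  # 3 data + 1 parity
    "RAID 5 (7+1)": 7 / 8,  # 7 data + 1 parity
    "RAID 6 (6+2)": 6 / 8,  # 6 data + 2 parity
}

def usable_tb(raw_tb: float, raid_level: str) -> float:
    """Return the approximate usable capacity for a given raw capacity in TB."""
    return raw_tb * RAID_DATA_FRACTION[raid_level]

if __name__ == "__main__":
    for level in RAID_DATA_FRACTION:
        print(f"{level}: {usable_tb(100.0, level):.1f} TB usable from 100 TB raw")
```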
D@RE components
Embedded D@RE uses the following components, all of which reside on the primary Management Module Control Station
(MMCS):
● Dell EMC Key Trust Platform (KTP) (embedded)—This component adds embedded key management functionality to the
KMIP Client.
● Lockbox—Hardware- and software-specific encrypted repository that securely stores passwords and other sensitive key
manager configuration information. The lockbox binds to a specific MMCS.
External D@RE uses the same components as embedded D@RE, and adds the following:
● Dell EMC Key Trust Platform (KTP)—Also known as the KMIP Client, this component resides on the MMCS and
communicates with external key managers using the OASIS Key Management Interoperability Protocol (KMIP) to manage
encryption keys.
● External Key Manager—Provides centralized encryption key management capabilities such as secure key generation,
storage, distribution, audit, and enabling Federal Information Processing Standard (FIPS) 140-2.
● Cluster/Replication Group—Multiple external key managers sharing configuration settings and encryption keys.
Configuration and key lifecycle changes made to one node are replicated to all members within the same cluster or
replication group.
Key protection
The local keystore file is encrypted with a 256-bit AES key derived from a randomly generated password file. This password file
is secured in the Lockbox. The Lockbox is protected using MMCS-specific stable system values (SSVs) of the primary MMCS.
These are the same SSVs that protect Secure Service Credentials (SSC).
Compromising the MMCS drive or copying Lockbox and keystore files off the array causes the SSV tests to fail. Compromising
the entire MMCS only gives an attacker access if they also successfully compromise SSC.
Key operations
D@RE provides a separate, unique Data Encryption Key (DEK) for each physical drive in the array, including spare drives. To
ensure that D@RE uses the correct key for a given drive:
● DEKs stored in the array include a unique key tag and key metadata; this information is included with the key material when the DEK is wrapped (encrypted) for use in the array.
● During encryption I/O, the expected key tag associated with the drive is supplied separately from the wrapped key.
● During key unwrap, the encryption hardware checks that the key unwrapped correctly and that it matches the supplied key
tag.
● Information in a reserved system LBA (Physical Information Block, or PHIB) verifies the key used to encrypt the drive and
ensures the drive is in the correct location.
● During initialization, the hardware performs self-tests to ensure that the encryption/decryption logic is intact. The self-test
prevents silent data corruption due to encryption hardware failures.
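As a conceptual illustration of the key-tag check described above, and not the actual D@RE key format, wrapping algorithm, or hardware implementation, the sketch below models a per-drive DEK with an integrity tag that is verified before the key is used. Real D@RE also wraps (encrypts) the DEK; this standard-library sketch only models the tag verification step.

```python
# Conceptual sketch of per-drive key-tag verification (not the D@RE implementation).
# A real system also encrypts (wraps) the DEK; this sketch only models the
# integrity check that the key in hand matches the tag expected for the drive.
import hashlib
import hmac
import secrets

KEK = secrets.token_bytes(32)  # stand-in for a key protected by the Lockbox

def make_key_record(drive_id: str) -> dict:
    """Generate a DEK for a drive and the tag stored alongside the wrapped key."""
    dek = secrets.token_bytes(32)
    tag = hmac.new(KEK, drive_id.encode() + dek, hashlib.sha256).hexdigest()
    return {"drive_id": drive_id, "dek": dek, "key_tag": tag}

def verify_key_for_drive(record: dict, expected_tag: str) -> bool:
    """Check that the unwrapped key matches the tag supplied for the drive."""
    tag = hmac.new(KEK, record["drive_id"].encode() + record["dek"],
                   hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected_tag)

record = make_key_record("drive-0042")
assert verify_key_for_drive(record, record["key_tag"])            # correct drive
assert not verify_key_for_drive(record, "tag-for-another-drive")  # mismatch detected
```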
Audit logs
The audit log records major activities on an array, including:
● Host-initiated actions
● Physical component changes
● Actions on the MMCS
● D@RE key management events
● Attempts blocked by security controls (Access Controls)
The Audit Log is secure and tamper-proof so event contents cannot be altered. Users with Auditor access can view, but not modify, the log.
Data erasure
Dell EMC Data Erasure uses specialized software to erase information on arrays. It mitigates the risk of information
dissemination, and helps secure information at the end of the information lifecycle. Data erasure:
● Protects data from unauthorized access
● Ensures secure data migration by making data on the source array unreadable
● Supports compliance with internal policies and regulatory requirements
Data Erasure overwrites data at the lowest application-addressable level to drives. The number of overwrites is configurable
from three (the default) to seven with a combination of random patterns on the selected arrays.
An optional certification service is available to provide a certificate of erasure. Drives that fail erasure are delivered to customers
for final disposal.
For individual flash drives, Secure Erase operations erase all physical flash areas on the drive which may contain user data.
The available data erasure services are:
● Dell EMC Data Erasure for Full Arrays—Overwrites data on all drives in the system when replacing, retiring or re-purposing
an array.
● Dell EMC Data Erasure/Single Drives—Overwrites data on individual drives.
● Dell EMC Disk Retention—Enables organizations that must retain all media to retain failed drives.
● Dell EMC Assessment Service for Storage Security—Assesses your information protection policies and suggests a
comprehensive security strategy.
All erasure services are performed on-site in the security of the customer’s data center and include a Data Erasure Certificate
and report of erasure results.
Vault to flash
PowerMax arrays initiate a vault operation when the system is powered down, goes offline, or when certain environmental conditions occur, such as the loss of a data center due to an air conditioning failure.
Each array comes with Standby Power Supply (SPS) modules. On a power loss, the array uses the SPS power to write the
system mirrored cache to flash storage. Vaulted images are fully redundant; the contents of the system mirrored cache are
saved twice to independent flash storage.
Data efficiency
Data efficiency is a feature of PowerMax systems that is designed to make the best available use of the storage space on a
storage system. Data efficiency has two elements:
● Inline compression
● Deduplication
They work together to reduce the amount of storage that an individual storage group requires. The space savings achieved through data efficiency are measured as the Data Reduction Ratio (DRR). Data efficiency operates on individual storage groups, so a system can have a mix of storage groups that use data efficiency and those that do not.
Inline compression is a feature of storage groups. When enabled (this is the default setting), new I/O to a storage group is
compressed when written to disk, while existing data on the storage group starts to compress in the background. After turning
off compression, new I/O is no longer compressed, and existing data remains compressed until it is written again, at which time
it decompresses.
Inline compression, deduplication, and over-subscription complement each other. Over-subscription allows presenting larger
than needed devices to hosts without having the physical drives to fully allocate the space represented by the thin devices (Thin
device oversubscription on page 58 has more information on over-subscription). Inline compression further reduces the data
footprint by increasing the effective capacity of the array.
The example in Inline compression and over-subscription on page 33 shows this. Here, 1.3 PB of host-attached devices (TDEVs) is over-provisioned to 1.0 PB of back-end devices (TDATs), which reside on 1.0 PB of flash drives. Following data compression, the data blocks are compressed by a ratio of 2:1, reducing the number of flash drives by half. In effect, with compression enabled, the array requires half as many drives to support a given front-end capacity.
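The arithmetic behind that example can be written out directly, using the figures quoted above (1.3 PB of TDEVs, 1.0 PB of TDATs, and a 2:1 compression ratio); this is illustrative arithmetic only, not a sizing tool.

```python
# Worked example of over-subscription plus inline compression,
# using the figures from the text (illustrative arithmetic only).
tdev_capacity_pb = 1.3      # host-visible thin devices (TDEVs)
backend_capacity_pb = 1.0   # back-end capacity (TDATs) on flash drives
compression_ratio = 2.0     # data reduction ratio (DRR) from compression

oversubscription = tdev_capacity_pb / backend_capacity_pb
physical_needed_pb = backend_capacity_pb / compression_ratio

print(f"Over-subscription ratio: {oversubscription:.2f}:1")
print(f"Physical flash needed after 2:1 compression: {physical_needed_pb:.1f} PB")
# -> roughly half the drives are needed to support the same front-end capacity
```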
Software compression
PowerMaxOS 5978 introduces software compression for PowerMax arrays. Software compression is an extension of regular,
inline compression and is available on PowerMax systems only. It operates on data that was previously compressed but has not
been accessed for 35 days or more. Software compression recompresses this data using an algorithm that may produce a much
greater DRR. The amount of extra compression that can be achieved depends on the nature of the data.
The criteria that software compression uses to select a data extent for recompression are:
Inline deduplication
Deduplication works in conjunction with inline compression to further improve efficiency in the use of storage space. It
reduces the number of copies of identical tracks that are stored on back-end devices. Depending on the nature of the data,
deduplication can provide additional data reduction over and above the reduction that compression provides.
The storage group is the unit that deduplication works on. When it detects a duplicated track in a group, deduplication replaces
it with a pointer to the track that already resides on back-end storage.
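Conceptually, deduplication amounts to hashing track contents and keeping a single stored copy per distinct hash, with later writes becoming pointers to that copy. The sketch below illustrates the idea only; it is not how PowerMaxOS implements deduplication internally.

```python
# Conceptual sketch of track-level deduplication within a storage group.
# Not the PowerMaxOS implementation; it only illustrates "store once, point thereafter".
import hashlib

backend_store: dict[str, bytes] = {}   # hash -> single stored copy of a track
track_pointers: dict[int, str] = {}    # track number -> hash of its contents

def write_track(track_no: int, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    if digest not in backend_store:
        backend_store[digest] = data   # first copy lands on back-end storage
    track_pointers[track_no] = digest  # duplicates become pointers to that copy

for n in range(4):
    write_track(n, b"identical track contents")
write_track(4, b"different track contents")

print(len(track_pointers), "tracks written,", len(backend_store), "stored")  # 5 tracks, 2 stored
```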
Availability
Deduplication is available only on PowerMax arrays that run PowerMaxOS. In addition, deduplication works on FBA data only. A
system with a mix of FBA and CKD devices can use deduplication, even when the FBA and CKD devices occupy separate SRPs.
Compatibility
Deduplication is compatible with the Dell EMC Live Optics performance analyzer. An array with deduplication can participate in a
performance study of an IT environment.
User management
Both Solutions Enabler and Unisphere for PowerMax have facilities to manage deduplication, including:
● Selecting the storage groups to use deduplication
● Monitoring the performance of the system
Management Interfaces on page 35 contains an overview of Solutions Enabler and Unisphere for PowerMax.
Table 5. Unisphere tasks (continued)
Storage: View and manage storage groups and storage tiers.
Hosts: View and manage initiators, masking views, initiator groups, array host aliases, and port groups.
Data Protection: View and manage local replication, monitor and manage replication pools, create and view device groups, and monitor and manage migration sessions.
Performance: Monitor and manage array dashboards, perform trend analysis for future capacity planning, and analyze data. Set preferences, such as general, dashboards, charts, reports, data imports, and alerts for performance management tasks.
Databases: Troubleshoot database and storage issues, and launch Database Storage Analyzer.
System: View and display dashboards, active jobs, alerts, array attributes, and licenses.
Events: View alerts, the job list, and the audit log.
Support: View online help for Unisphere tasks.
Unisphere also has a Representational State Transfer (REST) API. With this API you can access performance and configuration
information, and provision storage arrays. You can use the API in any programming environment that supports standard REST
clients, such as web browsers and programming platforms that can issue HTTP requests.
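For example, any standard HTTP client can call the API. The sketch below uses Python's requests library; the host name, credentials, API version, and resource path are placeholders, so consult the Dell EMC Unisphere for PowerMax REST API Concepts and Programmer's Guide for the exact paths that your Unisphere version supports.

```python
# Minimal sketch of calling the Unisphere REST API with a generic HTTP client.
# Host, credentials, API version, and resource path below are placeholders;
# consult the REST API Programmer's Guide for the paths your version supports.
import requests

UNISPHERE = "https://unisphere.example.com:8443"   # hypothetical management host
AUTH = ("rest_user", "rest_password")               # placeholder credentials

def list_arrays() -> list[str]:
    """Return the array IDs known to this Unisphere instance (illustrative path)."""
    url = f"{UNISPHERE}/univmax/restapi/90/system/symmetrix"
    # verify=False is only for a lab host with a self-signed certificate.
    resp = requests.get(url, auth=AUTH, verify=False, timeout=30)
    resp.raise_for_status()
    return resp.json().get("symmetrixId", [])

if __name__ == "__main__":
    print(list_arrays())
```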
Workload Planner
Workload Planner displays performance metrics for applications. Use Workload Planner to:
● Model the impact of migrating a workload from one storage system to another.
● Model proposed new workloads.
● Assess the impact of moving one or more workloads off of a given array running PowerMaxOS.
● Determine current and future resource shortfalls that require action to maintain the requested workloads.
Unisphere 360
Unisphere 360 is an on-premises management solution that provides a single window across arrays running PowerMaxOS at a
single site. Use Unisphere 360 to:
● Add a Unisphere server to Unisphere 360 to allow for data collection and reporting of Unisphere management storage
system data.
● View the system health, capacity, alerts and capacity trends for your Data Center.
● View all storage systems from all enrolled Unisphere instances in one place.
● View details on performance and capacity.
● Link and launch to Unisphere instances running V8.2 or higher.
● Manage Unisphere 360 users and configure authentication and authorization rules.
● View details of visible storage arrays, including current and target storage.
CloudIQ
CloudIQ is a web-based application for monitoring multiple PowerMax arrays simultaneously. However, CloudIQ is more than a
passive monitor. It uses predictive analytics to help with:
● Visualizing trends in capacity usage
● Predicting potential shortcomings in capacity and performance so that early action can be taken to avoid them
● Troubleshooting performance issues
CloudIQ is available with PowerMaxOS 5978.221.221 and later, and with Unisphere for PowerMax V9.0.1 and later. It is free for
customers to use.
Periodically, a data collector runs that gathers and packages data about the arrays that Unisphere manages and their
performance. The collector then sends the packaged data to CloudIQ. On receiving the data, CloudIQ unpacks it, processes
it, and makes it available to view in a UI.
CloudIQ is hosted on Dell EMC infrastructure that is secure, highly available, and fault tolerant. In addition, the infrastructure
provides a guaranteed, 4-hour disaster recovery window.
The rest of this section contains more information on CloudIQ and how it interacts with a PowerMax array.
Connectivity
The data collector communicates with CloudIQ through a Secure Remote Services (SRS) gateway. SRS uses an encrypted
connection running over HTTPS to exchange data with CloudIQ. The connection to the Secure Remote Services gateway is
either through the secondary Management Module Control Station (MMCS) within a PowerMax array, or through a direct
connection from the management host that runs Unisphere. Connection through the MMCS requires that the array runs
PowerMaxOS 5978.444.444.
The data collector is a component of Unisphere for PowerMax. So, it is installed along with Unisphere and you manage it with
Unisphere.
Registration
Before you can monitor an array, you must register it with SRS using the Settings dialog in Unisphere for PowerMax. To register an array, you need a current support contract with Dell EMC. Once an array is registered, data collection can begin. If you wish, you can exclude any array from data collection, and hence from being monitored by CloudIQ.
Data collection
The data collector gathers four categories of data and uses a different collection frequency for each category:
In the Performance category, CloudIQ displays bandwidth, latency, and IOPS (I/O operations per second). The values are calculated from these data items, collected from the array:
● Throughput read
● Throughput write
● Latency read
● Latency write
● IOPS read
● IOPS write
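As an illustration of how the displayed values can be derived from the read and write data items listed above (a simple sketch of the arithmetic, not CloudIQ's internal processing):

```python
# Combining read/write data items into displayed totals (illustrative only).
def combine(throughput_read, throughput_write,
            latency_read, latency_write,
            iops_read, iops_write):
    bandwidth = throughput_read + throughput_write          # e.g. MB/s
    total_iops = iops_read + iops_write
    # Overall latency as an IOPS-weighted average of read and write latency.
    latency = ((latency_read * iops_read + latency_write * iops_write)
               / total_iops) if total_iops else 0.0
    return bandwidth, latency, total_iops

print(combine(200.0, 100.0, 0.4, 0.8, 30000, 10000))
# -> (300.0, 0.5, 40000)
```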
The Configuration category contains information on configuration, capacity, and efficiency for the overall array, each SRP
(Storage Resource Pool), and each storage group.
CloudIQ provides the collector with configuration data that defines the data items to collect and their collection frequency.
CloudIQ sends this configuration data once a day (at most). As CloudIQ gets new features, or enhancements to existing
features, the data it requires changes accordingly. It communicates this to the data collector in each registered array in the form
of revised configuration data.
Monitor facilities
CloudIQ has a comprehensive set of facilities for monitoring a storage array:
● A summary page gives an overview of the health of all the arrays.
● The systems page gives a summary of the state of each individual array.
● The details page gives information about an individual array, its configuration, storage capacity, performance, and health.
● The health center provides details of the alerts that individual arrays have raised.
● The hosts page lists host systems connected to the monitored arrays.
The health score can help you see where the most severe health issues are, based on five core factors, shown in the following
table.
The differentiator for CloudIQ, however, is its use of predictive analytics. CloudIQ analyzes the data it has received from each array to determine the normal range of values for various metrics. Using this baseline, it can highlight when a metric goes outside of its normal range.
Support services
SRS provides more facilities than simply sending data from an array to CloudIQ:
● An array can automatically open service requests for critical issues that arise.
● Dell EMC support staff can access the array to troubleshoot critical issues and to obtain diagnostic information such as log
and dump files.
Security
Each customer with access to CloudIQ has a dedicated access portal through which they can view their own arrays only. A
customer does not have access to any other customer's arrays or data. In addition, SRS uses point-to-point encryption over a
dedicated VPN, multi-factor authentication, customer-controlled access policies, and RSA digital certificates to ensure that all
customer data is securely transported to Dell EMC.
The infrastructure that CloudIQ uses is regularly scanned for vulnerabilities with remediation taking place as a result of these
scans. This helps to maintain the security and privacy of all customer data.
CyberSecIQ
CyberSecIQ is a cloud-based, as-a-service storage security analytics application that provides security assessment and measures the overall cyber security risk level of storage systems using intelligent, comprehensive, and predictive analytics.
CyberSecIQ uses Secure Remote Services to collect system logs, system configurations, security configurations and settings,
alerts, and performance metrics from the Unisphere system.
Prerequisites for the application include:
● The Secure Remote Services gateway has already been registered in Unisphere.
● The Secure Remote Services gateway must be directly connected to Unisphere.
● The setting for sending data to CloudIQ must be enabled.
● There must be at least one local array in Unisphere.
Solutions Enabler
Solutions Enabler provides a comprehensive command line interface (SYMCLI) to manage your storage environment.
SYMCLI commands are invoked from a management host, either interactively on the command line, or using scripts.
SYMCLI is built on functions that use system calls to generate low-level I/O SCSI commands. Configuration and status
information is maintained in a host database file, reducing the number of enquiries from the host to the arrays.
Use SYMCLI to:
● Configure array software (for example, TimeFinder, SRDF, Open Replicator)
● Monitor device configuration and status
● Perform control operations on devices and data objects
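Because SYMCLI commands can be run from scripts as well as interactively, simple automation is possible from any scripting language. The sketch below shells out to the symcfg list command from Python; it assumes that Solutions Enabler is installed on the management host and that the SYMCLI binaries are on the PATH.

```python
# Running a SYMCLI command from a script (assumes Solutions Enabler is installed
# on this management host and the SYMCLI binaries are on the PATH).
import subprocess

def symcli(*args: str) -> str:
    """Run a SYMCLI command and return its output, raising on a non-zero exit."""
    result = subprocess.run(list(args), capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    # List the arrays visible to this host.
    print(symcli("symcfg", "list"))
```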
Solutions Enabler also has a Representational State Transfer (REST) API. Use this API to access performance and configuration
information, and provision storage arrays. It can be used in any programming environment that supports standard REST clients,
such as web browsers and programming platforms that can issue HTTP requests.
Mainframe Enablers
The Dell EMC Mainframe Enablers are software components that allow you to monitor and manage arrays running PowerMaxOS
in a mainframe environment:
● ResourcePak Base for z/OS
Enables communication between mainframe-based applications (provided by Dell EMC or independent software vendors)
and PowerMax/VMAX arrays.
● SRDF Host Component for z/OS
Monitors and controls SRDF processes through commands executed from a host. SRDF maintains a real-time copy of data at
the logical volume level in multiple arrays located in physically separate sites.
● Dell EMC Consistency Groups for z/OS
Ensures the consistency of data remotely copied by SRDF in the event of a rolling disaster.
● AutoSwap for z/OS
Handles automatic workload swaps between arrays when an unplanned outage or problem is detected.
● TimeFinder SnapVX
With Mainframe Enablers V8.0 and higher, SnapVX creates point-in-time copies directly in the Storage Resource Pool (SRP)
of the source device, eliminating the concepts of target devices and source/target pairing. SnapVX point-in-time copies
are accessible to the host through a link mechanism that presents the copy on another device. TimeFinder SnapVX and
PowerMaxOS support backward compatibility to traditional TimeFinder products, including TimeFinder/Clone, TimeFinder VP
Snap, and TimeFinder/Mirror.
● Data Protector for z Systems (zDP™)
With Mainframe Enablers V8.0 and higher, zDP is deployed on top of SnapVX. zDP provides a granular level of application
recovery from unintended changes to data. zDP achieves this by providing automated, consistent point-in-time copies of
data from which an application-level recovery can be conducted.
● TimeFinder/Clone Mainframe Snap Facility
Produces point-in-time copies of full volumes or of individual datasets. TimeFinder/Clone operations involve full volumes or
datasets where the amount of data at the source is the same as the amount of data at the target. TimeFinder VP Snap
leverages clone technology to create space-efficient snaps for thin devices.
● TimeFinder/Mirror for z/OS
Allows the creation of Business Continuance Volumes (BCVs) and provides the ability to ESTABLISH, SPLIT, RE-ESTABLISH
and RESTORE from the source logical volumes.
● TimeFinder Utility
Conditions SPLIT BCVs by relabeling volumes and (optionally) renaming and recataloging datasets. This allows BCVs to be
mounted and used.
SMI-S Provider
Dell EMC SMI-S Provider supports the SNIA Storage Management Initiative (SMI), an ANSI standard for storage management.
This initiative has developed a standard management interface that resulted in a comprehensive specification (SMI-Specification
or SMI-S).
SMI-S defines the open storage management interface, to enable the interoperability of storage management technologies from
multiple vendors. These technologies are used to monitor and control storage resources in multivendor or SAN topologies.
Solutions Enabler components required for SMI-S Provider operations are included as part of the SMI-S Provider installation.
VASA Provider
The VASA Provider enables PowerMax management software to inform vCenter of how VMDK storage, including vVols, is
configured and protected. These capabilities are defined by Dell EMC and include characteristics such as disk type, type of
provisioning, storage tiering and remote replication status. This allows vSphere administrators to make quick and informed
decisions about virtual machine placement. VASA offers the ability for vSphere administrators to complement their use of
plugins and other tools to track how devices hosting vVols are configured to meet performance and availability needs. Details
about VASA Provider replication groups can be viewed on the Unisphere vVols dashboard.
● Monitor and analyze configurations and capacity growth
● Optimize your environment to improve return on investment
Virtualization enables businesses to simplify management, control costs, and guarantee uptime. However, virtualized
environments also add layers of complexity to the IT infrastructure that reduce visibility and can complicate the management
of storage resources. SRM addresses these layers by providing visibility into the physical and virtual relationships to ensure
consistent service levels.
As you build out a cloud infrastructure, SRM helps you ensure storage service levels while optimizing IT resources — both key
attributes of a successful cloud deployment.
SRM is designed for use in heterogeneous environments containing multi-vendor networks, hosts, and storage devices. The
information it collects and the functionality it manages can reside on technologically disparate devices in geographically diverse
locations. SRM moves a step beyond storage management and provides a platform for cross-domain correlation of device
information and resource topology, and enables a broader view of your storage environment and enterprise data center.
SRM provides a dashboard view of the storage capacity at an enterprise level through Watch4net. The Watch4net dashboard
view displays information to support decisions regarding storage capacity.
The Watch4net dashboard consolidates data from multiple ProSphere instances spread across multiple locations. It gives a quick
overview of the overall capacity status in the environment, raw capacity usage, usable capacity, used capacity by purpose,
usable capacity by pools, and service levels.
SRDF/Cluster Enabler
Cluster Enabler (CE) for Microsoft Failover Clusters is a software extension of failover clusters functionality. Cluster Enabler
enables Windows Server 2012 (including R2) Standard and Datacenter editions running Microsoft Failover Clusters to operate
across multiple connected storage arrays in geographically distributed clusters.
SRDF/Cluster Enabler (SRDF/CE) is a software plug-in module to Dell EMC Cluster Enabler for Microsoft Failover Clusters
software. The Cluster Enabler plug-in architecture consists of a CE base module component and separately available plug-in
modules, which provide your chosen storage replication technology.
SRDF/CE supports:
● Synchronous and asynchronous mode (SRDF modes of operation on page 81 summarizes these modes)
● Concurrent and cascaded SRDF configurations (SRDF multi-site solutions on page 76 summarizes these configurations)
Extended features
SRDF/TimeFinder Manager for IBM i extended features provide support for the IBM independent ASP (IASP) functionality.
IASPs are sets of switchable or private auxiliary disk pools (up to 223) that can be brought online/offline on an IBM i host
without affecting the rest of the system.
When combined with SRDF/TimeFinder Manager for IBM i, IASPs let you control SRDF or TimeFinder operations on arrays
attached to IBM i hosts, including:
● Display and assign TimeFinder SnapVX devices.
● Execute SRDF or TimeFinder commands to establish and split SRDF or TimeFinder devices.
● Present one or more target devices containing an IASP image to another host for business continuance (BC) processes.
Extended features control operations can be performed:
● From the SRDF/TimeFinder Manager menu-driven interface.
● From the command line using SRDF/TimeFinder Manager commands and associated IBM i commands.
AppSync
Dell EMC AppSync offers a simple, SLA-driven, self-service approach for protecting, restoring, and cloning critical Microsoft
and Oracle applications and VMware environments. After defining service plans, application owners can protect, restore, and
clone production data quickly with item-level granularity by using the underlying Dell EMC replication technologies. AppSync also
provides an application protection monitoring service that generates alerts when the SLAs are not met.
AppSync supports the following applications and storage arrays:
● Applications—Oracle, Microsoft SQL Server, Microsoft Exchange, and VMware vStorage VMFS and NFS datastores and File
systems.
● Replication Technologies—SRDF, SnapVX, RecoverPoint, XtremIO Snapshot, VNX Advanced Snapshots, VNXe Unified
Snapshot, and ViPR Snapshot.
On PowerMax arrays:
● The Essentials software package contains AppSync in a starter bundle. The AppSync Starter Bundle provides the license
for a scale-limited, yet fully functional version of AppSync. For more information, see the AppSync Starter Bundle with
PowerMax Product Brief available on the Dell EMC Online Support Website.
● The Pro software package contains the AppSync Full Suite.
3
Open Systems Features
This chapter introduces the open systems features of PowerMax arrays.
Topics:
• PowerMaxOS support for open systems
• PowerPath
• Backup and restore using PowerProtect Storage Direct and Data Domain
• VMware Virtual Volumes
PowerPath
PowerPath runs on an application host and manages data paths between the host and LUNs on a storage array. PowerPath is
available for various operating systems including AIX, Microsoft Windows, Linux, and VMware.
This section is a high-level summary of the PowerPath capabilities for PowerMax arrays. It also shows where to get detailed
information, including instructions on how to install, configure, and manage PowerPath.
Operational overview
A data path is a physical connection between an application host and a LUN on a PowerMax array. The path has several
components, including:
● Host bus adapter (HBA) port
● Cables
● Switches
● PowerMax array port
● The LUN
PowerPath manages the use of the paths between a host and a LUN to optimize their use and to take corrective action should
an error occur.
There can be multiple paths to a LUN (see the example after this list), enabling PowerPath to:
● Balance the I/O load across the available paths. In turn, this:
○ Optimizes the use of the paths
○ Improves overall I/O performance
○ Reduces management intervention
○ Eliminates the need to configure paths manually
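On hosts where the PowerPath CLI is installed, the powermt utility shows how PowerPath is using these paths. A minimal illustration (the output format depends on the PowerPath version and operating system):

# Show every array logical device that PowerPath manages, the state of each
# path (alive or dead), and the load-balancing policy in effect.
powermt display dev=all

# Summarize the number of paths per HBA and per storage-system port.
powermt display paths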
Host registration
Each host that uses PowerPath to access an array registers itself with the array. The information that PowerPath sends to the
array is:
● Host name
● Operating system and version
● Hardware
● PowerPath version
● Name of the cluster that the host is part of, and the host's name within that cluster (if applicable)
● WWN of the host
● Name of each VM on the host and the operating system that each runs
The array stores this information in memory.
PowerPath repeats the registration process every 24 hours. In addition, it checks the host information at hourly intervals. If the
name or IP address of the host has changed, PowerPath repeats the registration process with the array immediately.
Rather than wait for the next registration check to occur, a system administrator can register a change immediately using the
PowerPath CLI. If necessary, a site can control whether automatic registration occurs, both for an individual host and for an
entire array.
In addition, the array deletes information on any host that has not registered over the last 72 hours. This prevents a buildup of
out-of-date host data.
Device status
In addition to host information, PowerPath sends device information to the array. The device information includes:
● Date of last usage
● Mount status
● Name of the process that owns the device
● PowerPath I/O statistics (these are in addition to the I/O statistics that the array itself gathers)
The array stores this information in memory.
Benefits of the device information include:
● Early identification of potential I/O problems
● Better long-term planning of array and host usage
● Recovery and redeployment of unused storage assets
Management
Solutions Enabler and Unisphere have facilities to:
● View host information
● View device information
More information
There is more information about PowerPath, how to configure it, and manage it in:
● PowerPath Installation and Administration Guide
● PowerPath Release Notes
● PowerPath Family Product Guide
● PowerPath Family CLI and System Messages Reference
● PowerPath Management Appliance Installation and Configuration Guide
● PowerPath Management Appliance Release Notes
There are Installation and Administration Guide and Release Notes documents for each supported operating system.
Backup
A LUN is the basic unit of backup in Storage Direct. For each LUN, Storage Direct creates a backup image on the Data Domain
array. You can group backup images to create a backup set. One use of the backup set is to capture all the data for an
application as a point-in-time image.
Backup process
To create a backup of a LUN, Storage Direct:
1. Uses SnapVX to create a local snapshot of the LUN on the PowerMax array (the primary storage array).
After the snapshot is created, Storage Direct and the application can proceed independently of each other, and the backup
process has no further impact on the application.
2. Copies the snapshot to a vdisk on the Data Domain array where it is deduplicated and cataloged.
On the primary storage array, the vdisk appears as a FAST.X encapsulated LUN. The copy of the snapshot to the vdisk uses
existing SnapVX link copy and PowerMax destaging technologies.
When the vdisk contains all the data for the LUN, Data Domain converts the data into a static image. This image then has
metadata added to it and Data Domain catalogs the resultant backup image.
Restore
Storage Direct provides two forms of data restore:
● Object level restore from a selected backup image
● Full application rollback restore
File system agent: Provides facilities to back up, manage, and restore application LUNs.
More information
More information about Storage Direct, its components, how to configure them, and how to use them is available in:
● PowerProtect Storage Direct Solutions Guide
● File System Agent Installation and Administration Guide
● Database Application Agent Installation and Administration Guide
● Microsoft Application Agent Installation and Administration Guide
vVol scalability
The vVol scalability limits are:
NOTE: vVol snapshots are managed through vSphere only. You cannot use Unisphere or Solutions Enabler to create them.
vVol workflow
Requirements
Install and configure the storage management applications (Unisphere for PowerMax or Solutions Enabler, and the VASA Provider) and the VMware applications (vCenter and the ESXi hosts) that the following procedure uses.
Procedure
The creation of a vVol-based virtual machine involves both the storage administrator and the VMware administrator:
Storage administrator: The storage administrator uses Unisphere or Solutions Enabler to create the storage and present it to the VMware environment:
1. Create one or more storage containers on the storage array. This step defines how much storage, and from which service level, the VMware user can provision.
2. Create Protocol Endpoints and provision them to the ESXi hosts.
VMware administrator: The VMware administrator uses the vSphere Web Client to deploy the VM on the storage array:
1. Add the VASA Provider to the vCenter. This allows vCenter to communicate with the storage array.
2. Create a vVol datastore from the storage container.
3. Create the VM storage policies.
4. Create the VM in the vVol datastore, selecting one of the VM storage policies.
● Dynamic volume expansion for 3390 TDEVs, including devices that are part of an SRDF (except SRDF/Metro), SnapVX,
Concurrent Copy, or SDDF configuration.
● Persistent IU Pacing (Extended Distance FICON)
● HyperPAV
● SuperPAV
● PDS Search Assist
● Modified Indirect Data Address Word (MIDAW)
● Multiple Allegiance (MA)
● Sequential Data Striping
● Multi-Path Lock Facility
● Product Suite for z/TPF
● HyperSwap
● Secure Snapsets in SnapVX for zDP
● Global Mirror
● Transparent Cloud Tiering
NOTE: A PowerMax array can participate in a z/OS Global Mirror (XRC) configuration only as a secondary.
Transparent Cloud Tiering support
PowerMax (or VMAX) can be used in an IBM Transparent Cloud Tiering (TCT) environment to store DASD files in, or retrieve
them from, the cloud with minimal CPU requirements on the z/OS host.
PowerMax TCT support uses the Dell EMC Disk Library for mainframe (DLm) as its 'cloud'. Count Key Data (CKD) extents are
stored as files on standard 3590 tape volumes. The DLm 3590 tape volumes and the DLm tape drives for TCT are separate from
any z/OS-defined tape volumes and tape drives. TCT 3590 tapes are not accessible to, or managed by, z/OS. Tape volumes are
written to the DLm from the PowerMax over a FICON connection. DLm then stores the data on any backend storage that DLm
supports. Optionally, the DLm Long-Term Retention feature can then be used, independent of TCT, to move the data to a Dell
EMC Elastic Cloud Storage (ECS) solution.
A cloud object store is required for TCT support to operate. This cloud must support the OpenStack SWIFT protocol and is used
to store cloud metadata. If ECS is used as the cloud, the ECS can be the same or a different ECS from any ECS deployed with
the DLm.
A REST API proxy server is required in each z/OS image accessing a TCT-enabled PowerMax. This proxy server runs as a
separate address space in z/OS.
The ResourcePak Base for z/OS Product Guide provides additional information about TCT support, including TCT support
requirements and restrictions. It also discusses how to set up and run the Dell EMC REST API proxy and the Dell EMC REST API
utility.
Figure 6. TCT environment with PowerMax and DLm
NOTE: A split is a logical partition of the storage array, identified by unique devices, SSIDs, and host serial number. The maximum storage array host address per array is inclusive of all splits.
The following table lists the maximum LPARs per port based on the number of LCUs with active paths:
Cascading configurations
Cascading configurations greatly enhance FICON connectivity between local and remote sites by using switch-to-switch
extensions of the CPU to the FICON network. These cascaded switches communicate over long distances using a small number
of high-speed lines called interswitch links (ISLs). A maximum of two switches may be connected together within a path
between the CPU and the storage array.
A cascaded configuration requires switches from the same vendor. To support cascading, each switch vendor requires
specific models, hardware features, software features, configuration settings, and restrictions. Specific IBM CPU models,
operating system release levels, host hardware, and PowerMaxOS levels are also required.
The Dell EMC Support Matrix, available through E-Lab Interoperability Navigator (ELN) at http://elabnavigator.emc.com, has
the most up-to-date information on switch support.
5
Provisioning
This chapter introduces storage provisioning.
Topics:
• Thin provisioning
• Multi-array provisioning
Thin provisioning
PowerMax arrays are configured in the factory with thin provisioning pools ready for use. Thin provisioning improves capacity
utilization and simplifies storage management. It also enables storage to be allocated and accessed on demand from a pool of
storage that services one or many applications. LUNs can be “grown” over time as space is added to the data pool with no
impact to the host or application. Data is widely striped across physical storage (drives) to deliver better performance than
standard provisioning.
NOTE: Data devices (TDATs) are pre-configured at the factory, while the host-addressable storage devices (TDEVs) are
created by either the customer or customer support, depending on the environment.
Thin provisioning increases capacity utilization and simplifies storage management by:
● Enabling more storage to be presented to a host than is physically consumed
● Allocating storage only as needed from a shared thin provisioning pool
● Making data layout easier through automated wide striping
● Reducing the steps required to accommodate growth
Thin provisioning allows you to:
● Create host-addressable thin devices (TDEVs) using Unisphere or Solutions Enabler
● Add the TDEVs to a storage group
● Run application workloads on the storage groups
When hosts write to TDEVs, the physical storage is automatically allocated from the default Storage Resource Pool.
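A minimal Solutions Enabler sketch of these steps (the array ID, storage group name, service level, and device sizes are hypothetical, and the sg= clause on create dev depends on the Solutions Enabler release, so verify the syntax against the CLI reference for your release):

# Create a storage group that draws from the default SRP at the Gold service level.
symsg -sid 001 create App1_SG -srp SRP_1 -slo Gold

# Create four 100 GB thin devices (TDEVs) and add them to the storage group.
symconfigure -sid 001 -cmd "create dev count=4, size=100 GB, emulation=FBA, config=TDEV, sg=App1_SG;" commit

Physical storage is then drawn from the Storage Resource Pool only as hosts write to the devices.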
Pre-configuration for thin provisioning
PowerMax arrays are custom-built and pre-configured with array-based software applications, including a factory pre-
configuration for thin provisioning that includes:
● Data devices (TDAT) — an internal device that provides physical storage used by thin devices.
● Virtual provisioning pool — a collection of data devices of identical emulation and protection type, all of which reside on
drives of the same technology type and speed. The drives in a data pool are from the same disk group.
● Disk group — a collection of physical drives within the array that share the same drive technology and capacity. RAID
protection options are configured at the disk group level. Dell Technologies strongly recommends that you use one or more
of the RAID data protection schemes for all data devices.
● Storage Resource Pools — one (default) Storage Resource Pool is pre-configured on the array. This process is automatic
and requires no setup. You cannot modify Storage Resource Pools, but you can list and display their configuration. You can
also generate reports detailing the demand storage groups are placing on the Storage Resource Pools.
Thin devices (TDEVs) have no storage allocated until the first write is issued to the device. Instead, the array allocates only a
minimum allotment of physical storage from the pool, and maps that storage to a region of the thin device including the area
targeted by the write.
These initial minimum allocations are performed in units called thin device extents. Each extent for a thin device is 1 track (128
KB).
When a read is performed on a device, the data being read is retrieved from the appropriate data device to which the thin
device extent is allocated. Reading an area of a thin device that has not been mapped does not trigger allocation operations.
Reading an unmapped block returns a block in which each byte is equal to zero.
When more storage is required to service existing or future thin devices, data devices can be added to existing thin storage
groups.
When a masking view is created, the necessary mapping and masking operations are performed automatically to provision
storage.
After a masking view exists, any changes to its grouping of initiators, ports, or storage devices automatically propagate
throughout the view, automatically updating the mapping and masking as required.
Figure: Masking view example (SYM-002353). The HBAs of an ESX host running several VMs form the host initiators in the initiator group, the array front-end ports form the port group, and the devices form the storage group; the masking view associates the three groups.
Initiator group
A logical grouping of Fibre Channel initiators. An initiator group is either a parent, which can contain other
initiator groups, or a child, which contains initiators. Mixing initiators and child initiator groups in the same
group is not supported.
Port group
A logical grouping of Fibre Channel front-end director ports. A port group can contain up to 32 ports.
Storage group
A logical grouping of thin devices. LUN addresses are assigned to the devices within the storage group
when the view is created, whether the group is cascaded or standalone. Often there is a correlation
between a storage group and a host application. One or more storage groups may be assigned to
an application to simplify management of the system. Storage groups can also be shared among
applications.
Cascaded storage group
A parent storage group made up of child storage groups (the parent storage group members) that
themselves contain devices. By assigning child storage groups to the parent storage group and applying
the masking view to the parent storage group, the masking view inherits all devices in the corresponding
child storage groups.
Masking view
An association between one initiator group, one port group, and one storage group. When a masking
view is created, if a group within the view is a parent, the contents of its children are used: for
example, the initiators from the child initiator groups and the devices from the child storage
groups. Depending on the server and application requirements, each server or group of servers may have
one or more masking views that associate a set of thin devices to an application, server, or cluster of
servers.
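A hedged Solutions Enabler sketch of building these groups and combining them into a masking view (the WWNs, director:port identifiers, and group names are placeholders; confirm the symaccess syntax for your release):

# Group the host's HBA initiators.
symaccess -sid 001 create -name App1_IG -type initiator -wwn 10000000c9aabb01
symaccess -sid 001 -name App1_IG -type initiator add -wwn 10000000c9aabb02

# Group the front-end director ports that the host is zoned to.
symaccess -sid 001 create -name App1_PG -type port -dirport 1D:4,2D:4

# Associate the initiator group, port group, and an existing storage group.
symaccess -sid 001 create view -name App1_MV -ig App1_IG -pg App1_PG -sg App1_SG

Creating the view triggers the automatic mapping and masking described above.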
Multi-array provisioning
The multi-array Provisioning Storage wizard simplifies the task of identifying the optimal target array and provisioning storage on
that array.
Unisphere for PowerMax 9.2 provides a system-level provisioning launch point that takes array-independent inputs (storage
group name, device count and size, and (optionally) response time target or initiator filter), selects ports based on
current utilization and port group best practices, and returns component impact scores for all locally connected arrays running
HYPERMAX OS 5977 or PowerMaxOS 5978.
You can also select a provisioning template and provision new storage using the wizard. Storage group capacity information
and response time targets that are already part of the provisioning template are populated when the wizard opens. The most
suitable ports (based on specified options) are selected and a list of all locally connected arrays (V3 and higher) are returned.
The list is sorted by the impact of the new workload on the target arrays.
Host I/O limits (quotas) can be used to limit the amount of Front End (FE) Bandwidth and I/O operations per second (IOPS)
that can be consumed by a set of storage volumes over a set of director ports. Host I/O limits are defined as storage group
attributes – the maximum bandwidth (in MB per second) and the maximum IOPS. The Host I/O limit for a storage group can be
either active or inactive.
6
Service levels
Service levels in PowerMaxOS enable a storage administrator to define quality-of-service criteria for individual storage groups.
Topics:
• Definition of service levels
• Use of service levels to maintain system performance
• Usage examples
• Manage service levels
Some service levels have a minimum response time (known as the floor). The floor defines the shortest time that each I/O
operation on an SG with that service level takes to complete.
The storage administrator can use the service levels to help ensure that the performance of high-priority applications is not
disrupted by lower-priority ones.
Service Level      Target            Floor
Platinum           0.8 ms            No
Gold               1 ms              No
Silver             3.6 ms            approx. 3.6 ms
Bronze             7.2 ms            approx. 7.2 ms
Optimized          Not applicable    Not applicable
Service level priorities
Together, the five service levels create a priority list of service levels, each providing different quality-of-service criteria:
Use of service levels to maintain system performance
PowerMaxOS uses the service level property of each SG to maintain system performance.
Usage examples
Here are three examples of using service levels:
● Protected application
● Service provider
● Relative application priority
Protected application
A storage administrator wants to ensure that a set of SGs is protected from the performance impact of other, noncritical
applications that use the storage array. In this case, the administrator assigns the Diamond service level to the critical SGs and
sets lower-priority service levels on all other SGs.
For instance:
● An enterprise-critical OLTP application requires almost immediate response to each I/O operation. The storage administrator
may assign the Diamond level to its SGs.
● A batch program that runs overnight has less stringent requirements. So, the storage administrator may assign the Bronze
level to its SGs.
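A minimal Solutions Enabler sketch of those assignments (the storage group names are hypothetical, and the option name for the service level may differ between releases):

# Protect the OLTP application.
symsg -sid 001 -sg OLTP_SG set -slo Diamond

# Deprioritize the overnight batch workload.
symsg -sid 001 -sg Batch_SG set -slo Bronze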
Service provider
A provider of storage for external customers has a range of prices. Storage with lower response times is more costly than that
with longer response times. In this case, the provider uses service levels to establish SGs that provide the required range of
performance. An important part of this strategy is the use of the Silver and Bronze service levels to introduce delays even
though the storage array could provide a shorter response time.
7
Automated data placement
Automated data placement is a feature of PowerMaxOS 5978.444.444 and later on PowerMax arrays. It takes advantage of the
superior performance of SCM drives to optimize access to frequently accessed data and data with high-priority service levels.
Topics:
• Environment
• Operation
• Service level biasing
• Compression and deduplication
• Availability
Environment
The performance of SCM drives is an order of magnitude better than that of NVMe drives. So an array that contains both types of
drive effectively has two storage tiers: the higher-performance SCM drives and the NVMe drives.
Automated data placement takes advantage of the performance difference to optimize access to data that is frequently
accessed. The feature can also help to optimize access to storage groups that have higher priority service levels.
An array that contains only SCM drives or NVMe drives has only one tier of storage. So that type of array cannot use automated
data placement.
Operation
Automated data placement monitors the frequency that the application host accesses data in the array. As a piece of data
becomes accessed more frequently, automated data placement promotes that data to the SCM drives. Similarly, when a piece
of data is accessed less frequently, automated data placement relegates it to the NVMe devices. Should more data need to be
promoted but there is no available space in the SCM drives, automated data placement relegates data that has been accessed
least frequently. This algorithm ensures that the SCM drives contain the most frequently accessed data.
Availability
Automated data placement is available for arrays that contain any combination of FBA and CKD devices.
About TimeFinder
Dell EMC TimeFinder delivers point-in-time copies of volumes that can be used for backups, decision support, data warehouse
refreshes, or any other process that requires parallel access to production data.
Previous VMAX families offered multiple TimeFinder products, each with their own characteristics and use cases. These
traditional products required a target volume to retain snapshot or clone data.
PowerMaxOS and HYPERMAX OS introduce TimeFinder SnapVX which provides the best aspects of the traditional TimeFinder
offerings combined with increased scalability and ease-of-use.
TimeFinder SnapVX emulates the following legacy replication products:
● FBA devices:
○ TimeFinder/Clone
○ TimeFinder/Mirror
○ TimeFinder VP Snap
● Mainframe (CKD) devices:
○ TimeFinder/Clone
○ TimeFinder/Mirror
○ TimeFinder/Snap
○ Dell EMC Dataset Snap
○ IBM FlashCopy (Full Volume and Extent Level)
TimeFinder SnapVX dramatically decreases the impact of snapshots and clones:
● For snapshots, this is done by using redirect on write technology (ROW).
● For clones, this is done by storing changed tracks (deltas) directly in the Storage Resource Pool of the source device -
sharing tracks between snapshot versions and also with the source device, where possible.
There is no need to specify a target device and source/target pairs. SnapVX supports up to 256 snapshots per volume. Each
snapshot can have a name and an automatic expiration date.
Access to snapshots
With SnapVX, a snapshot can be accessed by linking it to a host accessible volume (known as a target volume). Target volumes
are standard PowerMax TDEVs. Up to 1024 target volumes can be linked to the snapshots of the source volumes. The 1024 links
can all be to the same snapshot of the source volume, or they can be multiple target volumes linked to multiple snapshots from
the same source volume. However, a target volume may be linked only to one snapshot at a time.
Snapshots can be cascaded from linked targets, and targets can be linked to snapshots of linked targets. There is no limit to the
number of levels of cascading, and the cascade can be broken.
SnapVX links to targets in the following modes:
● Nocopy Mode (Default): SnapVX does not copy data to the linked target volume but still makes the point-in-time image
accessible through pointers to the snapshot. The target device is modifiable and retains the full image in a space-efficient
manner even after unlinking from the point-in-time.
● Copy Mode: SnapVX copies all relevant tracks from the snapshot's point-in-time image to the linked target volume. This
creates a complete copy of the point-in-time image that remains available after the target is unlinked.
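As an illustration, a snapshot of a storage group can be presented to a host by linking it to a target storage group (a sketch only; the target storage group name is hypothetical and must already contain suitably sized TDEVs):

# Link the snapshot to the target storage group in the default nocopy mode.
symsnapvx -sid 001 -sg StorageGroup1 -lnsg StorageGroup1_TGT -snapshot_name sg1_snap link

# Alternatively, create a full copy of the point-in-time image on the targets.
symsnapvx -sid 001 -sg StorageGroup1 -lnsg StorageGroup1_TGT -snapshot_name sg1_snap link -copy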
Targetless snapshots
With the TimeFinder SnapVX management interfaces, you can take a snapshot of an entire PowerMax storage group using a
single command. PowerMax supports up to 64K storage groups, which is enough, even in the most demanding environment, to
provide one for each application. The storage group construct already exists in most cases, as storage groups are created for
masking views. TimeFinder SnapVX uses this existing structure, reducing the administration required to maintain the application
and its replication environment.
Creation of SnapVX snapshots does not require preconfiguration of extra volumes. In turn, this reduces the amount of cache
that SnapVX snapshots use and simplifies implementation. Snapshot creation and automatic termination can easily be scripted.
The following Solutions Enabler example creates a snapshot with a 2-day retention period. The command can be scheduled to
run as part of a script to create multiple versions of the snapshot. Each snapshot shares tracks where possible with the other
snapshots and the source devices. Use a cron job or scheduler to run the snapshot script on a schedule to create up to 256
snapshots of the source volumes; enough for a snapshot every 15 minutes with 2 days of retention:
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_snap establish -ttl -delta 2
If a restore operation is required, any of the snapshots created by this example can be specified.
When the storage group transitions to a restored state, the restore session can be terminated. The snapshot data is preserved
during the restore process and can be used again should the snapshot data be required for a future restore.
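A hedged sketch of restoring from one of those snapshots and then ending the restore session (the generation number and exact option names may vary by Solutions Enabler release):

# Restore the source devices from generation 2 of the snapshot.
symsnapvx -sid 001 -sg StorageGroup1 -snapshot_name sg1_snap -generation 2 restore

# Once the storage group reaches the restored state, end the restore session.
# The snapshot itself is preserved for future restores.
symsnapvx -sid 001 -sg StorageGroup1 -snapshot_name sg1_snap -generation 2 terminate -restored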
Secure snaps
Secure snaps prevent administrators or other high-level users from deleting snapshot data, intentionally or not. Secure snaps
are also immune to automatic failure resulting from running out of Storage Resource Pool (SRP) or Replication Data
Pointer (RDP) space on the array.
When the administrator creates a secure snapshot, they assign it an expiration date and time. The administrator can express
the expiration either as a delta from the current date or as an absolute date. Once the expiration date passes, and if the
snapshot has no links, PowerMaxOS automatically deletes the snapshot. Before its expiration, administrators can only extend
the expiration date of a secure snapshot; they cannot shorten the date or delete the snapshot.
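A sketch of creating a secure snapshot with a seven-day retention (the -secure option is assumed here in place of -ttl; confirm the exact syntax for your Solutions Enabler release):

# Create a secure snapshot that cannot be terminated before its expiration date.
symsnapvx -sid 001 -sg StorageGroup1 -name sg1_secure establish -secure -delta 7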
NOTE: Unmount target volumes before issuing the relink command to ensure that the host operating system does not
cache any filesystem data. If accessing through VPLEX, ensure that you follow the procedure outlined in the technical note
VPLEX: Leveraging Array Based and Native Copy Technologies, available on the Dell EMC support website.
Cascading snapshots
Presenting sensitive data to test or development environments often requires that the source of the data be disguised
beforehand. Cascaded snapshots provide this separation and disguise.
If no change to the data is required before presenting it to the test or development environments, there is no need to create a
cascaded relationship.
These snapshots share allocations to the same track image whenever possible while ensuring they each continue to represent a
unique point-in-time image of the source volume. Despite the space efficiency achieved through shared allocation to unchanged
data, additional capacity is required to preserve the pre-update images of changed tracks captured by each point-in-time
snapshot.
zDP includes the secure snap facility (see Secure snaps on page 68).
The process of implementing zDP has two phases — the planning phase and the implementation phase.
● The planning phase is done in conjunction with your Dell EMC representative who has access to tools that can help size the
capacity needed for zDP if you are currently a PowerMax or VMAX All Flash user.
● The implementation phase uses the following methods for z/OS:
○ A batch interface that allows you to submit jobs to define and manage zDP.
○ A zDP run-time environment that executes under SCF to create snapsets.
For details on zDP usage, refer to the TimeFinder SnapVX and zDP Product Guide. For details on zDP usage in z/TPF, refer to
the TimeFinder Controls for z/TPF Product Guide.
Snapshot policy
The Snapshot policy feature provides snapshot orchestration at scale (1024 snaps per storage group). The feature simplifies
snapshot management for standard and cloud snapshots.
Snapshots can be used to recover from data corruption, accidental deletion, or other damage, offering continuous data
protection. A large number of snapshots can be difficult to manage. The Snapshot policy feature provides an end-to-end
solution to create, schedule, and manage standard (local) and cloud snapshots.
A snapshot policy specifies the Recovery Point Objective (RPO): how often a snapshot is taken and how many snapshots are
retained. The snapshots may also be specified to be secure; secure snapshots cannot be terminated by users before their time
to live (TTL), which is derived from the policy's interval and maximum count, has expired. Up to four policies
can be associated with a storage group, and a snapshot policy can be associated with many storage groups.
The following rules apply to snapshot policies:
● The maximum number of snapshot policies that can be created on a storage system is 20. Multiple storage groups can be
associated with a snapshot policy.
● A maximum of four snapshot policies can be associated with an individual storage group.
● A storage group or device can have a maximum of 256 manual snapshots.
● A storage group or device can have a maximum of 1024 snapshots.
● The oldest unused snapshots are removed or recycled in accordance with the specified policy max_count value.
● When devices are added to a snapshot policy storage group, snapshot policies that apply to the storage group are applied to
the added devices.
SRDF 2-site solutions
The following table describes SRDF 2-site solutions.
Table 12. SRDF 2-site solutions (continued)
Solution highlights: SRDF/Cluster Enabler (CE)
● Integrates SRDF/S or SRDF/A with Microsoft Failover Clusters (MSCS) to automate or semi-automate site failover.
● Complete solution for restarting operations in cluster environments (MSCS with Microsoft Failover Clusters).
● Expands the range of cluster storage and management capabilities while ensuring full protection of the SRDF remote replication.
Site topology: Cluster hosts at Site A and Site B are connected through VLAN switches over an extended IP subnet and through Fibre Channel hubs/switches, with SRDF/S or SRDF/A links between the two arrays.
Solution highlights: SRDF and VMware Site Recovery Manager
● Completely automates storage-based disaster restart operations for VMware environments in SRDF topologies.
Site topology: A protection side and a recovery side, each running a vCenter and SRM Server with Solutions Enabler software, connected over IP networks.
a. In some circumstances, using SRDF/S over distances greater than 200 km may be feasible. Contact your Dell EMC
representative for more information.
SRDF multi-site solutions
The following table describes SRDF multi-site solutions.
Solution highlights: Concurrent SRDF
3-site disaster recovery and advanced multi-site business continuity protection.
● Data on the primary site is concurrently replicated to 2 secondary sites.
● Replication to remote site can use SRDF/S, SRDF/A, or adaptive copy.
Site topology: The R11 device at Site A replicates to an R2 device at Site B over SRDF/S and to an R2 device at Site C over adaptive copy.
Solution highlights: Cascaded SRDF
3-site disaster recovery and advanced multi-site business continuity protection.
Data on the primary site (Site A) is synchronously mirrored to a secondary site (Site B), and then asynchronously mirrored from the secondary site to a tertiary site (Site C).
Site topology: R1 (Site A) -> SRDF/S -> R21 (Site B) -> SRDF/A -> R2 (Site C).
Interfamily compatibility
SRDF supports connectivity between different operating environments and arrays. Arrays running PowerMaxOS can connect to
legacy arrays running older operating environments. In mixed configurations where arrays are running different versions, SRDF
features of the lowest version are supported.
PowerMax arrays can connect to:
● PowerMax arrays running PowerMaxOS
● VMAX 250F, 450F, 850F, and 950F arrays running HYPERMAX OS
● VMAX 100K, 200K, and 400K arrays running HYPERMAX OS
● VMAX 10K, 20K, and 40K arrays running Enginuity 5876 with an Enginuity ePack
NOTE: When you connect between arrays running different operating environments, limitations may apply. Information
about which SRDF features are supported, and applicable limitations for 2-site and 3-site solutions is in the SRDF
Interfamily Connectivity Information.
This interfamily connectivity allows you to add the latest hardware platform/operating environment to an existing SRDF
solution, enabling technology refreshes.
R1 and R2 devices
An R1 device is the member of the device pair at the source (production) site. R1 devices are generally Read/Write accessible to
the application host.
An R2 device is the member of the device pair at the target (remote) site. During normal operations, host I/O writes to the R1
device are mirrored over the SRDF links to the R2 device. In general, data on R2 devices is not available to the application host
while the SRDF relationship is active. In SRDF synchronous mode, however, an R2 device can be in Read Only mode that allows
a host to read from the R2.
In a typical environment:
● The application production host has Read/Write access to the R1 device.
● An application host connected to the R2 device has Read Only (Write Disabled) access to the R2 device.
Figure: R1 and R2 devices with open systems hosts. The production host has Read/Write access to the R1 device; an optional remote host has Read Only access to the R2 device; R1 data copies to R2 over the SRDF links.
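A hedged Solutions Enabler sketch of creating and establishing R1-R2 pairs for a storage group (the array ID, SRDF group number, and storage group names are placeholders; verify the createpair options for your release):

# Pair the devices in the local App1_SG with devices in the remote array's App1_SG,
# making the local devices the R1 side, and start copying R1 data to R2.
symrdf -sid 001 -sg App1_SG -rdfg 10 createpair -type R1 -remote_sg App1_SG -establish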
R11 devices
R11 devices operate as the R1 device for two R2 devices. Links to both R2 devices are active.
R11 devices typically occur in 3-site concurrent configurations where data on the R11 site is mirrored to two secondary (R2)
arrays:
Figure: R11 device. The R11 device at Site A (source) is concurrently mirrored to target R2 devices at Site B and Site C.
R21 devices
R21 devices have a dual role and are used in cascaded 3-site configurations where:
● Data on the R1 site is synchronously mirrored to a secondary (R21) site, and then
● Asynchronously mirrored from the secondary (R21) site to a tertiary (R2) site:
Figure: R21 device. The production host writes to the R1 device, which is mirrored to the R21 device and then to the R2 device over the SRDF links.
The R21 device acts as an R2 device that receives updates from the R1 device, and as an R1 device that sends updates to the R2
device.
When the R1->R21->R2 SRDF relationship is established, no host has write access to the R21 device.
In arrays that run Enginuity, the R21 device can be diskless. That is, it consists solely of cache memory and does not have any
associated storage device. It acts purely to relay changes in the R1 device to the R2 device. This capability requires the use of
thick devices. Systems that run PowerMaxOS or HYPERMAX OS contain thin devices only, so setting up a diskless R21 device is
not possible on arrays running those environments.
R22 devices
R22 devices:
● Have two R1 devices, only one of which is active at a time.
● Are typically used in cascaded SRDF/Star and concurrent SRDF/Star configurations to decrease the complexity and time
required to complete failover and failback operations.
● Enable recovery to occur without removing old SRDF pairs and creating new ones.
Dynamic device personalities
SRDF devices can dynamically swap “personality” between R1 and R2. After a personality swap:
● The R1 in the device pair becomes the R2 device, and
● The R2 becomes the R1 device.
Swapping R1/R2 personalities allows the application to be restarted at the remote site without interrupting replication if an
application fails at the production site. After a swap, the R2 side (now R1) can control operations while being remotely mirrored
at the primary (now R2) site.
An R1/R2 personality swap is not supported:
● If the R2 device is larger than the R1 device.
● If the device to be swapped is participating in an active SRDF/A session.
● In SRDF/EDP topologies diskless R11 or R22 devices are not valid end states.
● If the device to be swapped is the target device of any TimeFinder or Dell EMC Compatible flash operations.
Synchronous mode
Synchronous mode maintains a real-time mirror image of data between the R1 and R2 devices over distances up to 200 km (125
miles). Host data is written to both arrays in real time. The application host does not receive the acknowledgment until the data
has been stored in the cache of both arrays.
Asynchronous mode
Asynchronous mode maintains a dependent-write consistent copy between the R1 and R2 device over unlimited distances. On
receiving data from the application host, SRDF on the R1 side of the link writes that data to its cache. Also it batches the
data received into delta sets. Delta sets are transferred to the R2 device in timed cycles. The application host receives the
acknowledgment once data is successfully written to the cache on the R1 side.
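The mode of an existing SRDF group can be switched with Solutions Enabler; a minimal sketch (the array ID, storage group name, and group number are placeholders):

# Switch the device pairs in SRDF group 10 to asynchronous mode.
symrdf -sid 001 -sg App1_SG -rdfg 10 set mode async

# Switch them back to synchronous mode.
symrdf -sid 001 -sg App1_SG -rdfg 10 set mode sync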
SRDF groups
An SRDF group defines the logical relationship between SRDF devices and directors on both sides of an SRDF link.
Group properties
The properties of an SRDF group are:
● Label (name)
● Set of ports on the local array used to communicate over the SRDF links
● Set of ports on the remote array used to communicate over the SRDF links
● Local group number
● Remote group number
● One or more pairs of devices
The devices in the group share the ports and associated CPU resources of the port's directors.
Types of group
There are two types of SRDF group:
● Static: defined in the local array's configuration file.
● Dynamic: defined using SRDF management tools, with their properties stored in the array's cache memory.
On arrays running PowerMaxOS or HYPERMAX OS all SRDF groups are dynamic.
NOTE: Two or more SRDF links per SRDF group are required for redundancy and fault tolerance.
The relationship between the resources on a director (CPU cores and ports) varies depending on the operating environment.
SRDF consistency
Many applications, especially database systems, use dependent write logic to ensure data integrity. That is, each write operation
must complete successfully before the next can begin. Without write dependency, write operations could get out of sequence
resulting in irrecoverable data loss.
SRDF implements write dependency using the consistency group (also known as SRDF/CG). A consistency group consists of a
set of SRDF devices that use write dependency. For each device in the group, SRDF ensures that write operations propagate to
the corresponding R2 devices in the correct order.
However, if the propagation of any write operation to any R2 device in the group cannot complete, SRDF suspends propagation
to all of the group's R2 devices. This suspension maintains the integrity of the data on the R2 devices. While the R2 devices are
unavailable, SRDF continues to store write operations on the R1 devices. It also maintains a list of those write operations in
their time order. When all R2 devices in the group become available, SRDF propagates the outstanding write operations, in the
correct order, for each device in the group.
SRDF/CG is available for both SRDF/S and SRDF/A.
Data migration
Data migration is the one-time movement of data from one array to another. Once the movement is complete, the data is
accessed from the secondary array. A common use of migration is to replace an older array with a new one.
Dell EMC support personnel can assist with the planning and implementation of migration projects.
SRDF multisite configurations enable migration to occur in any of these ways:
● Replace R2 devices.
● Replace R1 devices.
● Replace both R1 and R2 devices simultaneously.
For example, this diagram shows the use of concurrent SRDF to replace the secondary (R2) array in a 2-site configuration:
Figure: Using concurrent SRDF to replace the secondary (R2) array (Arrays A, B, and C).
Here:
● The top section of the diagram shows the original, 2-site configuration.
● The lower left section of the diagram shows the interim, 3-site configuration with data being copied to two secondary arrays.
● The lower right section of the diagram shows the final, 2-site configuration where the new secondary array has replaced the
original one.
The Dell EMC SRDF Introduction contains more information about using SRDF to migrate data.
More information
Here are other Dell EMC documents that contain more information about the use of SRDF in replication and migration:
SRDF Introduction
SRDF and NDM Interfamily Connectivity Information
SRDF/Cluster Enabler Plug-in Product Guide
Using the Dell EMC Adapter for VMware Site Recovery Manager Technical Book
Dell EMC SRDF Adapter for VMware Site Recovery Manager Release Notes
SRDF/Metro
In traditional SRDF configurations, only the R1 devices are Read/Write accessible to the application hosts. The R2 devices are
Read Only and Write Disabled.
In SRDF/Metro configurations, however:
● Both the R1 and R2 devices are Read/Write accessible to the application hosts.
● Application hosts can write to both the R1 and R2 side of the device pair.
● R2 devices assume the same external device identity as the R1 devices. The identity includes the device geometry and
device WWN.
This shared identity means that R1 and R2 devices appear to application hosts as a single, virtual device across two arrays.
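A hedged sketch of creating an SRDF/Metro pairing for a storage group with Solutions Enabler (the names and group numbers are placeholders, and the -metro and -use_bias options are assumptions based on the standard createpair syntax; verify them for your release):

# Create the device pairs in Metro mode so that both sides become Read/Write accessible.
# Without a witness, -use_bias selects Device Bias as the resilience mechanism.
symrdf -sid 001 -sg App1_SG -rdfg 20 createpair -type R1 -remote_sg App1_SG -metro -establish -use_bias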
Deployment options
SRDF/Metro can be deployed in either a single, multipathed host environment or in a clustered host environment:
Figure: SRDF/Metro deployment options. In the multi-path environment, a single host has Read/Write access to both sides of the device pair; in the cluster environment, each cluster host has Read/Write access to both sides.
SRDF/Metro Resilience
If either of the devices in an SRDF/Metro configuration becomes Not Ready, or connectivity between the devices is lost,
SRDF/Metro must decide which side remains available to the application host. There are two mechanisms that SRDF/Metro can use:
Device Bias and Witness.
Device Bias
Device pairs for SRDF/Metro are created with a bias attribute. By default, the create pair operation sets the bias to the R1
side of the pair. That is, if a device pair becomes Not Ready (NR) on the SRDF link, the R1 (bias side) remains accessible
to the hosts, and the R2 (nonbias side) becomes inaccessible. However, if there is a failure on the R1 side, the host loses all
connectivity to the device pair. The Device Bias method cannot make the R2 device available to the host.
Witness
A witness is a third party that mediates between the two sides of a SRDF/Metro pair to help:
● Decide which side remains available to the host
● Avoid a "split brain" scenario when both sides attempt to remain accessible to the host despite the failure
The witness method allows for intelligently choosing on which side to continue operations when the bias-only method may not
result in continued host availability to a surviving, nonbiased array.
There are two forms of the Witness mechanism:
● Array Witness: The operating environment of a third array is the mediator.
● Virtual Witness (vWitness): A daemon running on a separate, virtual machine is the mediator.
When both sides run PowerMaxOS 5978, SRDF/Metro takes these criteria into account when selecting the side that remains
available to the hosts (in priority order):
1. The side that has connectivity to the application host (requires PowerMaxOS 5978.444.444 or later)
2. The side that has an SRDF/A DR leg
3. Whether the SRDF/A DR leg is synchronized
4. The side that has more than 50% of its RA or FA directors available
5. The side that is currently the bias side
The first of these criteria that one array has, and the other does not, stops the selection process. The side with the matched
criteria is the preferred winner.
Disaster recovery facilities
Devices in SRDF/Metro groups can simultaneously be in other groups that replicate data to a third, disaster recovery site. There
are two replication solutions. The solutions available in any SRDF/Metro configuration depend on the version of the operating
environment that the participating arrays run:
● Highly-available disaster recovery – in configurations that consist of arrays that run PowerMaxOS 5978.669.669 and later
● Independent disaster recovery – in configurations that run all supported versions of PowerMaxOS 5978 and HYPERMAX OS
5977
Figure: SRDF/Metro Smart DR configuration. The R11 device on Array A and the R21 device on Array B form the SRDF/Metro pair; each array has an SRDF/A or Adaptive Copy Disk leg to the R22 device on Array C, and only one of those legs is active at a time.
Notice that the device names differ from a standard SRDF/Metro configuration. This difference reflects the change in the
device functions when SRDF/Metro Smart DR is in operation. For instance, the R1 side of the SRDF/Metro on Array A now has
the name R11, because it is the R1 device to both the:
● R21 device on Array B in the SRDF/Metro configuration
● R22 device on Array C in the SRDF/Metro Smart DR configuration
Arrays A and B both have SRDF/Asynchronous or Adaptive Copy Disk connections to the DR array (Array C). However, only
one of those connections is active at a time (in this example the connection between Array A and Array C). The two SRDF/A
connections are known as the active and standby connections.
If a problem prevents Array A replicating data to Array C, the standby link between Array B and Array C becomes active and
replication continues. Array A and Array B keep track of the data replicated to Array C to enable replication and avoid data loss.
Independent disaster recovery
Devices in SRDF/Metro groups can simultaneously be part of device groups that replicate data to a third, disaster-recovery site.
Either or both sides of the Metro region can be replicated. An organization can choose whichever configuration suits its
business needs. The following diagram shows the possible configurations:
NOTE: When the SRDF/Metro session is using a witness, the R1 side of the Metro pair can change based on the witness
determination of the preferred side.
Figure: Independent disaster recovery configurations. Single-sided replication: one side of the SRDF/Metro pair (R11 or R21) replicates to an R2 device at Site C over SRDF/A or Adaptive Copy Disk. Double-sided replication: both sides of the SRDF/Metro pair replicate to R2 devices at their respective disaster recovery sites.
The device names differ from a stand-alone SRDF/Metro configuration. This difference reflects the change in the devices'
function when disaster recovery facilities are in place. For instance, when the R2 side is replicated to a disaster recovery site, its
name changes to R21 because it is both the:
● R2 device in the SRDF/Metro configuration
● R1 device in the disaster-recovery configuration
When an SRDF/Metro uses a witness for resilience protection, the two sides periodically renegotiate the winning and losing
sides. If the winning and losing sides switch as a result of renegotiation:
● An R11 device becomes an R21 device. That device was the R1 device for both the SRDF/Metro and disaster recovery
configurations. Now the device is the R2 device of the SRDF/Metro configuration but it remains the R1 device of the
disaster recovery configuration.
● An R21 device becomes an R11 device. That device was the R2 device in the SRDF/Metro configuration and the R1 device
of the disaster recovery configuration. Now the device is the R1 device of both the SRDF/Metro and disaster recovery
configurations.
More information
Here are other Dell EMC documents that contain more information on SRDF/Metro:
SRDF Introduction
SRDF/Metro vWitness Configuration Guide
SRDF Interfamily Connectivity Information
RecoverPoint
RecoverPoint is a comprehensive data protection solution designed to provide production data integrity at local and remote
sites. RecoverPoint also provides the ability to recover data from a point in time using journaling technology.
The primary reasons for using RecoverPoint are:
● Remote replication to heterogeneous arrays
● Protection against local and remote data corruption
● Disaster recovery
● Secondary device repurposing
● Data migrations
RecoverPoint systems support local and remote replication of data that applications are writing to SAN-attached storage.
The systems use existing Fibre Channel infrastructure to integrate seamlessly with existing host applications and data storage
subsystems. For remote replication, the systems use existing Fibre Channel connections to send the replicated data over a
WAN, or use Fibre Channel infrastructure to replicate data asynchronously. The systems provide failover of operations to a
secondary site in the event of a disaster at the primary site.
Previous implementations of RecoverPoint relied on a splitter to track changes made to protected volumes. The current
implementation relies on a cluster of RecoverPoint nodes, provisioned with one or more RecoverPoint storage groups, leveraging
SnapVX technology, on the storage array. Volumes in the RecoverPoint storage groups are visible to all the nodes in the cluster,
and available for replication to other storage arrays.
RecoverPoint allows data replication of up to 8,000 LUNs for each RecoverPoint cluster and up to eight different RecoverPoint
clusters attached to one array. Supported array types include PowerMax, VMAX All Flash, VMAX3, VMAX, VNX, VPLEX, and
XtremIO.
RecoverPoint is licensed and sold separately. For more information about RecoverPoint and its capabilities see the Dell EMC
RecoverPoint Product Guide.
Remote replication using eNAS
File Auto Recovery (FAR) allows you to manually failover or move a virtual Data Mover (VDM) from a source eNAS system to
a destination eNAS system. The failover or move leverages block-level SRDF synchronous replication, so it incurs zero data loss
in the event of an unplanned operation. This feature consolidates VDMs, file systems, file system checkpoint schedules, CIFS
servers, networking, and VDM configurations into their own separate pools. This feature works for a recovery where the source
is unavailable. For recovery support in the event of an unplanned failover, there is an option to recover and clean up the source
system and make it ready as a future destination.
10
Cloud Mobility
This chapter introduces cloud mobility.
Topics:
• Cloud Mobility for Dell EMC PowerMax
Management of Cloud Mobility is performed on the Cloud Mobility Dashboard in Unisphere for PowerMax. Through the
dashboard you can view cloud system alerts, configure and manage cloud snapshot policies, and view performance metrics
for the cloud provider, as well as other operations. For details see the Unisphere Online Help.
PowerMax Cloud Mobility functionality allows you to move snapshots off the storage system and on to the cloud. Snapshots can
also be restored back to the original storage system.
For more information see the Cloud Mobility for Dell EMC PowerMax White Paper.
11
Blended local and remote replication
This chapter introduces TimeFinder integration with SRDF.
Topics:
• Integration of SRDF and TimeFinder
• R1 and R2 devices in TimeFinder operations
• SRDF/AR
• TimeFinder and SRDF/A
• TimeFinder and SRDF/S
NOTE: Some TimeFinder operations are not supported on devices that SRDF protects. The Dell EMC Solutions Enabler
TimeFinder SnapVX CLI User Guide has further information.
The rest of this chapter summarizes the ways of integrating SRDF and TimeFinder.
SRDF/AR
SRDF/AR combines SRDF and TimeFinder to provide a long-distance disaster restart solution. SRDF/AR can be deployed over 2
or 3 sites:
● In 2-site configurations, SRDF/DM is deployed with TimeFinder.
● In 3-site configurations, SRDF/DM is deployed with a combination of SRDF/S and TimeFinder.
The time to create the new replicated consistent image is determined by the time that it takes to replicate the deltas.
Figure 20. SRDF/AR 2-site solution
In this configuration, data on the SRDF R1/TimeFinder target device is replicated across the SRDF links to the SRDF R2 device.
The SRDF R2 device is also a TimeFinder source device. TimeFinder replicates this device to a TimeFinder target device. You
can map the TimeFinder target device to the host connected to the secondary array at Site B.
In a 2-site configuration, SRDF operations are independent of production processing on both the primary and secondary arrays.
You can utilize resources at the secondary site without interrupting SRDF operations.
Use SRDF/AR 2-site configurations to:
● Reduce required network bandwidth using incremental resynchronization between the SRDF target sites.
● Reduce network cost and improve resynchronization time for long-distance SRDF implementations.
Figure: SRDF/AR 3-site solution. Site A (R1) replicates over SRDF/S to Site B, where TimeFinder creates a copy that is replicated over SRDF adaptive copy to Site C, where TimeFinder maintains a further copy.
If Site A (primary site) fails, the R2 device at Site B provides a restartable copy with zero data loss. Site C provides an
asynchronous restartable copy.
If both Site A and Site B fail, the device at Site C provides a restartable copy with controlled data loss. The amount of data loss
is a function of the replication cycle time between Site B and Site C.
SRDF and TimeFinder control commands to R1 and R2 devices for all sites can be issued from Site A. No controlling host is
required at Site B.
Use SRDF/AR 3-site configurations to:
● Reduce required network bandwidth using incremental resynchronization between the secondary SRDF target site and the
tertiary SRDF target site.
● Reduce network cost and improve resynchronization time for long-distance SRDF implementations.
● Provide disaster recovery testing, point-in-time backups, decision support operations, third-party software testing, and
application upgrade testing or the testing of new applications.
Requirements/restrictions
In a 3-site SRDF/AR multi-hop configuration, SRDF/S host I/O to Site A is not acknowledged until Site B has acknowledged it.
This can cause a delay in host response time.
Overview
Data migration is a one-time movement of data from one array (the source) to another array (the target). Typical examples are
data center refreshes where data is moved from an old array after which that array is retired or re-purposed. Data migration is
not data movement due to replication (where the source data is accessible after the target is created) or data mobility (where
the target is continually updated).
After a data migration operation, applications that access the data reference it at the new location.
To plan a data migration, consider the potential impact on your business, including the:
● Type of data to be migrated
● Site location(s)
● Number of systems and applications
● Amount of data to be moved
● Business needs and schedules
PowerMaxOS provides migration facilities for:
● Open systems
● IBM System i
● Mainframe
Data migration for open systems
The data migration features available for open system environments are:
● Non-disruptive migration
● Open Replicator
● PowerPath Migration Enabler
● Data migration using SRDF/Data Mobility
● Space and zero-space reclamation
Non-Disruptive Migration
Non-Disruptive Migration (NDM) is a method for migrating data without application downtime. The migration takes place over a
metro distance, typically within a data center.
NDM Updates is a variant of NDM introduced in PowerMaxOS 5978.444.444. NDM Updates requires that the application
associated with the migrated data is shut down for part of the migration process. This is because NDM depends heavily on
the behavior of multipathing software to detect, enable, and disable paths, none of which is under the control of Dell EMC
(except for supported products such as PowerPath). NDM is the term that covers both non-disruptive and disruptive
migration.
Starting with PowerMaxOS 5978, there are two implementations of NDM, each for a different type of source array:
● Either:
○ PowerMax array running PowerMaxOS 5978
○ VMAX3 or VMAX All Flash array running HYPERMAX OS 5977.1125.1125 or later with an ePack
● VMAX array running Enginuity 5876 with an ePack
When migrating to a PowerMax array, these are the only configurations for the source array.
The SRDF Interfamily Connectivity Information lists the Service Packs and ePacks required for HYPERMAX OS 5977 and
Enginuity 5876. In addition, the NDM support matrix has information on array operating systems support, host support, and
multipathing support for NDM operations. The support matrix is available on the eLab Navigator.
Regulatory or business requirements for disaster recovery may require the use of SRDF/S replication to other arrays attached
to the source array, the target array, or both during the migration. In this case, refer to the SRDF Interfamily Connectivity
Information for the Service Packs and ePacks required for the SRDF/S configuration.
Figure 22. Configuration of a VMAX3, VMAX All Flash or PowerMax migration
Process
Normal flow
The steps in the normal migration process are:
1. Set up the migration environment – configure the infrastructure of the source and target array, in preparation for data
migration.
2. On the source array, select a storage group to migrate.
3. If using NDM Updates, shut down the application associated with the storage group.
4. Create the migration session, optionally specifying whether to move the identity of the LUNs in the storage group to the
target array – copy the content of the storage group to the target array using SRDF/Metro.
During this time the source and target arrays are both accessible to the application host.
5. When the data copy is complete:
a. If the migration session did not move the identity of the LUNs, reconfigure the application to access the new LUNs on
the target array.
b. Commit the migration session – remove resources from the source array and those used in the migration itself.
6. If using NDM Updates, restart the application.
7. To migrate further storage groups, repeat steps 2 to 6.
8. After migrating all the required storage groups, remove the migration environment.
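For Solutions Enabler users, the flow above maps onto the symdm command. The sketch below is illustrative only: the array IDs, storage group name, and option spellings are assumptions, and the authoritative syntax is in the Solutions Enabler documentation.
symdm environment -src_sid 0123 -tgt_sid 0456 -setup -nop
symdm create -src_sid 0123 -tgt_sid 0456 -sg ProdSG -nop
symdm list -src_sid 0123 -tgt_sid 0456
symdm commit -src_sid 0123 -tgt_sid 0456 -sg ProdSG -nop
symdm environment -src_sid 0123 -tgt_sid 0456 -remove -nop
The list action can be used to monitor the state of the data copy before committing. Migrations of further storage groups reuse the same environment until it is removed.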
Alternate flow
There is an alternative process that pre-copies the data to the target array before making it available to the application host.
The steps in this process are:
1. Set up the migration environment – configure the infrastructure of the source and target array, in preparation for data
migration.
2. On the source array, select a storage group to migrate.
3. Use the precopy facility of NDM to copy the selected data to the target array.
Optionally, specify whether to move the identity of the LUNs in the storage group to the target array.
While the data copy takes place, the source array is available to the application host, but the target array is unavailable.
4. When the copying of the data is complete, use the Ready Target facility in NDM to make the target array available to the
application host as well. Then:
a. If the migration session did not move the identity of the LUNs, reconfigure the application to access the new LUNs on
the target array.
b. If using NDM Updates, restart the application.
c. Commit the migration session – remove resources from the source array and those used in the migration itself. The
application now uses the target array only.
5. To migrate further storage groups, repeat steps 2 to 4.
6. After migrating all the required storage groups, remove the migration environment.
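Expressed in the same illustrative symdm terms (the precopy and Ready Target option names are assumptions), the alternate flow replaces the plain create with a precopy, followed by a Ready Target action once the copy is complete:
symdm create -src_sid 0123 -tgt_sid 0456 -sg ProdSG -precopy -nop
symdm readytgt -src_sid 0123 -tgt_sid 0456 -sg ProdSG -nop
symdm commit -src_sid 0123 -tgt_sid 0456 -sg ProdSG -nop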
Other functions
Other NDM facilities that are available for exceptional circumstances are:
● Cancel – to cancel a migration that has not yet been committed.
● Sync – to stop or start the synchronization of writes to the target array back to the source array. When stopped, the
application runs on the target array only. Used for testing.
● Recover – to recover a migration process following an error.
Other features
Other features of migrating from VMAX3, VMAX All Flash, or PowerMax to PowerMax are:
● Data can be compressed during migration to the PowerMax array
● Allows for nondisruptive revert to the source array
● There can be up to 50 migration sessions in progress simultaneously
● Does not require an additional license as NDM is part of PowerMaxOS
● The connections between the application host and the arrays use FC; the SRDF connection between the arrays uses FC or
GigE
Devices and components that cannot be part of an NDM process are:
● CKD devices
● eNAS data
● Storage Direct and FAST.X relationships along with their associated data
Process
The steps in the migration process are:
1. Set up the environment – configure the infrastructure of the source and target array, in preparation for data migration.
2. On the source array, select a storage group to migrate.
3. If using NDM Updates, shut down the application associated with the storage group.
4. Create the migration session – copy the content of the storage group to the target array using SRDF.
When creating the session, optionally specify whether to move the identity of the LUNs in the storage group to the target
array.
5. When the data copy is complete:
a. If the migration session did not move the identity of the LUNs, reconfigure the application to access the new LUNs on
the target array.
b. Cutover the storage group to the PowerMax array.
c. Commit the migration session – remove resources from the source array and those used in the migration itself. The
application now uses the target array only.
6. If using NDM Updates, restart the application.
7. To migrate further storage groups, repeat steps 2 to 6.
8. After migrating all the required storage groups, remove the migration environment.
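When the source array runs Enginuity 5876, the flow adds an explicit cutover before the commit. Again, this is only an illustrative sketch with assumed identifiers and option spellings:
symdm create -src_sid 0123 -tgt_sid 0456 -sg ProdSG -nop
symdm cutover -src_sid 0123 -tgt_sid 0456 -sg ProdSG -nop
symdm commit -src_sid 0123 -tgt_sid 0456 -sg ProdSG -nop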
Other features
Other features of migrating from VMAX to PowerMax are:
● Data can be compressed during migration to the PowerMax array
● Allows for nondisruptive revert to the source array
● There can be up to 50 migration sessions in progress simultaneously
● NDM does not require an additional license as it is part of PowerMaxOS
Storage arrays
● The eligible combinations of operating environments running on the source and target arrays are:
Source Targets
Management host
● Wherever possible, use a host system separate from the application host to initiate and control the migration (the control
host).
● The control host requires visibility of and access to both the source and target arrays.
Hot
The Control device is Ready (online) to the host while the copy operation is in progress.
Cold
The Control device is Not Ready (offline) to the host while the copy operation is in progress.
Pull
A pull operation copies data to the control device from the remote device(s).
Push
A push operation copies data from the control device to the remote device(s).
Pull operations
On arrays running PowerMaxOS, Open Replicator uses the Solutions Enabler SYMCLI symrcopy support for up to 4096 pull
sessions.
For pull operations, the volume can be in a live state during the copy process. The local hosts and applications can begin to
access the data as soon as the session begins, even before the data copy process has completed.
These features enable rapid and efficient restoration of remotely vaulted volumes and migration from other storage platforms.
Copy on First Access ensures the appropriate data is available to a host operation when it is needed. The following image shows
an Open Replicator hot pull.
The pull can also be performed in cold mode to a static volume. The following image shows an Open Replicator cold pull.
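Open Replicator sessions are created and controlled with the Solutions Enabler symrcopy command mentioned above. The sequence below is an illustrative sketch of a hot pull: the device-pair file, session name, and exact option spellings are assumptions, and the Solutions Enabler documentation has the definitive syntax.
symrcopy create -file device_pairs.txt -pull -hot -copy -name or_hot_pull -nop
symrcopy activate -file device_pairs.txt -nop
symrcopy query -file device_pairs.txt
symrcopy terminate -file device_pairs.txt -nop
A cold pull uses -cold in place of -hot, with the control device Not Ready to the host for the duration of the copy.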
Disaster Recovery
When the control array runs PowerMaxOS, it can also be the R1 side of an SRDF configuration. That configuration can use
SRDF/A, SRDF/S, or Adaptive Copy Mode to provide data protection during and after the data migration.
Volume-level data migration facilities move logical volumes in their entirety. z/OS Migrator volume migration is performed on a
track-for-track basis without regard to the logical contents of the volumes involved. Volume migrations end in a volume swap,
which is entirely non-disruptive to any applications using the data on the volumes.
Volume migrator
Volume migration provides host-based services for data migration at the volume level on mainframe systems. It provides
migration from third-party devices to devices on Dell EMC arrays as well as migration between devices on Dell EMC arrays.
Volume mirror
Volume mirroring provides mainframe installations with volume-level mirroring from one device on a Dell EMC array to another. It
uses host resources (UCBs, CPU, and channels) to monitor channel programs that are scheduled to write to a specified primary
volume and to clone them so that they also write to a specified target volume (called a mirror volume).
After achieving a state of synchronization between the primary and mirror volumes, Volume Mirror maintains the volumes in a
fully synchronized state indefinitely, unless interrupted by an operator command or by an I/O failure to a Volume Mirror device.
Mirroring is controlled by the volume group. Mirroring may be suspended consistently for all volumes in the group.
Introduction
ODE enables a storage administrator to provide more capacity on a storage device while it remains online to its application.
This particularly benefits organizations where applications need to remain available permanently. If a device associated with an
application runs low on space, the administrator can increase its capacity without affecting the availability and performance of
the application.
Standalone devices, devices in an SRDF configuration, and devices in an LREP configuration can all be expanded using ODE.
General features
Features of ODE that are applicable to stand-alone, SRDF, and LREP devices are:
● ODE is available for both FBA and CKD devices.
● ODE operates on thin devices (TDEVs).
● A device can be expanded to a maximum capacity of 64 TB (1,182,006 cylinders for a CKD device).
● A device can only be expanded.
There are no facilities for reducing the capacity of a device.
● During expansion, a device is locked.
This prevents operations such as adding a device to an SRDF configuration until the expansion is complete.
● An administrator can expand the capacity of multiple devices using one management operation.
A thin device presents a given capacity to the host, but consumes only the physical storage necessary to hold the data that the
host has written to the device (Thin devices (TDEVs) on page 57 has more information). Increasing the capacity of a device
using ODE does not allocate any additional physical storage. Only the configured capacity of the device as seen by the host
increases.
Failure of an expansion operation for a stand-alone, SRDF, or LREP device may occur because:
● The device does not exist.
● The device is not a TDEV.
● The requested capacity is less than the current capacity.
● The requested capacity is greater than 64 TB.
● There is insufficient space in the storage system for expansion.
● There are insufficient PowerMax internal resources to accommodate the expanded device.
● Expanding the device to the requested capacity would exceed the oversubscription ratio of the physical storage.
● A reclaim, deallocation, or free-all operation is in progress on the device.
Standalone devices
The most basic form of device expansion is of a device that is associated with a host application and is not part of a SRDF or
LREP configuration. Additional features of ODE in this environment are:
● ODE can expand vVols in addition to TDEVs.
vVols are treated as a special type of TDEV.
● ODE for a standalone device is available in PowerMaxOS 5978, HYPERMAX OS 5977.691.684 or later (for FBA devices), and
HYPERMAX OS 5977.1125.1125 or later (for CKD devices).
Each expansion operation returns a status that indicates whether the operation succeeded or not. The status of an operation to
expand multiple devices can indicate a partial success. In this case at least one of the devices was successfully expanded but
one or more others failed.
Another reason why an expansion operation might fail is if the device is not a vVol.
SRDF devices
PowerMaxOS 5978 introduces online device expansion for SRDF configurations. The administrator can expand the capacity of
thin devices in an SRDF relationship without any service disruption in a similar way to expanding stand-alone devices.
Devices in an asynchronous, synchronous, adaptive copy mode, SRDF/Metro, SRDF/Star (mainframe only), or SRDF/SQAR
(mainframe only) configuration are all eligible for expansion. However, this feature is not available in RecoverPoint, Storage
Direct, NDM, or NDM Updates configurations.
Also, device expansion is available only on storage arrays in an SRDF configuration that run PowerMaxOS (PowerMaxOS
5978.444.444 or later for SRDF/Metro) on both sides. Any attempt to expand an SRDF device in a system that runs an older
operating environment fails.
Other features of ODE in an SRDF environment are for expanding:
● An individual device on either the R1 or R2 side
● An R1 device and its corresponding device on the R2 side in one operation
● A range of devices on either the R1 or R2 side
● A range of devices on the R1 side and their corresponding devices on the R2 side in one operation
● A storage group on either the R1 or R2 side
● A storage group on the R1 side and its corresponding group on the R2 side in one operation
NOTE: An SRDF/Metro configuration does not allow the expansion of devices on one side only. Both sides, whether it is a
device, a range of devices, or a storage group, must be expanded in one operation.
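For example, an R1 device and its R2 partner can be expanded in one operation by naming the SRDF group on the symdev modify command that is described under Solutions Enabler later in this chapter. The array, device, and group identifiers here are illustrative only:
symdev modify -sid 085 -tdev -cap 2 -captype tb -devs 01A2 -rdfg 33 -nop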
Basic rules of device expansion are:
● The R1 side of an SRDF pair cannot be larger than the R2 side.
● In an SRDF/Metro configuration, both sides must be the same size.
When both sides are available on the SRDF link, Solutions Enabler, Mainframe Enablers, and Unisphere (the tools for managing
ODE) enforce these rules. When either device is not available on the SRDF link, the management tools allow you to make the R1
larger than the R2. However, before the devices can be made available on the link, the capacity of the R2 must increase to at
least the capacity of the R1 device.
Similar considerations apply to multiple site configurations:
● Cascaded SRDF: The size of R1 must be less than or equal to the size of R21. The size of R21 must be less than or equal to
the size of R2.
● Concurrent SRDF: The size of R11 must be less than or equal to the size of both R2 devices.
Other reasons why an expansion operation may fail in an SRDF environment are:
● One or more of the devices is on a storage system that does not run PowerMaxOS 5978 (or PowerMaxOS 5978.444.444 or
later for SRDF/Metro).
● One or more of the devices is a vVol.
● One or more devices are part of a Storage Direct, RecoverPoint, NDM, or MDM configuration.
LREP devices
PowerMaxOS 5978 also introduces online device expansion for LREP (local replication) configurations. As with stand-alone and
SRDF devices, this means that an administrator can increase the capacity of thin devices that are part of an LREP relationship
without any service disruption. Devices eligible for expansion are those that are part of:
● SnapVX sessions
● Legacy sessions that use CCOPY, SDDF, or Extent
ODE is not available for:
● SnapVX emulations such as Clone, TimeFinder Clone, TimeFinder/Mirror, TimeFinder/Snap, and VP Snap
● RecoverPoint and Storage Direct devices
● vVols
● PPRC
This is to maintain compatibility with the limitations that IBM places on expanding PPRC devices.
By extension, ODE is not available for a product that uses any of these technologies. For example, it is not available for Remote
Pair FlashCopy since that uses PPRC.
Other ODE features in an LREP environment are:
● Expand SnapVX source or target devices.
● Snapshot data remains the same size.
● The ability to restore a smaller snapshot to an expanded source device.
● Target link and relink operations depend on the size of the source device when the snapshot was taken, not its size after
expansion.
There are additional reasons for an ODE operation to fail in an LREP environment. For instance, when the LREP configuration
uses one of the excluded technologies.
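As an illustration of the SnapVX case (the storage group, snapshot name, and device number are hypothetical), a source device can be expanded after a snapshot is taken, and the snapshot itself keeps its original size:
symsnapvx -sid 085 -sg ProdSG establish -name daily -nop
symdev modify -sid 085 -tdev -cap 2 -captype tb -devs 01A2 -nop
A later restore of the smaller snapshot to the expanded source device is still allowed, as noted above.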
Solutions Enabler
Use the symdev modify command in Solutions Enabler to expand one or more devices. Some features of this command are:
● Use the -cap option to specify the new capacity for the devices.
Use the -captype option with -cap to specify the units of the new capacity. The available units are cylinders, MB, GB, and
TB.
● Use the -devs option to define the devices to expand. The argument for this option consists of a single device identifier, a
range of device identifiers, or a list of identifiers. Each element in the list can be a single device identifier or a range of device
identifiers.
● Use the -rdfg option to specify the SRDF group of the devices to be expanded. Inclusion of this option indicates that both
sides of the SRDF pair associated with the group are to be expanded in a single operation.
The Dell EMC Solutions Enabler Array Controls and Management CLI User Guide has details of the symdev modify command,
its syntax and its options.
Example:
● Expand devices 007D2 through 007D5, together with their SRDF pair devices in SRDF group 33, on array 85 to a capacity of 1000 MB:
symdev modify -sid 85 -tdev -cap 1000 -captype mb -devs 007D2:007D5 -v -rdfg 33 -nop
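As a further illustration (the device number is hypothetical), a single standalone thin device on the same array can be expanded to 4 TB:
symdev modify -sid 85 -tdev -cap 4 -captype tb -devs 01A2 -nop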
Unisphere
Unisphere provides facilities to increase the capacity of a device, a range of devices (FBA only), and SRDF pairs. The available
units for specifying the new device capacity are cylinders, GB, and TB. The Unisphere Online Help has details on how to select
and expand devices.
For example, this is the dialog for expanding a standalone device:
Mainframe Enablers
Mainframe Enablers provides the DEV,EXPAND command in the Symmetrix Control Facility (SCF) to increase the capacity of a
device. Some features of this command are:
● Use the DEVice parameter to specify a single device or a range of devices to expand.
● Use the CYLinders parameter to specify the new capacity of the devices, in cylinders.
● Use the RDFG parameter to specify the SRDF group associated with the devices and so expand the R1 and R2 devices in a
single operation.
The Dell EMC Mainframe Enablers ResourcePak Base for z/OS Product Guide has details of the DEV,EXPAND command and
its parameters.
Example:
Expand device 8013 to 1150 cylinders:
DEV,EXPAND,DEV(8013),CYL(1150)
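Where the device is in an SRDF relationship, adding the RDFG parameter expands the R1 and R2 devices together. The group number below is illustrative only, and the exact parameter syntax is in the ResourcePak Base guide:
DEV,EXPAND,DEV(8013),CYL(1150),RDFG(33)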
The higher a role is in the hierarchy, the more permissions, and hence capabilities, it has.
Monitor
The Monitor role allows a user to use show, list, and view operations to monitor a system.
Allowed operations
Examples of the operations that the Monitor role allows are:
● View array information
● View masking objects (storage groups, initiator groups, port groups, and masking views)
● View device information
● View the RBAC rules defined on this array.
This is available only when the Secure Reads policy is not in effect. Secure Reads policy on page 119 has more information
about the Secure Reads policy and its management.
Prevented operations
The Monitor role does not allow the user to view:
● Security-related data such as array ACLs and the array's Audit Log file
● The RBAC roles defined on this system, when the Secure Reads policy is in effect
PerfMonitor
The PerfMonitor role allows a user to configure performance alerts and thresholds in Unisphere. The PerfMonitor role also has
the permissions of the Monitor role.
Auditor
The Auditor role allows a user to view the security settings on a system.
Prevented operations
The Auditor role does not allow the user to modify any security setting.
DeviceManage
The DeviceManage role allows a user to configure and manage devices.
Allowed operations
Examples of operations that the DeviceManage role allows are:
● Control operations on devices, such as Ready, Not-Ready, Free
● Configuration operations on devices, such as setting names, or setting flags
● Link, Unlink, Relink, Set-Copy, and Set-NoCopy operations on SnapVX link devices
● Restore operations to SnapVX source devices
This is available only when the user also has the LocalRep role.
When the role is restricted to one or more storage groups, it allows these operations on the devices in those groups only.
The DeviceManage role also has the permissions of the Monitor role.
Prevented operations
The DeviceManage role does not allow the user to create, expand, or delete devices. However, if the role is associated with a
storage group, those operations are allowed on the devices within the group.
LocalRep
The LocalRep role allows the user to carry out local replication using SnapVX, or the legacy operations of Snapshot, Clone, and
BCV.
Allowed operations
Examples of operations that the LocalRep role allows are:
● Create, manage, and delete SnapVX snapshots
For operations that result in changes to the contents of any device, the user may also need the DeviceManage role:
● SnapVX restore operations require both the LocalRep and DeviceManage roles.
● SnapVX Link, Unlink, Relink, Set-Copy, and Set-NoCopy operations require the DeviceManage role on the link devices and
the LocalRep role on the source devices.
When the role is restricted to one or more storage groups, it allows all these operations on the devices within those groups only.
The LocalRep role also has the permissions of the Monitor role.
RemoteRep
The RemoteRep role allows a user to carry out remote replication using SRDF.
Allowed operations
Examples of operations that the RemoteRep role allows are:
● Create, manage, and delete SRDF device pairs
When the role is restricted to storage groups, it allows these operations on devices within those groups only.
● Set attributes that are not associated with SRDF/A on a SRDF group
This is available only if the role is applied to the entire array.
When the role is restricted to one or more storage groups, it allows these operations on the devices in those groups only.
The RemoteRep role also has the permissions of the Monitor role.
Prevented operations
The RemoteRep role does not allow the user to:
● Create and delete SRDF groups
● Set attributes that are not associated with SRDF/A on a SRDF group when the role is restricted to a set of storage groups
StorageAdmin
The StorageAdmin role allows a user to perform any storage operation, except those related to security.
Allowed operations
Examples of operations that the StorageAdmin role allows are:
● Perform array configuration operations
● Provision storage
● Delete storage
● Create, modify, and delete masking objects (storage groups, initiator groups, port groups, and masking views)
● Create and delete Secure SnapVX Snapshots
● Any operation allowed for the LocalRep, RemoteRep, and DeviceManage roles
This role also has the permissions of the LocalRep, RemoteRep, DeviceManage, and Monitor roles.
SecurityAdmin
The SecurityAdmin role allows a user to view and modify the system security settings.
Allowed operations
Operations that the SecurityAdmin role allows are:
● Modify the array's ACL settings
● Modify the RBAC rules and settings
The SecurityAdmin role also has the permissions of the Auditor and Monitor roles.
In force
Users with the Admin, SecurityAdmin, or Auditor roles can view all RBAC rules on the array. All other users can only see the
rules that either apply to them, or that assign a role of Admin or SecurityAdmin to someone.
Not in force
All users, no matter what role they have, can view all RBAC rules in the array. This is the default setting for the policy.
Policy management
Both the Solutions Enabler SYMCLI and Unisphere provide facilities for controlling whether the policy is in force.
Lockbox
Solutions Enabler uses a Lockbox to store and protect sensitive information. The Lockbox is associated with a particular host.
This association prevents the Lockbox from being copied to a second host and used to obtain access.
The Lockbox is created at installation. The installer prompts the user to provide a password for the Lockbox; if no password is
provided, a default password is generated and used with the Stable System Values (SSVs, a fingerprint that uniquely identifies
the host system). For more information about the default password, see Default Lockbox password on page 120.
Client/server communications
All communications between clients and hosts use SSL to help ensure data security.
Environmental errors
The following table lists the environmental errors in SIM format for PowerMaxOS 5978 or later.
Operator messages
Error messages
On z/OS, SIM messages are displayed as IEA480E Service Alert Error messages. They are formatted as shown below:
Figure 30. z/OS IEA480E service alert error message format (Disk Adapter failure)
Figure 31. z/OS IEA480E service alert error message format (SRDF Group lost/SIM presented against unrelated
resource)
Event messages
The storage array also reports events to the host and to the service processor. These events are:
● The mirror-2 volume has synchronized with the source volume.
● The mirror-1 volume has synchronized with the target volume.
● Device resynchronization process has begun.
On z/OS, these events are displayed as IEA480E Service Alert Error messages. They are formatted as shown below:
Figure 32. z/OS IEA480E service alert error message format (mirror-2 resynchronization)
Figure 33. z/OS IEA480E service alert error message format (mirror-1 resynchronization)
eLicensing
Arrays running PowerMaxOS use Electronic Licenses (eLicenses).
NOTE: For more information on eLicensing, refer to Dell EMC Knowledgebase article 335235 on the Dell EMC Online
Support website.
You obtain license files from Dell EMC Online Support, copy them to a Solutions Enabler or a Unisphere host, and push them out
to your arrays. The following figure illustrates the process of requesting and obtaining your eLicense.
NOTE: To install array licenses, follow the procedure described in the Solutions Enabler Installation Guide and Unisphere
Online Help.
Each license file fully defines all of the entitlements for a specific system, including the license type and the licensed capacity.
To add a feature or increase the licensed capacity, obtain and install a new license file.
Most array licenses are array-based, meaning that they are stored internally in the system feature registration database on the
array. However, there are a number of licenses that are host-based.
Array-based eLicenses are available in the following forms:
● An individual license enables a single feature.
● A license suite is a single license that enables multiple features. License suites are available only if all features are enabled.
● A license pack is a collection of license suites that fit a particular purpose.
To view effective licenses and detailed usage reports, use Solutions Enabler, Unisphere, Mainframe Enablers, Transaction
Processing Facility (TPF), or IBM i platform console.
Capacity measurements
Array-based licenses include a capacity licensed value that defines the scope of the license. The method for measuring this
value depends on the license's capacity type (Usable or Registered).
Not all product titles are available in all capacity types, as shown below.
Usable capacity
Usable Capacity is defined as the amount of storage available for use on an array. The usable capacity is calculated as the sum
of all Storage Resource Pool (SRP) capacities available for use. This capacity does not include any external storage capacity.
Registered capacity
Registered capacity is the amount of user data managed or protected by each particular product title. It is independent of the
type or size of the disks in the array.
The method for measuring registered capacity depends on whether the licenses are part of a bundle or individual.
Open systems licenses
This section details the licenses available in an open system environment.
License packages
This table lists the license packages available in an open systems environment.
Table 17. PowerMax license packages (continued)
License suite – Includes – Allows you to – With the command
(continuation of the preceding suite) – Allows you to:
● Set the dynamic-SRDF capable attribute on devices
● Create SAVE devices
SRDF/Metro – Allows you to:
● Place new SRDF device pairs into an SRDF/Metro configuration
● Synchronize device pairs
SRM – Allows you to:
● Automate storage provisioning and reclamation tasks to improve operational efficiency
Individual licenses
These items are available for arrays running PowerMaxOS and are not in any of the license suites:
Ecosystem licenses
These licenses do not apply to arrays:
Events and Retention Suite – allows you to:
● Protect data from unwanted changes, deletions, and malicious activity.
● Encrypt data where it is created for protection anywhere outside the server.
● Maintain data confidentiality for selected data at rest and enforce retention at the file level to meet compliance requirements.
● Integrate with third-party anti-virus checking, quota management, and auditing applications.
Table 20. PowerMax Mainframe software packaging options (PowerMax 8000 only)
Feature – zEssentials package (include or options) – zPro package (included or options) – Notes
PowerMaxOS – Yes – Yes
Embedded Management – Yes – Yes – Includes Unisphere for PowerMax, REST APIs, SMI-S
Local Replication – Yes – Yes – Includes TimeFinder SnapVX, Compatible Flash (FlashCopy support)
Mainframe Essentials – Yes – Yes – Includes Compatible High Performance FICON (zHPF) and Compatible PAV (Dynamic, Hyper, and SuperPAV) support
Remote Replication Suite a – Yes – Yes – Includes SRDF/S/A/STAR, Mirror Optimizer, Compatible Peer (PPRC)
Unisphere 360 – Yes – Yes
AutoSwap – Yes – Yes
D@RE b – Yes – Yes
zDP – Yes – Yes
Mainframe Essentials Plus – Yes – Yes – zBoost PAV Optimizer
GDDR c – Yes – Yes
a. Software packages include software licensing. Order any additional required hardware separately.
b. Factory configured. Must be enabled during the ordering process.
c. Use of SRDF/STAR for mainframe requires GDDR.
Index
C
cloud mobility 91
G
Global mirror 52
S
Snapshot policy 71
T
TCT environment 53
Transparent Cloud Tiering 53