Managing Replication Solutions v8.x
TSI2564
© Hitachi Data Systems Corporation 2015. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Innovate With Information is a trademark or
registered trademark of Hitachi Data Systems Corporation. All other trademarks, service marks, and company names are properties of their respective owners.
Contents
Introduction ........................................................................................................ ix
Welcome and Introductions .......................................................................................................................ix
Course Description ................................................................................................................................... x
Prerequisites ............................................................................................................................................ x
Course Objectives ....................................................................................................................................xi
Course Topics ..........................................................................................................................................xi
Learning Paths ........................................................................................................................................ xii
Resources: Product Documents ............................................................................................................... xiii
Collaborate and Share ............................................................................................................................ xiv
Social Networking — Academy’s Twitter Site .............................................................................................. xv
Configuring the Environment .................................................................................................................. 2-4
Launching Hitachi Command Suite .......................................................................................................... 2-4
Registering Information Sources ............................................................................................................. 2-6
Refreshing Configuration from Information Sources .................................................................................. 2-8
Information Refresh in Replication Manager ............................................................................................. 2-9
Refreshing Information from Pair Management Servers .......................................................................... 2-10
Users and Permissions ......................................................................................................................... 2-12
Managing Users and User Permissions ................................................................................................... 2-12
Adding Users and Assigning Permissions................................................................................................ 2-14
Managing Security ............................................................................................................................... 2-14
Sites .................................................................................................................................................. 2-15
Sites Overview .................................................................................................................................... 2-15
Example of Two Data Centers — Use Case ............................................................................................ 2-16
Site Example ....................................................................................................................................... 2-18
Site Properties .................................................................................................................................... 2-18
Setting Up Sites .................................................................................................................................. 2-19
Resource Groups ................................................................................................................................. 2-21
Resource Groups Overview................................................................................................................... 2-21
Sites and Resource Group Relationship .................................................................................................. 2-23
Example of Two Data Centers – Use Case ............................................................................................. 2-24
Resource Group Function ..................................................................................................................... 2-26
Resource Groups ................................................................................................................................. 2-27
Resource Group Properties ................................................................................................................... 2-31
Instructor Demonstration ..................................................................................................................... 2-32
Module Summary ................................................................................................................................ 2-33
Module Review .................................................................................................................................... 2-34
Module Summary ................................................................................................................................ 3-12
Module Review .................................................................................................................................... 3-12
Hitachi Universal Replicator Hardware ..................................................................................................... 8-4
Hitachi Universal Replicator Components ................................................................................................. 8-5
Hitachi Universal Replicator Specifications................................................................................................ 8-6
Hitachi Universal Replicator Usage .......................................................................................................... 8-7
Base Journal (Initial Copy) ..................................................................................................................... 8-8
Update Journal (Update Copy) ................................................................................................................ 8-9
Journal Restore ................................................................................................................................... 8-10
Hitachi Universal Replicator Configurations ............................................................................................ 8-10
Three Data Center Configuration .......................................................................................................... 8-11
Hitachi Universal Replicator Operations ................................................................................................. 8-13
Setting Up Remote Paths ..................................................................................................................... 8-14
Setting Up Journal Groups.................................................................................................................... 8-16
Managing Pairs ................................................................................................................................... 8-22
Demonstration .................................................................................................................................... 8-24
Module Summary ................................................................................................................................ 8-25
Module Review .................................................................................................................................... 8-26
Create Replica Wizard .......................................................................................................................... 10-9
Restoring Replicas ............................................................................................................................... 10-9
Restoring Replica ...............................................................................................................................10-10
Mounting or Unmounting Replica .........................................................................................................10-10
Module Summary ...............................................................................................................................10-11
Module Review ...................................................................................................................................10-11
Your Next Steps .................................................................................................................................10-12
Introduction
Welcome and Introductions
Participant introductions
• Name
• Position
• Experience
• Your expectations
Course Description
Prerequisites
Prerequisite Courses
• TSI2565 – Operating and Managing Hitachi Storage with Hitachi Command
Suite v8.x
Other Prerequisites
• Experience working with servers (Windows or UNIX)
• Understanding of basic storage/SAN concepts
Course Objectives
Course Topics
Learning Paths
Available on:
• HDS.com (for customers)
• Partner Xchange (for partners)
• theLoop (for employees)
Customers
Partners
https://portal.hds.com/index.php?option=com_hdspartner&task=displayWebPage&menuName=PX_PT_PARTNER_EDUCATION&WT.ac=px_rm_ptedu
Employees
http://loop.hds.com/community/hds_academy
Please contact your local training administrator if you have any questions regarding Learning
Paths or visit your applicable website.
Resources: Product Documents
• Google Search
Resource Library
http://www.hds.com/corporate/resources/?WT.ac=us_inside_rm_reslib
Google Search
• Document name
• Any keywords about the product you are looking for
o If the keywords appear in the product documents, Google will find the resource
o For example, if you search Google for System Mode Options for VSP G1000, the topic is covered in the user guide, so the document will come up on Google
Collaborate and Share
https://community.hds.com/welcome
http://loop.hds.com/community/hds_academy?view=overview
Social Networking — Academy’s Twitter Site
Twitter site URL: http://www.twitter.com/HDSAcademy
1. Hitachi Replication Manager Overview
Module Objectives
Customer Challenges
• Large organizations can dedicate resources to replication management, but smaller organizations depend on general Information Technology (IT) staff, which requires training those individuals and adds cost
Centralized Enterprise-wide Replication Management
[Diagram: Replication Manager centrally managing Copy-on-Write Snapshot, Thin Image, ShadowImage, TrueCopy, Universal Replicator, and Business Continuity Manager across primary and secondary sites, with provisioning and CCI/HORCM components]
For customers who leverage the in-system or distance replication capabilities of their storage arrays, Hitachi Replication Manager is the software tool that configures, monitors, and manages Hitachi storage array-based replication products for both open systems and mainframe environments in a way that simplifies and optimizes the:
• Configuration
• Operations
• Task management and automation
• Monitoring of the critical storage components of the replication infrastructure
Copy-on-Write = Hitachi Copy-on-Write Snapshot
Thin Image = Hitachi Thin Image
ShadowImage = Hitachi ShadowImage Heterogeneous Replication
TrueCopy = Hitachi TrueCopy Heterogeneous Remote Replication bundle
Universal Replicator = Hitachi Universal Replicator
Business Continuity Manager = Hitachi Business Continuity Manager
CCI = Command Control Interface
HORCM = Hitachi Open Remote Copy Manager (name of CCI executable)
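Although Replication Manager drives these operations through its GUI, the underlying CCI/HORCM layer is configured with a HORCM definition file on each pair management host. The sketch below is illustrative only: the poll and service values, device group name, port, and host names are hypothetical and are not taken from this course.

```
# horcm0.conf — minimal illustrative HORCM instance definition (values are hypothetical)

HORCM_MON
# ip_address   service   poll(10ms)   timeout(10ms)
localhost      11000     1000         3000

HORCM_CMD
# dev_name (command device visible to this host)
/dev/sdc

HORCM_DEV
# dev_group   dev_name   port#   TargetID   LU#
dbgroup       dev01      CL1-A   0          1

HORCM_INST
# dev_group   remote_HORCM_host   service
dbgroup       remote-host         11001
```

With such a file in place, CCI commands such as `pairdisplay -g dbgroup` report the status of the pairs in the group; as noted later in this module, Replication Manager relies on the same RAID Manager (CCI) mechanism for watching pair status.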
Replication Manager Overview
Replication Manager
• Configures, monitors, and manages Hitachi replication products for open systems
and mainframe environments
• Replication configuration management
Enables users to set up all Hitachi replication products without requiring other
tools, for both local and remote storage systems
• Application-aware backups
MS SQL Server and Exchange
• Multiple user design and role-based user access control
Achieves stringent access control for multiple users
• Task management
Allows scheduling and automation of the configuration of replicated data volume
pairs
Hitachi Replication Manager configures, monitors and manages Hitachi replication products on
both local and remote storage systems. For both open systems and mainframe environments,
Replication Manager simplifies and optimizes the configuration and monitoring, operations, task
management and automation for critical storage components of the replication infrastructure.
Users benefit from a uniquely integrated tool that allows them to better control recovery point
objectives (RPOs) and recovery time objectives (RTOs).
Graphical User Interface
[Screenshot: Replication Manager GUI with the Explorer menu, Dashboard menu, object tree, and application area labeled]
The Replication Manager GUI is consistent with other Hitachi Command Suite products:
• Global tasks bar area contains menus and action buttons for Replication Manager
functions, and also contains information about the logged-in user.
• Explorer menu is the Replication Manager operations menu. This menu comprises
multiple drawers with options. When a menu option is chosen, the appropriate
information is displayed in the navigation area and the application area.
• Dashboard menu displays a list of Hitachi Command Suite products on the same
management server. You can launch products using the GO link.
• Object tree is a tree view displayed in the navigation area. Expand the tree for object
selection.
• Application area displays information for the item selected in the Explorer menu or
object tree.
Centralized Monitoring
Replication Manager provides the following four functional views that allow you to view pair
configurations and the status of the replication environment from different perspectives:
• Hosts view: This view lists open hosts and mainframe hosts and allows you to confirm
pair status summaries for each host.
• Storage Systems view: This view lists open and mainframe storage systems and allows you to confirm pair status summaries for each. A storage system serving both mainframe and open system pairs is recognized as two different resources to differentiate open copy pairs and mainframe copy pairs.
• Pair Configurations view: This view lists open and mainframe hosts managing copy pairs with CCI or BCM and allows you to confirm pair status summaries for each host. This view also provides a tree structure that follows the pair management structure.
• Applications view: This view lists the application and data protection status. This view
also provides a tree structure showing the servers and their associated objects (storage
groups, information stores, and mount points).
Replication Manager can send an alert when a monitored target, such as a copy pair or buffer,
satisfies a preset condition. The conditions that can be set include:
• Performance information
Alert notification is useful for enabling a quick response to a hardware failure or for determining
the cause of a degradation in transfer performance. Alert notifications are also useful for
preventing errors due to buffer overflow and insufficient copy licenses, thereby facilitating the
continuity of normal operation. Because you can receive alerts by email or SNMP traps, you can
also monitor the replication environment while you are logged out of Replication Manager.
Event logs
You can export Replication Manager management information to a file in CSV or HTML format.
Using the exported file, you can determine the cause of an error, establish corrective measures,
and analyze performance information. If necessary, you can edit the file or open it with another
application program. You can export a maximum of 20,000 data items at a time.
When you export management information, you can specify a time period to limit the amount
of information that will be exported. However, you can export only information whose data
retention period has not yet expired. The retention period can be managed by a user with the
Admin (Replication Manager management) permission.
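Because the export is a plain CSV or HTML file, it can be post-processed with ordinary tools. The snippet below is a sketch of filtering an exported CSV by a time period; the column names, values, and date format are hypothetical, since the real layout depends on the information type you export.

```python
import csv
import io
from datetime import datetime

# Hypothetical layout of an exported event-log CSV; real exports from
# Replication Manager may use different column names and formats.
SAMPLE = """timestamp,severity,message
2015-03-01 10:00:00,Error,Copy pair suspend detected
2015-03-01 10:05:00,Info,Refresh completed
2015-03-02 09:30:00,Warning,Journal usage above threshold
"""

def filter_by_period(csv_text, start, end):
    """Return rows whose timestamp falls inside [start, end], mirroring
    the time-period filter applied when exporting management information."""
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        ts = datetime.strptime(row["timestamp"], "%Y-%m-%d %H:%M:%S")
        if start <= ts <= end:
            rows.append(row)
    return rows

rows = filter_by_period(SAMPLE,
                        datetime(2015, 3, 1, 0, 0, 0),
                        datetime(2015, 3, 1, 23, 59, 59))
print(len(rows))  # 2 rows fall on March 1
```

Keeping the period narrow also helps stay under the 20,000-item limit mentioned above.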
Storage Systems View
The Storage Systems view provides information about LUNs (paired and unpaired), journal groups, copy licenses, command devices, and pools.
LUNs (paired) tab shows the list of LDEVs that are already configured as copy pairs.
• Clicking on a specific LUN provides detailed information about the copy pair, copy type,
pair status, and much more.
• A filter dialog is available for the LUNs tab, which makes it easier to find target volumes. You can filter LUNs by attributes such as port, HSD, logical group, capacity, label, and copy type.
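Conceptually, the filter dialog performs attribute matching over the LUN list. The sketch below models that behavior with hypothetical records; the attribute names follow the GUI filter fields listed above, but the data values are invented for illustration.

```python
# Hypothetical LUN records mirroring the filter attributes the GUI offers
# (port, HSD, logical group, capacity, label, copy type).
luns = [
    {"port": "CL1-A", "hsd": "hsd01", "label": "db_data", "capacity_gb": 100, "copy_type": "ShadowImage"},
    {"port": "CL1-A", "hsd": "hsd02", "label": "db_log",  "capacity_gb": 50,  "copy_type": "TrueCopy"},
    {"port": "CL2-B", "hsd": "hsd01", "label": "mail",    "capacity_gb": 200, "copy_type": "ShadowImage"},
]

def filter_luns(luns, **criteria):
    """Keep only LUNs matching every supplied attribute, like the filter dialog."""
    return [l for l in luns if all(l.get(k) == v for k, v in criteria.items())]

matches = filter_luns(luns, port="CL1-A", copy_type="ShadowImage")
print([l["label"] for l in matches])  # ['db_data']
```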
The Cmd Devs tab displays the command devices list configured on the storage systems.
The Pools tab displays detailed information for Copy-on-Write Snapshot, Thin Image, and Dynamic Provisioning pools.
The JNLGs tab displays a list of journal groups that are configured on the storage system.
The Remote Path tab displays the remote paths configured for TrueCopy and Universal
Replicator software.
The Copy Licenses tab displays the replication-related licenses that are installed on the
storage systems.
You can also manage (create, edit, delete) resources using the above tabs. Copy licenses for
program products need to be installed through the element manager for the storage system.
Features
Copy groups: A group of copy pairs created for management purposes, as required by a particular task or job. By specifying a copy group, you can perform operations such as changing the pair status of multiple copy pairs at once. Using the My Copy Groups feature, a user can register into My Copy Groups only those copy groups that are most important to monitor, see how they are related, and check copy pair statuses in a single window. My Copy Groups is also the default screen after you log in to the Replication Manager interface.
Sites: With Replication Manager, you can define logical sites in the GUI just as you would define actual physical sites (data centers). Setting up separate sites lets you manage resources more efficiently because it is easier to locate a required resource among the many resources displayed in the GUI.
Positioning
[Diagram: Replication Manager providing replication monitoring and configuration on top of Device Manager (storage management, open volumes), Business Continuity Manager (mainframe volumes), and RAID Manager (replication management)]
Replication Manager provides monitoring for both enterprise storage systems (open and mainframe volumes) and modular storage systems (open volumes).
Replication Manager requires and depends on Hitachi Device Manager, and uses RAID Manager (CCI) and the Device Manager agent for monitoring open volumes.
• RAID Manager (CCI) is used by Replication Manager for watching pair status
For monitoring mainframe volumes, Replication Manager can work with or without Hitachi
Business Continuity Manager (BCM) software or Mainframe Agent.
• Replication Manager supports monitoring of IBM environments (z/OS, z/VM, z/VSE and
z/Linux) and non-IBM environments using only the Device Manager (without Business
Continuity Manager or Mainframe Agent installed). Replication Manager retrieves the
status of TCS/TCA/SI, and UR copy pairs directly from storage arrays, without
depending on mainframe host types. The minimum interval of automatic refresh for this
configuration is 30 minutes.
Architecture – Open Systems and Mainframe
[Diagram: a browser-based management client; a management server running the HDvM server and HRpM server; and hosts running the Host Agent (Agent Base with HDvM agent and HRpM agent plug-ins) together with RAID Manager (CCI), connected to enterprise and modular (SNM2) storage over an FC-SAN through command devices]
• Management Server: Replication Manager is installed with Device Manager. HBase is automatically installed by the Device Manager installation. It is highly recommended to use the same major and minor version number for the Device Manager server and the Replication Manager server.
o Host Agent: A single host agent is shared by Device Manager and Replication Manager; one agent installation on the server serves both products.
• Business Continuity Manager: This software product runs on the mainframe and manages replication pair volumes assigned to the mainframe computers. Replication Manager can monitor and manage the mainframe replication volumes by communicating with Business Continuity Manager.
o IBM HTTP Server is required on Mainframe Host when using either of the
following:
o The BCM program itself does not have these capabilities, so IBM HTTP Server is used to perform them, acting as a proxy server between HRpM and BCM.
Architecture – Open Systems with Application Agent
Standard configuration of a site
[Diagram: a management client (browser) and a management server (HDvM server, HRpM server, HBase) on an IP network; application hosts (MS Exchange / MS SQL Server) running the Host Agent, RAID Manager (CCI), and Application Agent; a backup/import server; and modular (SNM2) and enterprise storage connected over an FC-SAN through command devices]
Note: Depending on the configuration, backup servers are not required for SQL Server
configurations.
Components
Management Client: A management client runs on a web browser and provides access to the instance
of Replication Manager.
o CCI and a Device Manager agent are installed on each pair management server for open
systems
Note: When determining whether to set up pair management servers to be independent of hosts,
consider security and the workloads on the hosts.
Host (Application Server): Application programs are installed on a host. A host can be used as a pair management server, if required. The Device Manager agent is optional if the server is used only as a host (and not as a pair management server).
Device Manager Agent
The Device Manager agent is a program that runs on a host to collect host and storage system information and report that data to the Device Manager server.
It collects:
• Host machine information, such as host names, IP addresses, Host bus
adapter (HBA) worldwide name (WWN), and iSCSI name
• Information about LDEVs allocated to the host, such as LDEV number,
storage system, logical unit number (LUN), and LDEV type
• Information about file systems allocated to the host, such as file system types,
mount points, and usage
• Copy pair information, such as pair types and statuses
The Replication Manager management server uses this information to display and manage the pair information.
Device Manager agent is the common agent for Device Manager and
Replication Manager
Download the agent installer from the Device Manager web client
Instructor Demonstration: Hitachi Command Suite Installation
Module Summary
Module Review
2. Hitachi Replication Manager Initial Setup
Module Objectives
Initial Setup
Prerequisites validation
Configure environment
Prerequisites
Prerequisite Software
Configure Hitachi Device Manager: After installing Device Manager, add to Device Manager
the storage systems, hosts, and pair management servers to be managed in Replication
Manager.
Note: HDvM supports agent-less discovery of hosts using the host data collector. Agent-less discovery is used for reporting host information and does not support replication operations. To perform replication operations using Replication Manager, a pair management server must be set up with the HDvM agent, CCI, and a command device.
Configuring the Environment
In the Web browser address bar, enter the URL for the management server where Replication
Manager is installed. The user login window appears. When you log in to Replication Manager
for the first time, you must use the built-in default user account and then specify Replication
Manager user settings. The user ID and password of the built-in default user account are as
follows:
If Replication Manager user settings have already been specified, you can use the user ID and
password of a registered user to log in. If you enabled authentication using an external
authentication server, use the password registered in that server.
Launching Hitachi Command Suite
Hitachi Replication Manager can also be launched from the HCS main window Tools menu
option.
Registering Information Sources
Before you can use Replication Manager to manage resources, you must register an information
source. In open systems, this information source is the Device Manager server. In mainframe
systems, this information source is either Business Continuity Manager or Mainframe Agent.
Once the information sources are registered, you can view host information, information about
the connected storage systems, and copy pair configuration information as Replication Manager
resources. You can register a maximum of 100 information sources.
The local Device Manager server automatically becomes an information source. If you would like to add more servers, ensure that you have the following Device Manager server information:
• Port number (the server.http.port value in the server.properties file for the Device
Manager server)
• A user ID and password with which you can log in to the Device Manager server
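For reference, the port mentioned above comes from the Device Manager server's server.properties file, as noted in the bullet. A minimal illustrative excerpt follows; the value 2001 is only a commonly seen default shown as an assumption, so verify the file on your own management server.

```
# Excerpt from server.properties on the Device Manager server
# (server.http.port is the property named above; 2001 is an assumed
# typical default — check your installation)
server.http.port = 2001
```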
Refreshing Configuration from Information Sources
The Replication Manager repository is synchronized with the local Device Manager server automatically. Any addition of a new information source should be followed by a Refresh Configuration.
Note: From HRpM, configuration information managed by local instances of Device Manager is
automatically applied to Replication Manager. It is no longer necessary to refresh the
configuration for local instances of Device Manager.
Information Refresh in Replication Manager
The following information is automatically refreshed every five minutes, regardless of the
refresh settings for configuration information:
• Pool status
Refreshing Information from Pair Management Servers
Specify the copy pair status refresh interval for the pair management server that belongs to the
information source. If you change the pair status refresh interval settings in this item, the new
settings replace the settings made for each pair management server in the Edit Interval of
Refresh Pair Status - pair-management-server-name dialog box.
Specify the copy pair status refresh interval by refreshing Device Manager when monitoring
copy pairs that are not managed by the pair management server.
The information-source-name (Device Manager) sub-window lets you view the pair status
refresh interval for pair management servers managed by the Device Manager server.
Users and Permissions
User Management (Admin*): Permits the user to log in, use all Command Suite products, and set up other users.
Replication Manager Management (Admin): Permits the user to set up Replication Manager resources and the accessible ranges (resource groups) for all users. This role also enables the user to perform all administrative tasks within the resource groups except specifying user settings.
* By default, users who have the Admin permission of the User Management role cannot
perform any Replication Manager operations other than user management. To perform these
operations, such users must be granted the Replication Manager management permissions.
Managing Users and User Permissions
All users can set up personal profiles and Replication Manager licenses
regardless of their permissions
The built-in user ID System lets you manage all users in Hitachi Command Suite
You cannot change or delete this user ID or its permissions
Adding Users and Assigning Permissions
1. From the Explorer menu, choose Administration and then Users and Permissions.
2. Expand the object tree, and then select Users.
3. Click Add User. The Add User dialog box appears.
4. Enter the user details and then click OK.
Managing Security
Sites
Sites Overview
Notes:
• You can specify hosts, storage systems, application and copy pair configuration
definitions (pair management servers) for any site. Although you can specify more than
one resource for each site, you cannot specify a particular resource for more than one
site.
• With Replication Manager, you can use the GUI to define logical sites just as you would
define actual physical sites (actual data centers). If you set up separate sites, you can
manage resources more efficiently because the GUI makes it easy to locate a required
resource among the many resources displayed.
Example of Two Data Centers — Use Case
(Diagram: a DB server with a command device on Subsystem1 at the primary data center, and a DB backup server with a command device on Subsystem3 at the remote data center, linked by remote copy with Universal Replicator.)
Example of Two Data Centers – Use Case
Site configuration
• Create a primary site and a remote site, and place each server based on its physical location
Place the local PM server, DB server, and mail server into the primary site
Place the remote PM server, DB backup server, and mail backup server into the remote site
In the diagram and following pages, CMD stands for Command Device.
Objective
• Easily perform pair management operations from the Site menu with structured resources
Easily find the target volumes or pair management servers using the site structure
The Pair Configuration Wizard provides filtering by site
Site Example
Example of sites
Site Properties
Setting Up Sites
Adding a site
1. In the Explorer menu, click the Shared Views drawer to select the Sites option.
3. Enter the site name in the Name field and then click OK.
Setting Up Sites
Resource Groups
• Multiple resources can be registered in each resource group, but each resource can be
registered in only one resource group.
• A user can be granted access permissions for multiple resource groups (that is, the user
can be associated with more than one resource group).
• The default group All Resources cannot be deleted or renamed. A new resource group
named All Resources cannot be added.
• Because a user logged in with the built-in System account is permitted to access all
resources, that user is automatically registered in the All Resources group.
• Any user can be added to the All Resources group if they do not belong to another
resource group.
• Except for users logged in as System, users with the Admin (user management)
permission can belong to resource groups only when they also have the Admin, Modify,
or View (Replication Manager management) permission.
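The resource-group rules above can be summarized in a short sketch. This is an illustrative model only (all class and method names are invented, not a Replication Manager API):

```python
# Illustrative model of the resource-group access rules described above.
# Each resource belongs to exactly one group; a user may belong to
# several groups; the built-in System account can access everything.

class ResourceGroupModel:
    def __init__(self):
        self.group_of_resource = {}   # resource -> its single group
        self.groups_of_user = {}      # user -> set of groups

    def register_resource(self, resource, group):
        if resource in self.group_of_resource:
            raise ValueError("a resource can be registered in only one resource group")
        self.group_of_resource[resource] = group

    def grant(self, user, group):
        self.groups_of_user.setdefault(user, set()).add(group)

    def can_access(self, user, resource):
        if user == "System":          # built-in account sees all resources
            return True
        group = self.group_of_resource.get(resource)
        return group in self.groups_of_user.get(user, set())

m = ResourceGroupModel()
m.register_resource("Subsystem1", "DB Resource Group")
m.grant("DB Admin", "DB Resource Group")
assert m.can_access("DB Admin", "Subsystem1")
assert m.can_access("System", "Subsystem1")
assert not m.can_access("Mail Admin", "Subsystem1")
```

Registering the same resource in a second group raises an error, mirroring the first rule in the list above.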
Resource Groups Overview
Sites and Resource Group Relationship
Example of Two Data Centers – Use Case
This is the same use case presented for Sites. It shows how to create resource groups and
how users are given control of particular resources in the Primary Site and Remote Site so that
they can execute volume copy operations.
• Assign the mail admin to the mail resource group (the mail server and mail backup server, each with a command device)
In this diagram and the following slides, UR stands for Hitachi Universal Replicator software.
Example of Two Data Centers – Use Case
Objective
• Prevent malicious activity or operational errors by dividing the access scope
Users: Sys Admin, DB Admin, Mail Admin
The mail admin can monitor the copy pairs within the assigned resource group
(Diagram: Primary Site and Remote Site, each with a pair management server; the PM server, DB, and mail resource groups span both sites, and Subsystem1 and Subsystem3 are linked by remote copy (UR).)
In the diagram:
• The Primary site contains: local PM server, DB server, mail server, subsystem1
and subsystem2.
• The Remote site contains: remote PM server, DB backup server, mail backup
server, subsystem3 and subsystem4.
• There are three resource groups: PM server resource group, DB resource group,
and mail resource group.
• The user, Sys Admin, belongs to the default All Resources group, therefore has
access to all resources on the primary site and remote site and can manage all
copy pairs.
• The user, DB Admin, belongs to DB resource group, therefore has access only
to the DB server, DB backup server, subsystem1 and subsystem3.
• The user, Mail Admin, belongs to mail resource group, therefore has access only
to the mail server, mail backup server, subsystem2 and subsystem4.
Resource Group Function
1. Create users.
2. Assign permissions to the users based on whether they will be managing Replication
Manager or they will also be creating other users.
Resource Groups
1. In the Explorer menu, click the Administration drawer and then select the Resource
Groups option.
Resource Groups
Add hosts
Assign Hosts
3. Click Add Hosts on the bottom-right of the Application area. The Add Hosts dialog box
appears.
4. Select the check boxes for the hosts that you want to add, and then click OK.
Resource Groups
Add Resources
5. Select the storage system check box to add that system and then click OK.
Resource Groups
Add users
Assign Users
3. Click Add Users on the bottom-right of Application area. The Add Users dialog box
appears.
4. Select the check boxes for the users that you want to add, and then click OK.
Resource Group Properties
Instructor Demonstration
Initial setup
• Users and permissions
• Refresh settings
• Sites
• Resource groups
Module Summary
Module Review
3. Hitachi Replication Products Overview
Module Objectives
Hitachi Replication Program Products
• Benefits
Protects data availability
Simplifies and increases disaster recovery testing
Eliminates the backup window
Reduces testing and development cycles
Enables nondisruptive sharing of critical information
Hitachi Replication Products
• Benefits
Protects data availability with rapid restore
Simplifies and increases disaster recovery testing
Eliminates the backup window
Reduces testing and development cycles
(Diagram: a P-VOL with a pool and multiple V-VOLs.)
Hitachi Replication Products
Provides fast recovery with no data loss
Installed in the highest-profile DR sites around the world
Improves customer service by reducing downtime of customer-facing applications
Increases the availability of revenue-producing applications
(Diagram: P-VOL replicated to S-VOL.)
The following describes the basic technology behind the disk-optimized journals.
Tools Used for Setting Up Replication
• RAID Manager/CCI
HORCM configuration files
Command device
Any volumes involved in replication operations (source and destination) should be:
• Same size (in blocks)
Note: If the source is a LUSE volume, then the destination must be an identical LUSE volume
with the same size and structure. This will reduce the number of copy pairs possible.
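For reference, a HORCM configuration file of the kind mentioned above typically contains the following sections. All addresses, service names, serial numbers, device names, and LDEV IDs below are placeholders; check the CCI documentation for your storage model and environment before use:

```
HORCM_MON
# ip_address   service  poll(10ms)  timeout(10ms)
localhost      horcm0   1000        3000

HORCM_CMD
# command device (placeholder raw device name)
/dev/rdsk/c1t0d1s2

HORCM_LDEV
# dev_group  dev_name  Serial#  CU:LDEV(LDEV#)  MU#
SI_GRP       dev1      53095    01:20           0

HORCM_INST
# dev_group  ip_address  service
SI_GRP       localhost   horcm1
```

A matching file for the secondary instance (horcm1) would list the S-VOL LDEVs and point HORCM_INST back at horcm0.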
Basic Operations
Replication Operations
A copy pair is a pair of volumes linked by the storage system's volume replication functionality
(such as ShadowImage and TrueCopy). Copy pairs are also called paired volumes.
• Primary volume (P-VOL): The source volume whose contents are copied to the secondary volume.
• Secondary volume (S-VOL): The destination volume to which the contents of the primary volume are copied.
Copy Operations
Replication Operations
paircreate
• Select a volume and issue paircreate
• The initial copy (full copy) takes place: a track-by-track copy of the P-VOL to the
S-VOL, regardless of the amount of data on the P-VOL
• The volume status changes from SMPL to PAIR
The paircreate command generates a new volume pair from two unpaired volumes. The
paircreate command can create either a paired logical volume or a group of paired volumes.
When issuing paircreate, you can select the pace for the initial copy operation. The pace is
specified as the number of tracks to copy at a time (1-15). Fewer tracks minimize the impact of
copy operations on system I/O performance, while more tracks complete the initial copy as
quickly as possible. The best setting depends on the amount of write activity on the P-VOL and
the amount of time elapsed between update copies.
Simplex (SMPL) status of a volume indicates that the volume is not used in any replication
operation.
Replication Operations
pairsplit
• Update copy takes place to flush all pending changes
• Volume status changes to PSUS
• S-VOL is the Point-in-Time (PiT) copy and now available to applications for
read/write
• Differential bitmaps track changes to P-VOL and S-VOL while pair is split
The pairsplit command stops updates to the secondary volume of a pair and can either
maintain (status = PSUS) or delete (status = SMPL) the pairing status of the volumes. It can be
applied to a paired logical volume or a group of paired volumes. The pairsplit command allows
read access or read/write access to the secondary volume, depending on the selected options.
Replication Operations
pairresync
• The S-VOL is no longer available to the host
• The S-VOL and P-VOL differential bitmaps are merged
• Changed tracks are marked and written from the P-VOL to the S-VOL
• The volume status changes to PAIR
Replication software allows you to perform pairresync operations on split and suspended pairs:
• Pairresync for a split pair – When a pairresync operation is performed on a split pair
(status = PSUS), the system merges the S-VOL track map into the P-VOL track map and
then copies all flagged tracks from the P-VOL to the S-VOL. Because only the changed
tracks are copied, this greatly reduces the time needed to resynchronize the pair.
Replication Operations
pairsplit –S (delete)
• Delete the pair and stop replication operations for the pair
• Immediate access to S-VOL
No update copy (pending changes are lost, ignored)
• Changes volume status back to simplex
The pairsplit -S operation (delete pair) stops the copy operations to the S-VOL of the pair and
changes the pair status of both volumes to SMPL.
When a pair is deleted, the pending update copy operations for the pair are discarded, and the
status of the P-VOL and S-VOL is changed to SMPL.
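The four operations covered in this section (paircreate, pairsplit, pairresync, and pairsplit -S) can be summarized as a small status-transition model. This is an illustrative sketch, not CCI itself, and it omits the transient COPY states:

```python
# Illustrative model of the copy-pair status transitions described above.
# Real pairs pass through intermediate states (COPY, COPY(RS)) not shown.

TRANSITIONS = {
    ("SMPL", "paircreate"):   "PAIR",  # initial copy completes, pair formed
    ("PAIR", "pairsplit"):    "PSUS",  # S-VOL becomes a point-in-time copy
    ("PSUS", "pairresync"):   "PAIR",  # merge bitmaps, copy changed tracks
    ("PAIR", "pairsplit -S"): "SMPL",  # delete pair, discard pending updates
    ("PSUS", "pairsplit -S"): "SMPL",
}

def next_status(status, command):
    try:
        return TRANSITIONS[(status, command)]
    except KeyError:
        raise ValueError(f"{command} is not valid in status {status}")

# Walk a typical lifecycle: create, split, resync, delete.
status = "SMPL"
for cmd in ("paircreate", "pairsplit", "pairresync", "pairsplit -S"):
    status = next_status(status, cmd)
assert status == "SMPL"
```

Issuing a command in an invalid status (for example, pairresync on a SMPL volume) raises an error, which mirrors how CCI rejects commands that do not apply to the current pair status.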
Module Summary
Module Review
4. Hitachi ShadowImage Replication
Operations with Replication Manager
Module Objectives
Licensing Considerations
Additional license capacity is required for P-VOLs and pool volumes that
are used by Hitachi Copy-on-Write Snapshot
If Hitachi Dynamic Provisioning volumes are used as P-VOLs or S-VOLs on enterprise storage:
• The capacity of the pool used by the Dynamic Provisioning volume will affect the license
capacity.
• Include the Dynamic Provisioning pool capacity when determining the ShadowImage
license capacity.
• If the amount of data exceeds the license capacity, you can use the volumes for an
additional 30 days. Once 30 days have passed, you cannot do any operations except
suspending or deleting pairs.
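The accounting rule above can be illustrated with a minimal sketch; the function name and data layout are assumptions for illustration, not an actual interface:

```python
# Illustrative check of ShadowImage license capacity: for any Dynamic
# Provisioning volume used in a pair, the capacity of the whole DP pool
# counts toward the license, not just the volume's own capacity.

def used_license_capacity_gb(volumes, dp_pool_capacity_gb):
    """volumes: list of dicts describing P-VOLs/S-VOLs in pairs."""
    total = 0
    counted_pools = set()
    for vol in volumes:
        if vol["is_dp_vol"]:
            pool = vol["pool_id"]
            if pool not in counted_pools:
                total += dp_pool_capacity_gb[pool]  # whole pool counts once
                counted_pools.add(pool)
        else:
            total += vol["capacity_gb"]
    return total

volumes = [
    {"capacity_gb": 100, "is_dp_vol": False},
    {"capacity_gb": 10, "is_dp_vol": True, "pool_id": 0},
    {"capacity_gb": 20, "is_dp_vol": True, "pool_id": 0},
]
# 100 GB normal volume + one 500 GB DP pool (counted once) = 600 GB
assert used_license_capacity_gb(volumes, {0: 500}) == 600
```

Comparing this total against the installed license capacity is what determines whether the 30-day grace period described above begins.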
ShadowImage In-System Replication Features
ShadowImage Replication
Full physical copy of a volume at a point in time
Immediately available for concurrent use by other applications
No host processing cycles required
No dependence on operating system, file system, or database
All copies are additionally RAID protected
Up to 9 copies for a source volume (enterprise storage)
(Diagram: a production volume (P-VOL) and a point-in-time copy of the production volume (S-VOL); normal processing continues unaffected while the copy is used for parallel processing.)
ShadowImage copies:
• Can provide immediate access and sharing of information for decision support, testing
and development.
Key Features
Restrictions
• The following volumes cannot be used for creating pairs
Hitachi Universal Replicator journal volumes
Virtual volumes (except Dynamic Provisioning volumes)
Copy-on-Write pool volumes
Network attached storage (NAS) system volumes cannot be
S-VOLs
Any data retention volume set as “S-VOL DISABLE”
Data Retention Utility allows you to assign the S-VOL Disable attribute. This could be used for
production volumes to protect them from accidental overwriting due to a copy operation.
ShadowImage Commands
When issuing paircreate, you can select the pace for the initial copy operation:
• Slower
• Medium
• Faster
The slower pace minimizes the impact of operations on system I/O performance, while the
faster pace completes the initial copy operation as quickly as possible. The best timing is based
on the amount of write activity on the P-VOL and the amount of time elapsed between update
copies.
Paircreate
(Diagram: pair status transitions — SMPL at the start, COPY(PD) while the initial copy runs, and PAIR when the initial copy is finished.)
Initial Copy operation takes place when you create a new volume pair. The Initial Copy
operation copies all data on the P-VOL to the associated S-VOL. The P-VOL remains available to
all hosts for read and write I/Os throughout the Initial Copy operation. Write operations
performed on the P-VOL during the Initial Copy operation will be duplicated at the S-VOL by
Update Copy operations after the initial copy is complete. The status of the pair is COPY(PD)
(PD = pending) while the Initial Copy operation is in progress. The status changes to PAIR
when the initial copy is complete.
Definitions:
• VLL stands for Virtual Logical LUN, the method used to create custom volume sizes.
Paircreate
(Diagram: write data flows from the P-VOL to cascaded S-VOLs — up to three Level 1 S-VOLs, each with up to two Level 2 S-VOLs, for a total of 9 copies.)
Hitachi ShadowImage for Mainframe protects mainframe data in the same manner. For
mainframes, ShadowImage Heterogeneous Replication can provide up to 3 duplicates of 1
primary volume.
In Storage Navigator the paircreate command creates the first Level 1 “S” volume. The set
command can be used to create the Level 1 “S” volumes. And the cascade command can be
used to create the Level 2 “S” volumes off the Level 1 “S” volumes.
Paircreate
The differential data (updated by write I/Os during split or suspension) between the primary
data volume and the secondary data volume is stored in each track bitmap. When a split or
suspended pair is resumed (pairresync), the primary storage system merges the primary data
volume and secondary data volume bitmaps, and the differential data is copied to the
secondary data volume.
Update
Update copy operations
(Diagram: host I/O generates differential data on the P-VOL; update copy sends it to the S-VOL while the pair is in PAIR status.)
The Update Copy operation sends changed data to the S-VOL of a pair after the Initial Copy operation is
complete. Update Copy operations take place only for duplex pairs (status = PAIR).
As write I/Os are performed on a duplex P-VOL, the system stores a map of the P-VOL differential data,
and then performs Update Copy operations periodically based on the amount of differential data present
on the P-VOL, as well as the elapsed time between Update Copy operations.
The Update Copy operations are not performed for pairs with the following status:
Pairsplit
(Diagram: host I/O marks tracks 3, 10, 15, and 18 on the P-VOL as dirty; on the split, those tracks are sent from the P-VOL to the S-VOL.)
1. The P-VOL and S-VOL are in PAIR status as of 10:00 AM. P-VOL Tracks 3, 10, 15 and 18
are marked as dirty because of Host I/O.
2. At 10:00:01 AM, a pairsplit (Steady) command is issued. Tracks 3, 10, 15 and 18 are
sent across to the S-VOL from the P-VOL.
3. When the update operation in step 2 is complete, the status of the P-VOL and S-VOL is
changed to PSUS. During this state, there are track bitmaps attached to both the P-VOL
and the S-VOL. These bitmaps keep track of changes on both the P-VOL and the S-VOL.
Pairsplit
(Diagram: the status changes immediately to PSUS; dirty tracks 3, 10, 15, and 18 are sent from the P-VOL to the S-VOL in the background.)
1. The P-VOL and S-VOL are in PAIR status as of 10:00 AM. P-VOL Tracks 3, 10, 15 and 18
are marked as dirty because of host I/O.
2. The status of the P-VOL and the S-VOL is changed instantly to PSUS and the S-VOL is
immediately available for reads and writes.
3. Tracks 3, 10, 15 and 18 are sent across to the S-VOL from the P-VOL in the background.
o If during this Update Copy operation there is any I/O to tracks 3, 10, 15, or 18
on the S-VOL, then the system fetches the data from the P-VOL.
o During the PSUS state, there are track bitmaps attached to both the P-VOL and
the S-VOL. These bitmaps keep track of changes on both the P-VOL and the S-
VOL.
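The quick split behavior described above can be modeled in a few lines. The class is hypothetical, and the copy-on-write path that preserves the point-in-time image when the P-VOL is later updated is omitted for brevity:

```python
# Illustrative model of a quick split: the S-VOL is available immediately,
# and reads of tracks not yet copied by the background operation are
# fetched from the P-VOL. (A real system also preserves the point-in-time
# image when the P-VOL is updated; that copy-on-write path is omitted.)

class QuickSplitPair:
    def __init__(self, pvol_tracks):
        self.pvol = dict(pvol_tracks)   # track number -> data
        self.svol = {}                  # background copy not done yet
        self.pending = set(self.pvol)   # tracks still to copy

    def background_copy_one(self, track):
        self.svol[track] = self.pvol[track]
        self.pending.discard(track)

    def read_svol(self, track):
        if track in self.pending:       # not copied yet: fetch from P-VOL
            return self.pvol[track]
        return self.svol[track]

pair = QuickSplitPair({3: "a", 10: "b"})
assert pair.read_svol(10) == "b"        # served via the P-VOL
pair.background_copy_one(10)
assert pair.read_svol(10) == "b"        # now served from the S-VOL itself
```

This is why the S-VOL appears fully populated to the host the instant the status changes to PSUS, even though the data movement is still in progress.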
Pairsplit
(Diagram: S-VOL differential bitmap with changed tracks marked.)
This diagram illustrates the usage of differential bitmaps after a pair is suspended. While
suspended, updates can occur to P-VOL. Changes can also occur on S-VOL if it is mounted with
Write Enabled. The bitmaps denote any changed tracks while the pair is suspended.
Pairresync
(Diagram: while split, tracks 10, 15, 18, and 29 are dirty on the P-VOL and tracks 10, 19, and 23 are dirty on the S-VOL; on resync, tracks 10, 15, 18, 19, 23, and 29 are sent from the P-VOL to the S-VOL as asynchronous updates.)
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. P-VOL Tracks 10, 15, 18
and 29 are marked as dirty.
2. Tracks 10, 19 and 23 are marked as dirty on the track bitmap for the S-VOL.
3. At 10:00 AM, a pairresync (Normal) command is issued. The track bitmaps for the P-
VOL and S-VOL are merged. The resulting track bitmap has tracks 10, 15, 18, 19, 23
and 29 marked as dirty. These tracks are sent from the P-VOL to the S-VOL as part of
an Update Copy operation.
4. When the Update Copy operation in step 3 is complete, the P-VOL and S-VOL are
declared as a PAIR.
Pairresync
(Diagram: the status changes to PAIR immediately; the merged dirty tracks are sent from the P-VOL to the S-VOL in the background.)
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18 and
29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19 and 23 are
marked as dirty on the track bitmap for the S-VOL.
2. At 10:00 AM, a pairresync (normal) command is issued. The status of the P-VOL and
the S-VOL changes instantly to PAIR.
3. The track bitmaps for the P-VOL and S-VOL are merged. The resulting track bitmap has
tracks 10, 15, 18, 19, 23 and 29 marked as dirty. These tracks are sent from the P-VOL
to the S-VOL as part of an Update Copy operation in the background.
Quick resync
• Command completes in less than one second per pair
• Copies only the delta bitmap
• Delta data is copied while the pair is in PAIR status
• Issued from RAID Manager
(Diagram: the S-VOL delta bitmap is merged into the P-VOL delta bitmap; after a quick resync the host retains read/write access to the P-VOL but has no access to the S-VOL.)
Pairresync
• Merge
(Diagram: P-VOL differential bitmap with changed tracks marked.)
This diagram illustrates the use of differential bitmaps, after a pair is resynchronized.
While suspended, updates may occur to both primary and secondary volumes. The bitmaps
denote any changed tracks while the pair is suspended.
When the resync command is issued, the S-VOL differential bitmaps are merged into the P-
VOL differential bitmaps. Then all of the changed tracks are copied from the P-VOL to the S-
VOL. This process results in overwrites for any changed S-VOL data.
For a resync restore operation, P-VOL bitmaps are merged into the S-VOL bitmaps and all
changed tracks are written from the S-VOL to the P-VOL, thus overwriting production data.
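The bitmap merge described above can be expressed as a simple set union; this sketch models dirty tracks as sets of track numbers and is purely illustrative:

```python
# Illustrative sketch of the differential-bitmap merge described above.
# Normal resync copies P-VOL -> S-VOL; restore copies S-VOL -> P-VOL.
# In both cases the two bitmaps are merged first, so every track that
# changed on either volume is overwritten on the target.

def resync(pvol_dirty, svol_dirty, restore=False):
    """Return (tracks_to_copy, copy_direction) for a pairresync."""
    merged = pvol_dirty | svol_dirty
    direction = "S-VOL->P-VOL" if restore else "P-VOL->S-VOL"
    return sorted(merged), direction

# Values from the example above: tracks 10, 15, 18, 29 dirty on the
# P-VOL and tracks 10, 19, 23 dirty on the S-VOL.
tracks, direction = resync({10, 15, 18, 29}, {10, 19, 23})
assert tracks == [10, 15, 18, 19, 23, 29]
assert direction == "P-VOL->S-VOL"
```

Passing restore=True reverses only the copy direction, which is why a restore resync overwrites production data on the P-VOL for every merged dirty track.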
Pairresync (Restore)
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18 and
29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19 and 23 are
marked as dirty on the track bitmap for the S-VOL.
2. At 10:00 AM, a pairresync (restore) command is issued. The track bitmaps for the P-
VOL and S-VOL are merged. The resulting track bitmap has tracks 10, 15, 18, 19, 23
and 29 marked as dirty. These tracks are sent from the S-VOL to the P-VOL as part of
an Update Copy operation.
3. When the Update Copy operation in step 2 is complete, the P-VOL and S-VOL are
declared as a PAIR.
Pairresync
(Diagram: before the quick restore, the P-VOL is LDEV 2:03 in RAID Group 1-1 and the S-VOL is LDEV 1:04 in RAID Group 2-3; after the swap, the P-VOL is in RAID Group 2-3 and the S-VOL is in RAID Group 1-1.)
1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18 and
29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19 and 23 are
marked as dirty on the track bitmap for the S-VOL. The P-VOL LDEV ID is 2:03 and the
RAID Group that the PVOL belongs to is 1-1. The S-VOL LDEV ID is 1:04 and the RAID
Group that the S-VOL belongs to is 2-3.
2. At 10:00:01, a Quick Restore command is issued. The LDEV locations are swapped so
that the P-VOL now belongs to RAID Group 2-3 and the S-VOL now belongs to RAID
Group 1-1.
3. At 10:00:03, after the SWAP operation is complete the P-VOL and S-VOL are declared as
a PAIR.
4. To swap back, perform another quick restore (pairresync restore) at a later point in
time.
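The swap at the heart of quick restore can be sketched as an exchange of LDEV-to-RAID-group mappings rather than a data copy; the function below is hypothetical and uses the values from the example above:

```python
# Illustrative model of quick restore: instead of copying S-VOL data back
# to the P-VOL, the storage system swaps the two volumes' internal
# locations. The mapping dict stands in for the volume map.

def quick_restore(mapping, pvol_ldev, svol_ldev):
    """mapping: LDEV id -> RAID group. Swap the two volumes' locations."""
    mapping[pvol_ldev], mapping[svol_ldev] = (
        mapping[svol_ldev],
        mapping[pvol_ldev],
    )
    return mapping

m = {"2:03": "1-1", "1:04": "2-3"}   # P-VOL and S-VOL from the example
quick_restore(m, "2:03", "1:04")
assert m == {"2:03": "2-3", "1:04": "1-1"}
```

Because only the map changes, the operation completes in seconds regardless of volume size, which is what makes it "quick".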
Pairresync
Quick restore
• Extremely fast recovery of P-VOL from an S-VOL
Quick resync
• Rapid resynchronization of an S-VOL from a P-VOL
Quick split
• Ability to rapidly suspend mirroring operation and provide availability to an
S-VOL
Commands
paircreate
• Select a volume and issue paircreate
• Initial copy takes place
• Volume status changes to PAIR
The paircreate command generates a new volume pair from two unpaired volumes. It can
create either a paired logical volume or a group of paired volumes.
pairsplit – Steady
• Update copy takes place
• Volume status changes to PSUS
• S-VOL is now available
• Differential bitmaps track changes to P-VOL and S-VOL while pair is split
The pairsplit command stops updates to the secondary volume of a pair and can either
maintain (status = PSUS) or delete (status = SMPL) the pairing status of the volumes. It can be
applied to a paired logical volume or a group of paired volumes. The pairsplit command allows
read access or read/write access to the secondary volume, depending on the selected options.
You can create and split ShadowImage pairs simultaneously using the -split option of the
paircreate command.
Commands
pairsplit – Quick
• Volume status changes to PSUS
• Update copy takes place in background
• S-VOL is available instantly
• Differential bitmaps track changes to P-VOL and S-VOL while pair is split
The pairsplit Quick operation speeds up the normal pairsplit operation by changing
the pair status to PSUS first and copying the data in the background.
Commands
pairresync – Normal
• The data on the S-VOL is no longer available to the host
• S-VOL and P-VOL differential bitmaps are merged
• Changed tracks are marked and written from P-VOL to S-VOL
• Volume status changes to PAIR
Commands
pairresync – Quick
• S-VOL is no longer available to host
• Volume status changes to PAIR
• S-VOL and P-VOL differential bitmaps are merged
• Changed tracks are marked and written from P-VOL to S-VOL in the
background
Commands
pairresync – Restore
• S-VOL is no longer available to host
• S-VOL and P-VOL differential bitmaps are merged
• Changed tracks are marked and written from S-VOL to P-VOL
• Volume status changes to PAIR
Can only be done from L1 to P-VOL
The restore pairresync operation synchronizes the P-VOL with the S-VOL. The copy direction
for a reverse pairresync operation is S-VOL to P-VOL.
The pair status during a restore resync operation is COPY(RS-R), and the P-VOL and S-VOL
become inaccessible to all hosts for write operations. As soon as the reverse pairresync
operation is complete, the P-VOL becomes accessible. The restore pairresync operation can
only be performed on split pairs, not on suspended pairs. The restore pairresync operation
cannot be performed on L2 cascade pairs.
The P-VOL remains read-enabled during the restore pairresync operation only to enable the
volume to be recognized by the host. The data on the P-VOL is not guaranteed until the restore
pairresync operation is complete and the status changes to PAIR.
Commands
pairresync – Quick Restore
The Quick Restore operation speeds up the reverse resync operation by changing the volume
map to swap the contents of the P-VOL and S-VOL without copying the S-VOL data to the P-
VOL. P-VOL and S-VOL are resynchronized when update copy operations are performed for
pairs in the PAIR status. The pair status during a Quick Restore operation is COPY(RS-R) until
the volume map change is complete. P-VOL and S-VOL become inaccessible to all hosts for
write operations during a quick restore operation. Quick restore cannot be performed on L2
cascade pairs.
The P-VOL remains read-enabled during the Quick Restore operation only to enable the volume
to be recognized by the host. The data on the P-VOL is not guaranteed until the Quick Restore
operation is complete and the status changes to PAIR.
Commands
(Diagram: after a quick restore while in PAIR status, the S-VOL data remains unchanged with the Swap and Freeze option; without the option, update copy resynchronizes the P-VOL and S-VOL.)
The Swap&Freeze option allows the S-VOLs of a ShadowImage pair to remain unchanged
after the Quick Restore operation. If the Quick Restore operation is performed on a
ShadowImage pair with the Swap and Freeze option, Update Copy operations are suppressed,
and are thus not performed for pairs in the PAIR status after the Quick Restore operation. If the
quick restore operation is performed without the Swap and Freeze option, the P-VOL and S-VOL
are resynchronized when Update Copy operations are performed for pairs in the PAIR status.
Note: Make sure that the Swap and Freeze option remains in effect until the pair status changes
to PAIR after the quick restore operation. The quick restore is done from CCI but the Swap and
Freeze option is set from Storage Navigator.
Commands
pairsplit –E (suspend)
• Immediate access to S-VOL
No update copy
• Forces an initial copy on resync
Marks the entire P-VOL as dirty
The ShadowImage pairsplit -E operation suspends the ShadowImage copy operations to the S-
VOL of the pair. A user can suspend a ShadowImage pair at any time. When a ShadowImage
pair is suspended (status = PSUE) the system stops performing ShadowImage copy operations
to the S-VOL, continues accepting write I/O operations to the P-VOL and marks the entire P-
VOL track map as difference data. When a pairresync operation is performed on a suspended
pair, the entire P-VOL is copied to the S-VOL. The reverse and quick restore pairresync
operations cannot be performed on suspended pairs.
The subsystem automatically suspends a ShadowImage pair when it cannot keep the pair
mirrored for any reason. When the subsystem suspends a pair, sense information is generated
to notify the host. The subsystem automatically suspends a pair under the following conditions:
• When the ShadowImage volume pair has been suspended or deleted from the UNIX/PC
server host using CCI
• When the storage system detects an error condition related to an update copy operation
• When the P-VOL and/or S-VOL track map in shared memory is lost (for example, due to
offline microprogram exchange). This applies to COPY(SP) and PSUS(SP) pairs only. For
PAIR, PSUS, COPY(RS), or COPY(RS-R) pairs, the pair is not suspended, but the entire
P-VOL (S-VOL for reverse or quick restore pairresync) is marked as difference data.
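The difference between resynchronizing a split pair (PSUS) and a suspended pair (PSUE) described above can be sketched as follows; the track count is an arbitrary illustration:

```python
# Illustrative contrast: resync after a split copies only the tracks
# changed while split, while resync after a suspend (pairsplit -E)
# re-copies the entire P-VOL because the whole track map is marked as
# difference data.

TOTAL_TRACKS = 100   # arbitrary volume size for illustration

def tracks_to_copy(status, dirty_tracks):
    if status == "PSUE":                  # suspended: entire P-VOL is dirty
        return set(range(TOTAL_TRACKS))
    if status == "PSUS":                  # split: only changed tracks
        return set(dirty_tracks)
    raise ValueError("pairresync applies to split or suspended pairs")

assert len(tracks_to_copy("PSUS", {3, 10, 15, 18})) == 4
assert len(tracks_to_copy("PSUE", {3, 10, 15, 18})) == TOTAL_TRACKS
```

This is why a resync after an error-induced suspension takes as long as the original initial copy, while a resync after a planned split is usually fast.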
Launching ShadowImage Operations
Pair Management operations can be launched from either the Host view or the Storage
System view.
Pair Configuration
To define copy pair configurations, first register a new pair group, and then define the list of
volume pairs to assign to it.
Pair groups can be created on the 2. Pair Association page of the Pair Configuration Wizard.
Pair Configuration
Page 4-29
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration
1. In the Pairs pane under Detail of pair-group-name pane, select a primary volume.
2. In the Criteria tab under the Candidate List pane, specify the volume type and
optional filtering criteria for obtaining a list of candidate volumes.
3. Click Apply. The filtered list of candidate volumes is displayed on the Result tab.
4. From the displayed tree structure on the Result tab, select the candidate volumes that
you want to assign as the primary volume or secondary volumes for the new copy pairs.
You can select multiple volumes on the Result tab.
5. Click Add.
6. The selected volumes are assigned as secondary volumes and the defined copy pair is
displayed in the Pair List pane. Repeat this operation for each pair group you create.
7. Click Next to continue creating the copy pair configuration definition or click Save to
temporarily save the workflow.
A copy group consists of a number of copy pairs that have been grouped for management
purposes. By grouping the copy pairs of volumes that are used for the same operations and
purposes, you can perform batch operations on all the copy pairs in that copy group. For
example, by performing an operation such as changing the copy pair status on a copy group,
you can change the copy pair status of all copy pairs in the copy group in a single operation.
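The batch behavior described above can be sketched in a few lines of Python (a conceptual illustration only; the pair names and dictionary model are hypothetical, not the Replication Manager API):

```python
# Conceptual sketch: one operation on a copy group applies to every
# member pair (pair names and the dict model are hypothetical).

def change_group_status(copy_group, new_status):
    """Change the pair status of all copy pairs in the group at once."""
    for pair in copy_group:
        pair["status"] = new_status

# Splitting the whole group in a single operation:
group = [{"pair": "oradb_001", "status": "PAIR"},
         {"pair": "oradb_002", "status": "PAIR"}]
change_group_status(group, "PSUS")
assert all(p["status"] == "PSUS" for p in group)
```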
Specify copy group name and CCI configuration definition file related information (instance
number and communication port number).
The CCI configuration file can be placed separately for the primary and secondary volume
instances.
Access to operations on a copy group is defined by the access permissions on the pair management server on which the copy group is defined. The access permissions for the pair management server are in turn defined by the resource group to which the server belongs.
Select the checkbox for the pair and click Apply to add the pair group instance to the newly created copy group instance. After this step, the topology view displays the copy group attribute as Assigned.
Select whether to execute the tasks immediately, or at a specified date and time.
Execute Immediately – If you want to execute the task immediately, select this radio button.
The task will start when the pair configuration wizard ends.
Execution Date – Select this radio button to execute the task at the specific date and time
that you select from the drop-down list.
Modify Pair Configuration File Only (Do not create Pair) – Select this check box if you do
not want the task to create a copy pair. When the check box is selected, the task only modifies
the CCI configuration definition file. This item is displayed when the task type is create.
Checking Task Status
Task Status displays the execution status of the task as one of the following:
• Failure: Indicates that the task failed. When you select Failure, an error window appears.
Read the message in the error window.
• Warning: Indicates that the system timed out waiting for the task to finish processing.
When you select Warning, an error window appears. Read the message in the error
window.
Checking Pair Status
Hosts view
Changing Pair Status
Change pair status wizard – 2. Select Copy Pairs – Select copy pairs for status change
Instructor Demonstration
ShadowImage
• Create pair
• Split pair
• Resync pair
• Delete pair
Module Summary
Module Review
5. Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Module Objectives
Hitachi Replication Products
Copy-on-Write Snapshot
• Provides nondisruptive volume snapshots
Rapidly creates point-in-time snapshot copies of any data volume within Hitachi storage
systems, without impacting host service or performance levels.
Realizes significant savings compared to full cloning methods because these snapshots store
only the changed data blocks in the Copy-on-Write Snapshot storage pool.
Requires substantially smaller storage capacity for each snapshot copy than the source volume.
Copy-on-Write Snapshot Purposes
The duplicated volume of the Copy-on-Write Snapshot function consists of physical data stored
in the primary volume and differential data stored in the data pool. This differs from the
ShadowImage function where all the data is retained in the secondary volume.
Although the amount of data pool capacity used is smaller than the capacity of the primary volume, a duplicated volume can be created logically when the snapshot instruction is given. A data pool can be shared by two or more primary volumes and can hold the differential data of two or more duplicated volumes.
Capacity used will be subtracted from the license capacity for ShadowImage. Therefore,
you must ensure that the license capacity for ShadowImage is larger than the capacity
to be used by both ShadowImage and Copy-on-Write Snapshot.
Copy-on-Write Snapshot Overview
The Copy-on-Write Snapshot configuration includes a P-VOL, a number of V-VOLs and a data
pool (Pool).
• Snapshot Image: A virtual replica volume for the primary volume (V-VOL); this is an
internal volume that is held for restoration purposes.
Comparison
Warning: Copy-on-Write Snapshot copies only the updated data in a P-VOL to the V-VOL.
Therefore, data in the V-VOL is not guaranteed in the following cases:
Size of physical volume: The P-VOL and the S-VOL have exactly the same size in ShadowImage Heterogeneous Replication. In Copy-on-Write Snapshot, less disk space is required for building a V-VOL image, since only part of the V-VOL is in the pool and the rest is still on the primary volume.
Pair configuration: 1:3 (ShadowImage) versus 1:64 (Copy-on-Write Snapshot)
The capacity of a pool is equal to the total capacity of the pool-VOLs registered in it. If pool usage exceeds this capacity, the status of the Copy-on-Write Snapshot pair changes to PSUE (pair suspended because of an error).
If this happens, snapshot data cannot be stored in the pool and the Copy-on-Write Snapshot
pair must be deleted. When a Copy-on-Write Snapshot pair is deleted, the snapshot data stored
in the pool is deleted and the P-VOL and V-VOL relationship is released.
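The pool-full rule above can be expressed as a small check (a hedged sketch; the function names and the GB-based model are illustrative, not Hitachi code):

```python
# Sketch of the pool-full rule: pool capacity is the sum of its
# registered pool-VOLs, and exceeding it drives the pair to PSUE.

def pool_capacity(pool_vol_sizes_gb):
    """A pool's capacity equals the total capacity of its pool-VOLs."""
    return sum(pool_vol_sizes_gb)

def pair_status(snapshot_data_gb, capacity_gb):
    """PSUE once snapshot data can no longer fit in the pool."""
    return "PSUE" if snapshot_data_gb > capacity_gb else "PAIR"

cap = pool_capacity([50, 50])           # two 50 GB pool-VOLs
assert pair_status(80, cap) == "PAIR"   # snapshot data still fits
assert pair_status(101, cap) == "PSUE"  # pool exhausted: pair suspends
```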
Copy-on-Write Snapshot Operations
This diagram shows a situation where two snapshots have been taken. The highlighted data block in the
snapshots is available on the primary volume and a request for this block through the V-VOL would be
physically taken from the P-VOL.
This situation will last as long as the corresponding block on the P-VOL is not altered.
Now the data block on the P-VOL needs to be written. However, before the actual write is executed, the
block is copied to the pool area. The set of pointers that actually represent the V-VOL will be updated and
if there is a request now for the original block through a V-VOL, the block is physically taken from the
Pool. From the host's perspective, the V-VOL (snapshot image) has not changed, which was the plan.
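The pointer mechanics described above can be sketched in a few lines of Python (purely conceptual; the class and method names are invented for illustration and are not a Hitachi API):

```python
# Conceptual copy-on-write model: a snapshot is a set of pointers, and a
# block is copied to the pool only just before the P-VOL overwrites it.

class CowVolume:
    def __init__(self, blocks):
        self.blocks = list(blocks)   # current P-VOL contents
        self.pool = {}               # (snap_id, block_index) -> old data
        self.snapshots = []          # snapshot ids, oldest first

    def take_snapshot(self):
        """Taking a snapshot copies nothing; it only records a new id."""
        self.snapshots.append(len(self.snapshots))
        return self.snapshots[-1]

    def write(self, index, data):
        """Before the overwrite, save the old block to the pool for every
        snapshot that still points at the P-VOL for this index."""
        for snap_id in self.snapshots:
            self.pool.setdefault((snap_id, index), self.blocks[index])
        self.blocks[index] = data

    def read_snapshot(self, snap_id, index):
        """Serve from the pool if the block changed after the snapshot;
        otherwise the data is still on the P-VOL."""
        return self.pool.get((snap_id, index), self.blocks[index])

vol = CowVolume(["a0", "b0", "c0"])
monday = vol.take_snapshot()
vol.write(1, "b1")                            # old "b0" goes to the pool
tuesday = vol.take_snapshot()
vol.write(1, "b2")                            # "b1" saved for Tuesday only
assert vol.read_snapshot(monday, 1) == "b0"
assert vol.read_snapshot(tuesday, 1) == "b1"
assert vol.read_snapshot(monday, 0) == "a0"   # unchanged: read from P-VOL
```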
(Diagram: data for Monday and Tuesday is already saved in the pool; V01 (Monday) and V02 (Tuesday) link the virtual volumes to the physical volume.)
On Wednesday, another snapshot image has been created. The situation now is that the data block as it was before the write will physically be read from the pool area, and the block as it is after the write will be read from the primary volume (P-VOL).
If there is a request for that block through a V-VOL, the data will physically be read from the
Pool area or from the P-VOL depending on what snapshot image is being referred to.
One more write to the same data block on the P-VOL. Again, before executing the write, the
block is copied to the pool area, the pointers that make up the V-VOL are updated and upon a
request for that data block through a V-VOL the data will physically be taken from the pool.
(Diagram: restoring the P-VOL from V-VOL02 or V-VOL03; only differential data is copied.)
A primary volume can be restored instantly from any V-VOL, because the restore does not involve immediately moving data from the pool to the P-VOL; only pointers must be modified. The data is then copied from the pool to the P-VOL in the background.
If the P-VOL becomes physically damaged, all V-VOLs would be destroyed and a restore is not
possible.
Setting Up Copy-on-Write
Copy-on-Write Operations
Operation steps
• Set up the Copy-on-Write pool
• Set up virtual volumes for Copy-on-Write operations
• Manage pairs
o Create pairs
o Change pair status
Setting Up the Data Pool
1. From the Explorer menu, choose Resources and then Storage Systems. The storage systems subwindow appears.
2. Expand the object tree and then select a storage system under Storage Systems. The storage-system-name subwindow appears.
3. On the Pools tab, select the Pool subtab and click Create Pool. The Create Pool Wizard starts.
The new pool usage threshold has to be set higher than the existing pool usage.
Creating Virtual Volumes
After creating V-VOLs, it is necessary to assign LUNs to them to create copy pairs. Assignment of
LUNs should be done using Device Manager.
Replication Manager provides a wizard for creating V-VOLs and associating them with volume pools.
The primary volume capacity is used for defining the capacity of the new V-VOLs.
If you do not specify a starting address, the first available address is assigned. The addresses of the new volumes are displayed when the V-VOLs are created successfully.
Review the setup information and note the LDEV addresses. You must map these virtual volumes to a storage port (use Hitachi Device Manager) before using them for pair operations.
Managing Pairs
Launch the pair configuration wizard and create a pair with copy type "QS/COW/TI." Specify additional settings for the pair (such as the CoW pool ID and CTG ID).
Instructor Demonstration
Copy-on-Write
• Create pool/V-VOLs
• Create pair
• Split pair
• Resync pair
• Delete pair
Module Summary
Module Review
6. Hitachi Thin Image Operations with Replication Manager
Module Objectives
What is Hitachi Thin Image?
Hitachi Thin Image snapshot software enables rapid copy creation for immediate use in decision support, software testing and development, and data protection operations.
Operations
3. The write completion status is returned to the host after the snapshot data is stored.
2. The write completion status is returned to the host before the snapshot data is stored.
Thin Image Configuration
(Diagram: I/O from the primary host goes to the P-VOL; if data in the P-VOL is modified, the HTI pool keeps the data as it was at 7:00, that is, the snapshot of 7:00.)
Thin Image stores snapshots (duplicates) of your data. You can create up to 1,024 snapshots of data using Thin Image and use this data in open-system volumes. If a data storage failure occurs in your storage system, you can use the snapshot data to restore the data.
Snapshot data is a copy of the data in a Thin Image P-VOL from before an update. When the P-VOL is updated, only the data being overwritten is copied to pool volumes (pool-VOLs) as snapshot data before the update is applied. This processing is referred to as storing snapshot data. Create Thin Image pairs so that you can store snapshot data. The P-VOL of a Thin Image pair is a logical volume; the S-VOL of a Thin Image pair is a V-VOL.
Dynamic Provisioning is required to use Thin Image. Dynamic Provisioning accesses data in pool
volumes by way of V-VOLs, and can handle data in open-system servers such as UNIX and PC
servers. A Dynamic Provisioning license is required. The licensed capacity for Dynamic
Provisioning is calculated based on the capacity of pool-VOLs for Thin Image and Dynamic
Provisioning.
Comparison — Thin Image and ShadowImage
Consistent read and read/write access is available only in split states.
Size of Physical Volume: The P-VOL and the S-VOL have exactly the same size in
ShadowImage Replication software. In Thin Image snapshot software, less disk space is
required for building a V-VOL image since only part of the V-VOL is on the pool and the rest is
still on the primary volume.
Pair Configuration: Up to three S-VOLs (nine when cascaded) can be created for each P-VOL in ShadowImage Replication software. In Thin Image snapshot software there can be up to 1,024 V-VOLs per primary volume.
Restore: A primary volume can only be restored from the corresponding secondary volume in
ShadowImage Replication software. With Thin Image snapshot software the primary volume
can be restored from any snapshot Image (V-VOL).
Simple positioning
• Clones should be positioned for data repurposing and data protection (for example, DR
testing) where performance is a primary concern
• Snapshots should be positioned for data protection (for example, backup) only where
space saving is the primary concern
ShadowImage vs. Thin Image:
• Size of physical volume: P-VOL = S-VOL (ShadowImage); P-VOL ≥ V-VOL (Thin Image)
• Pair configuration: 1:3/9 (ShadowImage); 1:1024 (Thin Image)
Comparison — Copy-on-Write and Thin Image
Pool-VOL types: normal VOLs (RAID-5, RAID-1, and RAID-6 pools), external VOLs (external pools in V01; mixed pools in V02 and later), and DP VOLs.
For every pool type, Copy-on-Write Snapshot uses copy-on-write, while Thin Image uses copy-after-write (CAW).
Note: If the cache write pending rate is 60% or more, Thin Image shifts to copy-on-write mode to slow host writes.
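The note's threshold rule is simple enough to state in code (a sketch only; the function name and the assumption that the rate is expressed as a percentage are ours):

```python
# Sketch of the mode-shift rule: Thin Image normally uses copy-after-write
# (CAW) but falls back to copy-on-write at high cache write pending rates.

CACHE_WRITE_PENDING_LIMIT = 60  # percent, per the note above

def thin_image_copy_mode(cache_write_pending_pct):
    if cache_write_pending_pct >= CACHE_WRITE_PENDING_LIMIT:
        return "copy-on-write"   # slows host writes
    return "copy-after-write"

assert thin_image_copy_mode(30) == "copy-after-write"
assert thin_image_copy_mode(60) == "copy-on-write"
```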
Specifications
P-VOL
Volume type: LUSE volumes can be specified. You cannot specify the following volumes as P-VOLs:
• Volumes used for pools
• Volumes used as S-VOLs of Copy-on-Write pairs or Thin Image pairs
Emulation type: OPEN-V
Maximum number: 16,384
Path definition: Required
Maximum capacity: 4 TB
Note: A LUSE P-VOL must be paired with an S-VOL of the same size and structure. For example, if a LUSE P-VOL is created by combining volumes of 1 GB, 2 GB, and 3 GB in this order, you must specify a LUSE volume with exactly the same size and combination order as the S-VOL.
V-VOL
Volume type: V-VOL. The following volumes cannot be used as snapshot S-VOLs:
• Volumes used as S-VOLs of Copy-on-Write pairs or Thin Image pairs
• Volumes used by a pair or migration plan of another product
Emulation type: OPEN-V
Maximum number: 16,384
Path definition: Required
Note: LUSE S-VOL must be paired with a P-VOL of the same size and structure.
Pools
Pool-VOL capacity: 8 GB to 4 TB
Maximum number of pool-VOLs in a data pool: 1,024 (Hitachi Dynamic Provisioning, Thin Image, and Copy-on-Write share 128 pools of 1,024 volumes each)
Expansion of data pool capacity: Allowed, even if snapshots are using the pool
Deletion of data pool: Allowed; pairs using the pool must be deleted first
Pool usage: Copy-on-Write shares P-VOL and pool capacity with ShadowImage; Thin Image shares P-VOL and pool capacity with Dynamic Provisioning
Notes:
• When internal volumes are used, pool-VOLs with different drive types cannot be used in
the same pool
• When external volumes are used, pool-VOLs with different drive types can be used in the same pool (for best performance, volumes with the same drive type are recommended for a pool)
• Make sure that pool-VOLs consist of LDEVs from multiple parity groups
The following volumes cannot be specified as pool-VOLs for Copy-on-Write and Thin Image:
• Volumes whose volume status is other than Normal or Normal (Quick Format). If a
volume is being blocked or copied, the volume cannot be specified.
• LUSE volumes
• Volumes that are already being used as Copy-on-Write or Thin Image P-VOLs or S-VOLs
• Volumes used as migration plans or pair volumes for another program product
• Volumes with the Protect or Read Only attribute, or the "S-VOL Disable" attribute set in the Data Retention Utility
• System disks
• Command devices
• External pool-VOLs whose cache mode is enabled and external pool-VOLs whose cache
mode is disabled cannot be used in the same data pool
• Volumes that are in different resource groups cannot be used in the same data pool
• CLPR: A data pool cannot contain pool-VOLs that belong to different cache logical partitions
o The CLPR of a parity group to which a pool-VOL belongs cannot be changed
Thin Image Operations
HTI pair creation and management can be done within HCS, using HRpM and HDvM.
Pair management:
1. Create a V-VOL (HRpM)
2. Allocate the V-VOL to the host (HDvM)
3. Create an HTI pair, selecting the HTI pool in the Edit Task window (HRpM)
4. Monitor the pair (HRpM)
5. Manage the pair and change pair status (HRpM)
Pool management:
1. Create an HTI pool (HRpM)
2. Monitor HTI pool usage (HRpM)
3. Expand the pool (HRpM)
Module Summary
Module Review
7. Hitachi TrueCopy Operations with Replication Manager
Module Objectives
Hitachi TrueCopy Benefits
Provides vital recovery management capabilities that safeguard information up to the point of
an outage in the event a primary site is damaged. As a result, TrueCopy synchronous helps
minimize the impact of business downtime. Data recovery processes traditionally span several
days and involve many administrators. Using a service-based implementation, TrueCopy
replaces these manual and time-consuming methods with automated copy processes.
Consequently, recovery time is significantly reduced, enabling normal business operations to
resume in a matter of minutes, not days.
Provides host-independent, vital recovery management capabilities that minimize the impact of downtime and ensure nonstop access to your information in the event of a disaster or during scheduled downtime. TrueCopy replicates data between Hitachi storage systems, either locally within the same data center or remotely between dispersed locations.
Is a remote data replication solution for both mainframe and open systems environments. TrueCopy synchronous replication is accomplished by continuously sending data copies from one or more primary Hitachi storage systems to one or more secondary systems, located either in the same data center or at a remote site.
TrueCopy synchronous is typically used for applications with the most stringent recovery-point
objectives where loss of data cannot be tolerated. TrueCopy synchronous is normally deployed
over distances of less than 100 kilometers (~60 miles), although it may be possible to extend
that distance to 300 kilometers (~190 miles).
Business continuity
• Remote replication with potential for zero data loss for synchronous distances
• Reduce frequency and duration of planned outages
• Disaster recovery testing
• Rapid recovery versus tape restore
• Government regulation compliance
Business Continuity
• Replaces slow, labor-intensive, and expensive tape-based replication and retrievals with
rapid, automated processes.
• Performs backups on a secondary system while your business operates at full capacity.
Hitachi TrueCopy Synchronous Benefits
LU = logical unit
Remote Replication Solutions
TrueCopy synchronous
• Host writes to remote cache, thus automatically maintaining data consistency
• Optional consistency groups (from CCI) to get time consistent split
• Remote delay inserted at end of channel program by microcode. Only one turnaround required
• Exclude temporary data
• Turnaround times become unacceptable at longer distances
• Appreciable delay at zero distance:
500 to 700 µsec delay within control unit
Total around 900 µsec at zero distance
• RPO near zero, RTO near zero
Internal Operations of Synchronous Replication
Issues:
1. Performance impact (waiting for the acknowledgement)
2. Distance limitation (up to 300 km)
3. Database management system dependent writes (keeping the database in sync with the log file)
Benefits:
1. Highest degree of data currency
2. No loss of committed data
3. Fast data recovery (DR)
4. Simpler to configure versus asynchronous
This type of remote copy solution is implemented entirely in the Hitachi enterprise storage system's microprogram and hardware. It is transparent to the host applications.
Operation
• The primary host server issues a write I/O to the primary control unit.
• The primary control unit sends the data to the secondary control unit.
• The secondary control unit sends an acknowledgement back to the primary control unit.
• The primary control unit sends status end to the host server. The application continues
execution.
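The four-step sequence above can be traced with a small simulation (no real MCU or RCU involved; the function and variable names are illustrative):

```python
# Simulation of a synchronous write: the host sees "complete" only after
# the secondary control unit has acknowledged the copy.

def synchronous_write(mcu_volume, rcu_volume, block, data):
    mcu_volume[block] = data           # 1. host write lands in the MCU
    rcu_volume[block] = data           # 2. MCU sends the data to the RCU
    ack = rcu_volume[block] == data    # 3. RCU acknowledges the copy
    return ack                         # 4. MCU returns status to the host

p_vol, s_vol = {}, {}
assert synchronous_write(p_vol, s_vol, 0, "txn-1")
assert p_vol == s_vol   # committed data is never ahead of the S-VOL
```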
Pros
• No data loss.
Cons
• Short distances.
• This type of solution guarantees that each transaction was written to disk, but the performance of the user's application may decrease as the distance to the remote device increases. Depending on the workload, this could impair transaction-intensive applications: because the application waits for confirmation that each transaction has been written to disk, the time to receive a response increases with the distance to the storage device.
• The risk in this situation is that although the confidence factor is high that the remote
storage device has data that is consistent, it is probable that the number of transactions
that can be processed per second will decrease. The rate of decrease will depend
directly on the distance to the remote location and how fast the device can turn around
an acknowledgement.
Remote Replication Configurations
Hitachi offers powerful solutions for business continuity as well as for disaster recovery testing
while maintaining complete disaster recovery protection. The example above illustrates a typical
configuration that uses TrueCopy to maintain an exact, I/O consistent data volume at the
remote site. It also uses Hitachi ShadowImage Heterogeneous Replication to create a separate
copy for disaster recovery testing. This enables the organization to test its disaster recovery
plan with current data.
Note that TrueCopy Synchronous has no inherent grouping capability. Pair operations occur
individually.
TrueCopy and ShadowImage Together
(Diagram: the TrueCopy P-VOL and S-VOL each also serve as a ShadowImage P-VOL with a local ShadowImage S-VOL.)
Disaster recovery
• Test your remote disaster recovery plan nondisruptively with current
production data
(Diagram: remote copy from the local data center to the remote site, with ShadowImage copies cascaded from both the TrueCopy P-VOL and the TrueCopy S-VOL.)
Decision support
(Diagram: a ShadowImage copy cascaded from the TrueCopy S-VOL provides a volume for decision support.)
TrueCopy Specifications
Not dependent on RAID levels and HDD types (different levels or types
can coexist)
TrueCopy defines the MCU and RCU relationship either at the CU level or CU free. If using the CU level, each LCU must have a unique four-digit identifier called the storage system ID (SSID). The SSIDs are assigned at installation.
Configure TrueCopy:
• Links
• Add RCU
• Create pairs
Fibre Channel Links
TrueCopy over Fibre Channel can be configured for the following types of connections:
• Direct connection
Two systems are directly connected
• Switch connection
Two systems are connected using switches
Initiator ports cannot be connected to a host. Hard-zone switches can be added, if necessary, to prevent the
hosts from accessing initiator ports
While RCU-target ports can be connected to hosts via a Fibre Channel switch, it is not recommended
Other restrictions:
• LUNs cannot be mapped to initiator ports
• The topology for the Initiator and RCU target ports must be the same
For example, if the initiator port is set to Fabric=OFF, Fibre Channel Arbitrated Loop (FC-AL), then the RCU target should also be set to Fabric=OFF, FC-AL
The major components of a TrueCopy operation using Fibre Channel interface connections are:
• TrueCopy running on the storage system at the primary (production) site and on the
storage system at the secondary (recovery) site
o P-VOLs (production volumes) at primary site. The MCUs contain the P-VOLs,
containing the original data, and are online to hosts.
o S-VOLs (secondary volumes) at secondary site. The RCUs contain the S-VOLs,
which are the synchronous and asynchronous copies of the P-VOLs. S-VOLs can
be online to hosts only when pairs are SPLIT, SUSPENDED, or DELETED.
TrueCopy Synchronous supports one to N and N to one remote copy connections (N is less than
or equal to 4). One MCU can be connected to as many as four RCUs, and one RCU can be
connected to as many as four MCUs.
Note: Hitachi Data Systems strongly recommends that you establish at least two independent
remote copy connections (one per cluster) between each MCU and RCU to provide
hardware redundancy.
Pair Operations
Initial Copy
Operation
TrueCopy synchronous operations involve the primary (main) systems and the secondary
(remote) systems (MCUs and RCUs). The MCUs contain the TrueCopy primary volumes (P-
VOLs), which contain the original data and are online to the host(s). The RCUs contain the
TrueCopy synchronous secondary volumes (S-VOLs), which are the synchronous copies of the
P-VOLs.
• Initial Copy: All the content from the primary volume is copied to the secondary
volume. As soon as the initial copy starts, the secondary volume becomes unavailable
for any type of host I/O.
• Update Copy
Differential Bitmap Function
The differential data (updated by write I/Os during split or suspension) between the primary
data volume and the secondary data volume is stored in each bitmap. When a split/suspended
pair is resumed (pairresync), the primary storage system merges the primary data volume and
secondary data volume bitmaps, and the differential data is copied to the secondary data
volume.
P-VOL | X | X | X | | | X | | X | | | X | | | | . . . .
S-VOL | X | X | X | | | X | | X | | | X | | | | . . . .
This diagram illustrates the usage of differential bitmaps, after a paircreate command is issued.
While Initial Copy is operating, updates can occur to primary volumes. The P-VOL bitmaps
denote any changed tracks/cylinders that occur. Differential bitmapping for Universal Replicator,
TrueCopy Remote Replication, ShadowImage Replication, Hitachi Copy-on-Write Snapshot, and
Hitachi Thin Image work in exactly the same way.
Pairsplit command
• While pairs are split, both P-VOL and S-VOL differential bitmaps may be
active
• Changed tracks on P-VOL
P-VOL | X | | X | | | | X | | X | | | | | | | | . . . .
S-VOL | X | X | | | | | | | | | | | X | | | . . . .
This diagram illustrates the usage of differential bitmaps after a pair is suspended. While
suspended, updates can occur to the P-VOL. Changes can also occur on S-VOL if it is mounted
as Write Enabled. The bitmaps denote any changed tracks/cylinders while the pair is suspended.
Resynchronization
• Changes that have occurred while pair is split
• Merge
P-VOL |X| |X|X| | |X| |X| |X| | | | |. . . .
This diagram illustrates the usage of differential bitmaps after a pair is resynchronized.
While suspended, updates may occur to both primary and secondary volumes. The bitmaps
denote any changed tracks/cylinders while the pair is suspended. When the resync command
is issued, the S-VOL differential bitmaps are merged into the P-VOL differential bitmaps. Then
all of the changed data is copied from the P-VOL to the S-VOL, overwriting any data that was
changed on the S-VOL.
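The merge-and-copy step can be sketched in a few lines of Python (a simplified model for illustration only; the function and variable names are hypothetical, and real bitmaps track cylinders/tracks in hardware):

```python
def resync(pvol_bitmap, svol_bitmap):
    """Model of pairresync: merge the S-VOL differential bitmap into the
    P-VOL bitmap, then return the track indexes to copy P-VOL -> S-VOL."""
    merged = [p or s for p, s in zip(pvol_bitmap, svol_bitmap)]
    # Every marked track is copied from the P-VOL to the S-VOL, overwriting
    # any changes that were made on the S-VOL while the pair was split.
    tracks_to_copy = [i for i, dirty in enumerate(merged) if dirty]
    return merged, tracks_to_copy

# Tracks changed on the P-VOL and on the write-enabled S-VOL during the split:
pvol = [1, 0, 1, 0, 0, 0, 1]
svol = [1, 1, 0, 0, 0, 0, 0]
merged, to_copy = resync(pvol, svol)
print(to_copy)  # [0, 1, 2, 6]
```

Because the merge is a bitwise OR, a track dirtied on either volume is recopied from the P-VOL, which is why S-VOL changes are always overwritten.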
Page 7-16
Advanced Pair Operations and Recovery Scenarios
Takeback
• When the damaged primary site is recovered, the takeback operation is used
to immediately switch operations from the secondary site back to the primary
site
In addition to the basic pair operations (such as split and resync), the Change Pair Status
Wizard supports several advanced operations for open system pairs. The relationship between
the basic and advanced operations can be understood in terms of two scenarios:
Page 7-17
Hitachi Open Remote Copy (HORC) Takeover Support
Splits the pair and makes the S-VOLs available for application use.
(HRpM issues the "horctakeover –S" command)
A fence level (data or status) option can be specified when creating pairs. This option
guarantees that the P-VOL and S-VOL contain the same data even when a failure has occurred.
If this option is enabled, application I/O to the P-VOL returns an error if the storage system fails
to write to the S-VOLs.
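The fence behavior can be modeled as a sketch (hypothetical Python, not the actual microcode or CCI interface; the class and attribute names are invented for illustration):

```python
class TrueCopyPair:
    """Toy model of a synchronous pair with a fence level."""

    def __init__(self, fence_level="never"):
        self.fence_level = fence_level
        self.pvol = {}
        self.svol = {}
        self.link_up = True

    def write(self, track, data):
        # Synchronous replication: try to mirror the write to the S-VOL first.
        if self.link_up:
            self.svol[track] = data
        elif self.fence_level in ("data", "status"):
            # Fenced: reject the host write so P-VOL and S-VOL never diverge.
            raise IOError("write rejected: S-VOL update failed (fenced)")
        self.pvol[track] = data

pair = TrueCopyPair(fence_level="data")
pair.write(0, "a")          # mirrored to both volumes
pair.link_up = False
try:
    pair.write(1, "b")      # link down: the host write is rejected
except IOError as err:
    print(err)
```

With fence level "never" the second write would succeed on the P-VOL only, and the copies would silently diverge; the fence trades availability for guaranteed identical data.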
Page 7-18
Hitachi Open Remote Copy (HORC) Takeover Support
The swap operation performs two steps (suspend and reverse). If the P-VOLs have any issues,
the operation fails at the first (suspend) step and the pair status remains PAIR.
In a maintenance scenario, keeping the PAIR status is preferable to forcefully making the
S-VOLs available to applications. (The takeover operation forcefully splits pairs when there is an
error on the P-VOL, in order to make the S-VOLs available.)
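The difference between swap and takeover described above can be sketched as a toy state machine (illustrative Python; the status names PAIR, PSUS, and SSWS follow CCI conventions, but the logic is simplified and the dict layout is invented):

```python
def swap(pair):
    """Two-step swap: suspend, then reverse the copy direction.
    Fails at the suspend step (pair stays PAIR) if the P-VOL has errors."""
    if pair["pvol_error"]:
        return pair["status"]          # suspend fails; status remains PAIR
    pair["status"] = "PSUS"            # step 1: suspend
    pair["direction"] = tuple(reversed(pair["direction"]))  # step 2: reverse
    pair["status"] = "PAIR"
    return pair["status"]

def takeover(pair):
    """Takeover forcefully splits the pair on P-VOL error so the S-VOL
    becomes usable by applications; otherwise it behaves like swap."""
    if pair["pvol_error"]:
        pair["status"] = "SSWS"        # forced split; S-VOL writable
    else:
        swap(pair)
    return pair["status"]

pair = {"status": "PAIR", "direction": ("local", "remote"), "pvol_error": True}
print(swap(pair))      # PAIR: the suspend step fails, so nothing changes
print(takeover(pair))  # SSWS: forced split makes the S-VOL usable
```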
Page 7-19
Hitachi Open Remote Copy (HORC) Takeover Support
Page 7-20
Hitachi Open Remote Copy (HORC) Takeover Support
Advanced Operations
Advanced options for performing the takeover, swap, force-split, and takeover-recovery
operations. Existing pair operations (split, resync, and so on) are categorized as Basic
operations.
Page 7-21
Hitachi Open Remote Copy (HORC) Takeover Support
Tasks view
The result of a Change Pair Status Wizard operation is registered as a task. Users can confirm
its status in the Tasks view.
After the task completes, users can confirm the updated configuration or status in the copy
group window.
Additional notes
• If copy direction is reversed by HORC takeover operations, the following
settings for the original copy groups become unavailable:
Alert
My Copy Group
Scheduled execution of pairs
• If the above settings are required while the copy direction is reversed,
reconfigure them
• When the copy direction is restored to the original, the above settings for
the original copy groups become available again
Page 7-22
Pair Operations
P-VOL fence level (sync only): Select the fence level for the new pairs. Default – Never. The
fence level determines the conditions under which the MCU will reject write operations to the P-
VOL.
Page 7-23
Pair Operations
Select the initial copy options for the new pairs. These options cannot be changed after a pair
has been added:
• Initial Copy:
o Default = Entire
WARNING: The user must ensure that the P-VOL and S-VOL are already identical
when using the No Copy setting.
• Initial Copy Pace: Desired number of tracks to be copied at one time (1-15) during
the initial copy operation. Default = 15.
Page 7-24
TrueCopy Operations
Page 7-25
Setting Up Remote Paths
Remote Paths are port connections between local and remote storage systems. These logical
routes are used by remote copy pairs for copying data from a P-VOL to an S-VOL.
Replication Manager allows remote path configuration for different replication technologies. You
must set up a remote path before you can use any of the following volume replication functions:
• TrueCopy
o For enterprise-class storage systems: Based on the copy direction, you specify
the port for the local storage system CU (MCU) and the port for the remote
storage system CU (RCU). Initiator and RCU target are set automatically as the
attributes of the specified ports.
o You can specify either CU free (recommended) (to connect only from the local
storage system to a remote storage system via a dynamically assigned MCU-RCU
pair) or CU specific (to connect each path via a specified MCU and RCU).
• Universal Replicator
o Using CU free, you can specify the port for the local storage system and the port
for the remote storage system. You must set paths for both directions. Initiator
and RCU target are set automatically as the attributes of the specified ports.
Page 7-26
Setting Up Remote Paths
The Select reverse direction path checkbox should be selected only if reverse links are set
up between the two sites.
Page 7-27
Setting Up Remote Paths
Remote paths
Page 7-28
Managing Pairs
Page 7-29
Managing Pairs
Page 7-30
Managing Pairs
Advanced Operations
Page 7-31
Instructor Demonstration
Hitachi TrueCopy
• Set up remote path
• Create pair
• Split pair
• Resync pair
• Takeover demonstration
Page 7-32
Module Summary
Page 7-33
Module Review
Page 7-34
8. Hitachi Universal Replicator Operations
with Replication Manager
Module Objectives
Page 8-1
Hitachi Universal Replicator Operations with Replication Manager
Hitachi Universal Replicator Overview
Hitachi Universal Replicator delivers a simplified asynchronous data replication solution for
enterprise storage. Universal Replicator is designed for organizations with demanding
heterogeneous data replication needs for business continuity or improved IT operations. HUR
delivers the enterprise-class performance associated with storage system-based replication
while providing resilient business continuity without the need for redundant servers or
replication appliances.
Page 8-2
Hitachi Universal Replicator Overview
HUR benefits
• Ensures business continuity
• Optimizes resource use (lowers the cache and resource consumption on
production and primary storage systems)
• Improves bandwidth utilization and simplifies bandwidth planning
• Improves operational efficiency and resiliency (mitigates the impact of link
failures between sites)
• Provides more flexibility in trading off between Recovery Point Objective and
cost
• Implements advanced multi-data center support more easily
• Moves data among levels of tiered storage systems more easily
[Figure: asynchronous copy flow. 1. Write I/O from the primary host to the P-VOL; 2. Write
complete returned to the host; 3. Asynchronous remote copy from the primary JNL-VOL to the
secondary JNL-VOL; 4. Remote copy complete at the S-VOL]
The host I/O process completes immediately after the write data is stored in the cache memory
of the primary storage system's main disk control unit (MCU). The data is then asynchronously
copied to the secondary storage system's remote disk control unit (RCU).
The MCU stores data to be transferred in the journal cache, to be destaged to the journal
volume in the event of a link failure.
Universal Replicator maintains the consistency of copied data by preserving write order in the
copy process. To achieve this, HUR attaches write-order information to the data during the
copy process.
Page 8-3
Hitachi Universal Replicator Hardware
• Because CHA ports are configured in pairs, a total of 8 CHA ports is reserved
• Initiator > RCU target: two initiator ports on each system for redundancy
• RCU target > initiator: two RCU target ports on each system for redundancy
Page 8-4
Hitachi Universal Replicator Components
Journal group
• Consists of data volumes and journal volumes
• Maintains volume consistency by operating on multiple data volumes with one command
• Master journal group in the MCU contains P-VOLs and master journal volumes
• Restore journal group in the RCU contains S-VOLs and restore journal volumes
Journal volumes
• Store differential data
Page 8-5
Hitachi Universal Replicator Specifications
Page 8-6
Hitachi Universal Replicator Usage
Page 8-7
Base Journal (Initial Copy)
[Figure: Base Journal (Initial Copy). Pointers to the data volume are stored in the master
journal volume, and a write sequence number is assigned inside the metadata. Data is sent
directly from the P-VOL in the primary subsystem to the restore journal volume in the
secondary subsystem, which restores it to the S-VOL.]
Upon initiation of the paircreate, the primary site stores pointers to the data in the P-VOL
(primary data volume) as a base journal
• For the Base Journal, only metadata is stored in the journal volume
• The data in the P-VOL is not copied to the Master Journal Volume
The base journal data is obtained by the RCU repeatedly sending read commands to the MCU.
The data in the secondary data volume synchronizes with the data in the primary data volume
via pointers. This operation is the same as Initial Copy in TrueCopy Remote Replication. Initial
Copy is complete when the MCU informs the RCU that the highest sequence number has been
sent.
Page 8-8
Update Journal (Update Copy)
[Figure: update copy flow. The RCU obtains updated journal data; the MCU sends the update
copy; the journal is restored at the secondary site.]
Journal copy is the function that copies the data in the primary journal volumes (M-JNL) in the
MCU to the secondary journal volumes (R-JNL) at the secondary site. The secondary storage
system issues read journal commands to the primary storage system to request the transfer of
the journal data stored in the primary journal volumes after the pair create or pair resync
operation. The MCU transfers journal data from its journal volumes to the RCU if it has journal
data that has not yet been sent; otherwise, it responds with information indicating that it has
no such journal data. The RCU stores the transferred journal data in its secondary journal
volumes. Read journal commands are issued repeatedly and regularly from the RCU to the
MCU. After the data is restored, the RCU reports the highest restored journal sequence number
with the next read journal command, and the MCU discards the journal data up to that
sequence number.
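The pull-based journal copy and the discard-on-acknowledgement behavior can be sketched as follows (a simplified Python model; the class names and the idea of returning the whole pending list in one response are illustrative, not the actual protocol):

```python
from collections import deque

class MCU:
    """Primary side: queues journal entries until the RCU acknowledges them."""

    def __init__(self):
        self.journal = deque()      # (sequence_number, data) pending in M-JNL
        self.next_seq = 1

    def host_write(self, data):
        self.journal.append((self.next_seq, data))
        self.next_seq += 1

    def read_journal(self, restored_up_to):
        # Discard journal data the RCU reports as restored, then send the rest.
        while self.journal and self.journal[0][0] <= restored_up_to:
            self.journal.popleft()
        return list(self.journal)

class RCU:
    """Secondary side: repeatedly pulls journals, restores them, and acks."""

    def __init__(self, mcu):
        self.mcu = mcu
        self.restored = 0           # highest restored sequence number
        self.svol = []

    def poll(self):
        for seq, data in self.mcu.read_journal(self.restored):
            self.svol.append(data)
            self.restored = seq
```

A first `poll()` restores pending writes; only the next `poll()` carries the acknowledgement, at which point the MCU frees the acknowledged journal entries.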
Page 8-9
Journal Restore
Journal restore is the function of copying the data in the restore/secondary journal volume to
the S-VOL at the secondary site. The data in the restore/secondary journal volume is copied to
the secondary data volume according to the write sequence number. This ensures the write
sequence consistency between the primary and secondary data volumes. After the journal data
is restored to the secondary data volume, the journal data is discarded at the secondary site.
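The write-order guarantee can be illustrated with a small sketch (hypothetical Python; the `(sequence, track, data)` entry layout is invented for illustration):

```python
def journal_restore(entries):
    """Apply restore-journal entries to the S-VOL strictly in write order,
    regardless of the order in which they arrived over the link."""
    svol = {}
    for seq, track, data in sorted(entries):   # sort by sequence number
        svol[track] = data                     # later writes win, as on the P-VOL
    return svol

# Out-of-order arrival: write 3 overwrote track 5 after write 1.
arrived = [(3, 5, "new"), (1, 5, "old"), (2, 7, "x")]
print(journal_restore(arrived))  # {5: 'new', 7: 'x'}
```

Restoring by sequence number rather than arrival order is what keeps the S-VOL a crash-consistent image of the P-VOL at some point in time.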
Allowable configurations
[Figure: allowable MCU/RCU combinations of Universal Storage Platform systems for the P-VOL
and S-VOL]
Page 8-10
Three Data Center Configuration
TrueCopy Synchronous
Three data center strategies combine in-region and out-of-region replication to provide the
strongest protection: fast recovery and data currency for local site failures, combined with good
protection from regional disasters. However, multiple data centers and data copies increase
costs, so robust 3DC strategies have typically been limited to large organizations with extremely
critical business continuity needs.
The above figure illustrates a 3DC multi-target configuration, in which data is replicated to two
remote sites in parallel. TrueCopy synchronous replication maintains a current copy of the
production data at an in-region recovery data center. At the same time, the Universal Storage
Platform at the primary site replicates the data to an out-of-region recovery site, using Universal
Replicator asynchronous replication across a separate replication network.
In case of production site failure, processing can resume at the in-region recovery site, using a
current TrueCopy replica of production data. The in-region hot site can also support planned
failover when needed for maintenance, upgrades, or business continuity testing. Meanwhile,
Universal Replicator provides ongoing replication to the out-of-region site, maintaining robust
business continuity protection. In case of a regional disaster, the out-of-region data center can
recover rapidly with a slightly older but fully consistent copy of production data.
Page 8-11
Three Data Center Configuration
The above figure illustrates a 3DC cascade configuration that uses synchronous TrueCopy
Remote Replication to maintain a current copy of the production data at an in-region data
center. As noted earlier, 3DC cascade configurations make sense when the in-region hot site
provides processing capabilities for recovery.
The storage system at the in-region site also cascades the data to an out-of-region recovery
site, using Universal Replicator asynchronous replication. In comparison with other
asynchronous replication technologies, Universal Replicator does not require an additional point-
in-time copy of the data volume at the intermediate site. Universal Replicator stages the data to
the journal disk, which is relatively small compared with a complete data copy. This feature
saves physical disk space and reduces the cost of the 3DC configuration.
Page 8-12
Hitachi Universal Replicator Operations
Managing pairs
Page 8-13
Setting Up Remote Paths
Remote paths are port connections between local and remote storage systems. These logical routes
are used by remote copy pairs for copying data from a P-VOL to an S-VOL.
Replication Manager allows remote path configuration for different replication technologies. You
must set up a remote path before you can use any of the following volume replication functions:
• TrueCopy
o For enterprise-class storage systems: Based on the copy direction, you specify the
port for the local storage system CU (MCU) and the port for the remote storage
system CU (RCU)
o Initiator and RCU target are set automatically as the attributes of the specified ports
o You can specify either CU free (recommended) (to connect only from the local
storage system to a remote storage system via a dynamically assigned MCU-RCU
pair) or CU specific (to connect each path via a specified MCU and RCU)
• Universal Replicator
o Using CU free, you can specify the port for the local storage system and the port for
the remote storage system
o Initiator and RCU target are set automatically as the attributes of the specified ports
Page 8-14
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Remote Paths
Page 8-15
Setting Up Journal Groups
Universal Replicator uses journal volumes as volume copy buffers. Journal groups keep the
journal data for asynchronous data transfer and must be set up before creating Universal
Replicator volume pairs. Journal groups must be set in each storage system on both the
primary and secondary sites. The journal volume and the primary volume at the primary site,
and the journal volume and the secondary volume at the secondary site, are defined as journal
groups.
Page 8-16
Setting Up Journal Groups
Page 8-17
Setting Up Journal Groups
Inflow control: Allows you to specify whether to restrict inflow of update I/Os to the journal
volume (in other words, whether to delay response to the hosts)
Note: If Yes is selected and the metadata or the journal data is full, the update I/Os may stop
(Journal Groups suspended).
Data overflow watch: Allows you to specify the time (in seconds) for monitoring whether
metadata and journal data are full; this value must be within the range of 0 to 600 seconds
Note: If Inflow Control is No, Data Overflow Watch does not take effect and does not
display anything.
Path Watch Time: Allows you to specify the interval from when a path gets blocked to when a
mirror gets split (suspended); This value must be within the range of 1 to 60 minutes
Note: Make sure that the same interval is set to both the master and restore journal groups in
the same mirror, unless otherwise required. If the interval differs between the master
and restore journal groups, these journal groups will not be suspended simultaneously.
For example, if the interval for the master journal group is 5 minutes and the interval for
the restore journal group is 60 minutes, the master journal group will be suspended in 5
minutes after a path gets blocked, and the restore journal group will be suspended in 60
minutes after a path gets blocked.
Page 8-18
Setting Up Journal Groups
Caution: By default, the factory enables (turns ON) SVP mode 449, disabling the path watch
time option. If you’d like to enable the path watch time option, please disable mode
449 (turn it OFF).
Note: If you want to split a mirror (suspend) immediately after a path becomes blocked, please
disable SVP modes 448 and 449 (turn OFF).
Forward path watch time: Allows you to specify whether to forward the Path Watch Time
value of the master journal group to the restore journal group. If the Path Watch Time value is
forwarded, the two journal groups will have the same Path Watch Time value.
• Yes: The Path Watch Time value will be forwarded to the restore journal group
• No: The Path Watch Time value will not be forwarded to the restore journal group; No
is the default
• Blank: The current setting of Forward Path Watch Time will remain unchanged
Use of Cache: Allows you to specify whether to store journal data in the restore journal group
into the cache
Note: When there is insufficient space in the cache, journal data will also be stored into
the journal volume
• Not Use: Journal data will not be stored into the cache
Caution: This setting does not take effect on master journal groups. However, if the
horctakeover option is used to change a master journal group into a restore
journal group, this setting will take effect on the journal group.
Speed of Line: Allows you to specify the line speed of data transfer; The unit is Mb/sec
(megabits per second)
Caution: This setting does not take effect on master journal groups. However, if the
horctakeover option is used to change a master journal group into a restore journal
group, this setting will take effect on the journal group.
Delta resync Failure: Allows you to specify the processing that would take place when delta
resync operation cannot be performed
Page 8-19
Setting Up Journal Groups
• Entire: All the data in the primary data volume will be copied to the remote data volume
when the delta resync operation cannot be performed; the default is Entire
• None: No processing will take place when the delta resync operation cannot be performed
o If delta resync pairs are desired, they will have to be created manually
Page 8-20
Setting Up Journal Groups
Confirming settings
Page 8-21
Managing Pairs
Page 8-22
Managing Pairs (Continued)
Page 8-23
Instructor Demonstration
Page 8-24
Module Summary
Page 8-25
Module Review
Page 8-26
9. Hitachi Replication Manager Monitoring
Operations
Module Objectives
Page 9-1
Hitachi Replication Manager Monitoring Operations
Monitoring Copy Operations
Alerts can be generated when a monitored target, such as a copy pair or buffer,
satisfies a preset condition
Alert notifications are useful for enabling a quick response to a hardware failure or for
determining the cause of a degradation in transfer performance. They are also useful for
preventing errors due to buffer overflow and insufficient copy licenses, thereby facilitating the
continuity of normal operation. Because you can receive alerts by email or SNMP traps, you can
also monitor the replication environment while you are logged out of Replication Manager.
Page 9-2
Monitoring Copy Operations
You can monitor copy pair configurations in multiple ways using Replication Manager. You can
use a tree view to check the configuration definition file for CCI that is created by Replication
Manager or other products, or to check the copy group definition file for Business Continuity
Manager or Mainframe Agent. You can limit the range of copy pairs being monitored to those of
a host or storage system, and also check the configuration of related copy pairs. You can also
check copy pair configurations from a copy group perspective.
You can configure pair status monitoring for hosts, storage systems, copy groups or copy pairs
to detect an unexpected pair status. When a pair status for which you require notification is
detected, Replication Manager can be configured to alert you with an email message or an
SNMP trap. Replication Manager also detects pair statuses based on the periodic monitoring of
pair statuses.
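A minimal sketch of this kind of status monitoring, assuming a simple list-of-dicts pair inventory (the function and field names are hypothetical, not the Replication Manager API):

```python
def check_pair_status(pairs, watched, notify):
    """Periodic monitoring sketch: call notify() for every pair whose
    current status is one the user asked to be alerted on."""
    alerts = []
    for pair in pairs:
        if pair["status"] in watched:
            msg = f"pair {pair['name']}: status {pair['status']}"
            notify(msg)                 # e.g. hand off to email or SNMP trap
            alerts.append(msg)
    return alerts

sent = []
check_pair_status(
    [{"name": "CG01/p0", "status": "PAIR"},
     {"name": "CG01/p1", "status": "PSUE"}],   # PSUE: suspended by error
    watched={"PSUE", "SMPL"},
    notify=sent.append,
)
print(sent)  # ['pair CG01/p1: status PSUE']
```

In a real deployment the `notify` callback would be the email or SNMP delivery path, and the function would run on the periodic monitoring interval.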
Page 9-3
Monitoring Copy Operations
You can configure threshold monitoring for asynchronous remote copy metrics to detect an
unexpected overflow of preset thresholds. You can display the transfer delay state between the
primary and secondary volumes for each copy group. This feature of Replication Manager is
used to monitor asynchronous remote copying by using Hitachi TrueCopy Extended Distance,
and Hitachi Universal Replicator. The transfer delay state of remote copies displays these types
of information:
• Usage of side file/journal
• Write delay time (C/T delta)
• Usage rate of pool capacity
• You can monitor the progress of replica creation using the summary displayed in the
Applications and Servers subwindows
• You can receive notification through email or SNMP traps on replica monitoring
parameters
Page 9-4
Monitoring Copy Operations
You can monitor the usage ratio of buffers (pools and journal groups) and receive alert
notification. You can get notification by way of email or SNMP traps based on the predefined
thresholds. If you are an administrator, you can add volumes to the buffers using Replication
Manager.
You can monitor the used capacity and copy license usage percentage for each copy product in
complex replication environments. You can configure alerts to send notifications when copy
license usage reaches a particular threshold or the licensed capacity has been reached.
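The threshold check can be sketched like this (illustrative Python; the product names, capacities, and 80% default are made up, and real license accounting is done by the storage system):

```python
def license_alerts(usage, threshold_pct=80.0):
    """Flag copy products whose used capacity crosses a threshold
    percentage of the licensed capacity."""
    alerts = []
    for product, (used_tb, licensed_tb) in usage.items():
        pct = 100.0 * used_tb / licensed_tb
        if pct >= threshold_pct:
            alerts.append(f"{product}: {pct:.0f}% of licensed capacity used")
    return alerts

usage = {"TrueCopy": (45, 50), "Universal Replicator": (20, 100)}
print(license_alerts(usage))  # ['TrueCopy: 90% of licensed capacity used']
```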
Page 9-5
Setting Up Alerts
Page 9-6
Setting Up Alerts
Page 9-7
Create Alert Setting Wizard
Page 9-8
Create Alert Setting Wizard
Page 9-9
Create Alert Setting Wizard
Page 9-10
Alert Status
Page 9-11
Alert Status
Page 9-12
Instructor Demonstration
Monitoring options
• Dashboard
• Copy pair status
• License status
• Alerts
Page 9-13
Module Summary
Module Review
Page 9-14
10. Application Replicas
Module Objectives
Page 10-1
Application Replicas
As with copy pair management, the creation and management of application replicas is
organized around tasks and storage assets.
Page 10-2
Application Backup and Restore Features
Simpler setup
• Simple deployment – Application agent
The installer deploys the required components for replica management
It can be easily downloaded from the HCS GUI to application servers
• Simple agent setup: HRpM hides complex parameters that users normally
do not need to know about
Consolidated management
• Multiple server management: HRpM allows users to manage multiple servers
from a single point of view
• Integration with pair management: The Pair Management button easily navigates
users to the required pair configurations
Page 10-3
Application Backup and Restore Features
Enhanced monitoring
• Data protection status: An intuitive icon shows the summary status, so users can
easily identify possible issues
• Email notification: Errors can be reported by email for immediate action
[Screenshot: protection status for hosts and storage groups/instances]
Page 10-4
Components
[Diagram: the Web Client connects to the HRpM server and HDvM server; each application
server and backup/import server runs an Application Agent and CCI (port 24041), along with
an HDvM agent]
Servers
• Storage management server: Provides the management interface
• Application server
Mailbox server of MS-Exchange
Database server of MS-SQL Server
• Backup/import server: Server that mounts the S-VOL
Software
• Application agent: Executes replica operations by communicating with CCI
Pair Operations
(Pair Configuration Wizard, Change Pair Status Wizard)
Replica Operations and Agent Settings
(Create Replica Wizard, Restore Replica Wizard, Setup Agent Dialog)
Page 10-5
System Configuration for Remote Copy
The HRpM application agent also supports creating a replica at the remote site
• Remote copy support:
Hitachi TrueCopy synchronous — MS-SQL Server/MS-Exchange
Hitachi Universal Replicator — MS-SQL Server
• An import server is required on the remote site (MS-Exchange only)
[Diagram: the Web Client and HRpM server at the local site; an HDvM server, Application
Agent, and CCI at each site, connected by a TrueCopy Sync link]
HDvM server on remote site is not mandatory for replica operation, though it is required for performing pair configuration on remote site.
Features:
• Discovering application agent
• Creating replica
• Restoring replica
• Mounting replica
Page 10-6
Discovering Application Agent
Page 10-7
Creating Replicas
Page 10-8
Create Replica Wizard
The result of the Create Replica Wizard is registered as a task, and the Task View
provides the details of the task execution
Restoring Replicas
The Replica History tab shows the list of created replicas (for example, 2009/10/15 02:00,
2009/10/16 02:00, and 2009/10/17 02:00)
Page 10-9
Restoring Replica
Page 10-10
Module Summary
Module Review
Page 10-11
Your Next Steps
@HDSAcademy: http://www.twitter.com/HDSAcademy
Check your progress in the Learning Path.
Certification: http://www.hds.com/services/education/certification
Learning Paths:
Page 10-12
Communicating in a Virtual Classroom:
Tools and Features
Virtual Classroom Basics
This section covers the basic functions available when communicating in a virtual classroom.
Chat
Q&A
Feedback Options
• Raise Hand
• Yes/No
• Emoticons
Markup Tools
• Drawing Tools
• Text Tool
Page V-1
Reminders: Intercall Call-Back Teleconference
Page V-2
Feedback Features — Try Them
Page V-3
Intercall (WebEx) Technical Support
Call 800.374.1852
Page V-4
WebEx Hands-On Lab Operations
After connecting to the lab computer, learners see a message asking them to disconnect and
connect to the new teleconference.
• Click Yes. You do not need to hang up and dial a new number; Intercall automatically
connects you to the lab conference.
Page V-5
WebEx Hands-On Lab Operations
Page V-6
Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z
ACC — Action Code. A SIM (System Information Message).
ACE — Access Control Entry. Stores access rights for a single user or group within the Windows security model.
ACL — Access Control List. Stores a set of ACEs so that it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP — Array Control Processor. Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end; it controls data transfer between cache and the hard drives.
ACP Domain — Also Array Domain. All of the array-groups controlled by the same pair of DKA boards, or the HDDs managed by 1 ACP PAIR (also called BED).
ACP PAIR — Physical disk access control logic. Each ACP consists of 2 DKA PCBs to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory.
ADC — Accelerated Data Copy.
Address — A location of data, usually in main memory or on a disk; also, a name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
ADP — Adapter.
ADS — Active Directory Service.
AL-PA — Arbitrated Loop Physical Address.
AMS — Adaptable Modular Storage.
APAR — Authorized Program Analysis Report.
APF — Authorized Program Facility. In IBM z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
API — Application Programming Interface.
APID — Application Identification. An ID that identifies a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or request.
ARM — Automated Restart Manager.
Array Domain — Also ACP Domain. All functions, paths and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI or LU configurations.
Array Group — Also called a parity group. A group of hard disk drives (HDDs) that forms the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Array Unit — A group of hard disk drives in 1 RAID structure. Same as parity group.
ASIC — Application-specific integrated circuit.
ASSY — Assembly.
Asymmetric virtualization — See Out-of-Band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before proceeding with subsequent operations.
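The ACE and ACL entries describe a containment relationship: an ACL is simply a collection of ACEs, each granting or denying rights to one user or group. The sketch below illustrates that structure in Python; the class and field names are illustrative only, and the real Windows model is more elaborate (SIDs, inheritance, ordered deny-before-allow evaluation).

```python
from dataclasses import dataclass, field

@dataclass
class ACE:
    """Access Control Entry: rights for a single user or group."""
    principal: str        # user or group name (hypothetical)
    rights: set           # e.g. {"read", "write"}
    allow: bool = True    # grant (True) or deny (False)

@dataclass
class ACL:
    """Access Control List: the complete set of ACEs for one object."""
    entries: list = field(default_factory=list)

    def is_allowed(self, principal: str, right: str) -> bool:
        # First matching entry wins; no match means access denied.
        for ace in self.entries:
            if ace.principal == principal and right in ace.rights:
                return ace.allow
        return False

acl = ACL([ACE("editors", {"read", "write"}),
           ACE("guests", {"write"}, allow=False),
           ACE("guests", {"read"})])
print(acl.is_allowed("editors", "write"))  # True
print(acl.is_allowed("guests", "write"))   # False
```

The point of the sketch is only the containment: the object's complete access policy lives in the ACL, while each ACE carries one principal's rights.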
CLPR — Cache Logical Partition. Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cloud Fundamental — A core requirement to the deployment of cloud computing. Cloud fundamentals include:
RPC — Remote Procedure Call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and the recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.

—S—

SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users with a thin client, such as a web browser, via the Internet.
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a standard for connecting hard drives to computer systems. SATA is based on serial signaling technology, unlike IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. A Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
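RPO and RTO, defined above, are the two numbers a replication design is judged against: RPO bounds how much data may be lost, RTO bounds how long recovery may take. A small worked example, using an entirely hypothetical incident timeline:

```python
from datetime import datetime, timedelta

# Hypothetical incident timeline (illustrative timestamps only).
last_replicated  = datetime(2015, 6, 1, 11, 45)  # last consistent remote copy
disaster         = datetime(2015, 6, 1, 12, 0)   # primary site lost
service_restored = datetime(2015, 6, 1, 13, 30)  # applications running again

# Achieved RPO: writes made after the last replication cycle are lost.
achieved_rpo = disaster - last_replicated
# Achieved RTO: elapsed time from disaster to restored service.
achieved_rto = service_restored - disaster

print(achieved_rpo)  # 0:15:00 -> up to 15 minutes of data lost
print(achieved_rto)  # 1:30:00 -> 90 minutes of downtime

# The design meets its objectives when achieved values stay within targets.
target_rpo, target_rto = timedelta(minutes=30), timedelta(hours=2)
print(achieved_rpo <= target_rpo and achieved_rto <= target_rto)  # True
```

Tightening the target RPO generally means replicating more frequently (or synchronously); tightening the target RTO means faster failover, independent of how much data the copy contains.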
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — (1) Directory Name Object. (2) Working Directory.
WDS — Working Data Set.
XFI — Standard interface for connecting a 10 Gb Ethernet MAC device to an XFP interface.
XFP — "X" = 10 Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
Evaluating This Course

https://learningcenter.hds.com/Saba/Web/Main