
Student Guide for

Managing Replication Solutions v8.x

TSI2564

Courseware Version 2.0


This training course is based on Hitachi Command Suite v8.1.3

Corporate Headquarters
2825 Lafayette Street
Santa Clara, California 95050-2639 USA
www.HDS.com

Regional Contact Information
Americas: +1 408 970 1000 or [email protected]
Europe, Middle East and Africa: +44 (0) 1753 618000 or [email protected]
Asia Pacific: +852 3189 7900 or [email protected]

© Hitachi Data Systems Corporation 2015. All rights reserved. HITACHI is a trademark or registered trademark of Hitachi, Ltd. Innovate With Information is a trademark or
registered trademark of Hitachi Data Systems Corporation. All other trademarks, service marks, and company names are properties of their respective owners.

Contents
Introduction ........................................................................................................ ix
Welcome and Introductions .......................................................................................................................ix
Course Description ................................................................................................................................... x
Prerequisites ............................................................................................................................................ x
Course Objectives ....................................................................................................................................xi
Course Topics ..........................................................................................................................................xi
Learning Paths ........................................................................................................................................ xii
Resources: Product Documents ............................................................................................................... xiii
Collaborate and Share ............................................................................................................................ xiv
Social Networking — Academy’s Twitter Site .............................................................................................. xv

1. Hitachi Replication Manager Overview ......................................................... 1-1


Module Objectives ................................................................................................................................. 1-1
Customer Challenges ............................................................................................................................. 1-2
Centralized Enterprise-wide Replication Management................................................................................ 1-3
Replication Manager Overview ................................................................................................................ 1-4
Graphical User Interface ........................................................................................................................ 1-5
Centralized Monitoring ........................................................................................................................... 1-6
Storage Systems View ......................................................................................................................... 1-11
Features ............................................................................................................................................. 1-12
Positioning .......................................................................................................................................... 1-14
Architecture – Open Systems and Mainframe ......................................................................................... 1-16
Architecture – Open Systems with Application Agent .............................................................................. 1-18
Components ....................................................................................................................................... 1-19
Device Manager Agent ......................................................................................................................... 1-20
Instructor Demonstration - Hitachi Command Suite ................................................................................ 1-21
Module Summary ................................................................................................................................ 1-22
Module Review .................................................................................................................................... 1-22

2. Hitachi Replication Manager Initial Setup .................................................... 2-1


Module Objectives ................................................................................................................................. 2-1
Initial Setup .......................................................................................................................................... 2-2
Prerequisites ......................................................................................................................................... 2-3
Prerequisite Software............................................................................................................................. 2-3

Configuring the Environment .................................................................................................................. 2-4
Launching Hitachi Command Suite .......................................................................................................... 2-4
Registering Information Sources ............................................................................................................. 2-6
Refreshing Configuration from Information Sources .................................................................................. 2-8
Information Refresh in Replication Manager ............................................................................................. 2-9
Refreshing Information from Pair Management Servers .......................................................................... 2-10
Users and Permissions ......................................................................................................................... 2-12
Managing Users and User Permissions ................................................................................................... 2-12
Adding Users and Assigning Permissions................................................................................................ 2-14
Managing Security ............................................................................................................................... 2-14
Sites .................................................................................................................................................. 2-15
Sites Overview .................................................................................................................................... 2-15
Example of Two Data Centers — Use Case ............................................................................................ 2-16
Site Example ....................................................................................................................................... 2-18
Site Properties .................................................................................................................................... 2-18
Setting Up Sites .................................................................................................................................. 2-19
Resource Groups ................................................................................................................................. 2-21
Resource Groups Overview................................................................................................................... 2-21
Sites and Resource Group Relationship .................................................................................................. 2-23
Example of Two Data Centers – Use Case ............................................................................................. 2-24
Resource Group Function ..................................................................................................................... 2-26
Resource Groups ................................................................................................................................. 2-27
Resource Group Properties ................................................................................................................... 2-31
Instructor Demonstration ..................................................................................................................... 2-32
Module Summary ................................................................................................................................ 2-33
Module Review .................................................................................................................................... 2-34

3. Hitachi Replication Products Overview......................................................... 3-1


Module Objectives ................................................................................................................................. 3-1
Hitachi Replication Program Products ...................................................................................................... 3-2
Hitachi Replication Products ................................................................................................................... 3-2
Tools Used for Setting Up Replication ...................................................................................................... 3-5
Requirements for All Replication Products ................................................................................................ 3-5
Basic Operations ................................................................................................................................... 3-6
Replication Operations ........................................................................................................................... 3-6
Copy Operations ................................................................................................................................... 3-7
Replication Operations ........................................................................................................................... 3-8

Module Summary ................................................................................................................................ 3-12
Module Review .................................................................................................................................... 3-12

4. Hitachi ShadowImage Replication Operations with Replication Manager.... 4-1


Module Objectives ................................................................................................................................. 4-1
Licensing Considerations ........................................................................................................................ 4-2
ShadowImage In-System Replication Features ......................................................................................... 4-3
Key Features ......................................................................................................................................... 4-4
ShadowImage Commands ...................................................................................................................... 4-5
Paircreate ............................................................................................................................................. 4-6
Update ................................................................................................................................................. 4-9
Pairsplit .............................................................................................................................................. 4-10
Pairresync........................................................................................................................................... 4-13
Commands ......................................................................................................................................... 4-19
Launching ShadowImage Operations .................................................................................................... 4-27
Pair Configuration................................................................................................................................ 4-28
Checking Task Status........................................................................................................................... 4-37
Checking Pair Status ............................................................................................................................ 4-38
Changing Pair Status ........................................................................................................................... 4-40
Instructor Demonstration ..................................................................................................................... 4-43
Module Summary ................................................................................................................................ 4-44
Module Review .................................................................................................................................... 4-44

5. Hitachi Copy-on-Write Snapshot Operations with Replication Manager ...... 5-1


Module Objectives ................................................................................................................................. 5-1
Hitachi Replication Products ................................................................................................................... 5-2
Copy-on-Write Snapshot Overview .......................................................................................................... 5-4
Comparison .......................................................................................................................................... 5-5
Copy-on-Write Snapshot Operations........................................................................................................ 5-8
Setting Up Copy-on-Write .................................................................................................................... 5-11
Copy-on-Write Operations .................................................................................................................... 5-11
Setting Up the Data Pool ...................................................................................................................... 5-12
Creating Virtual Volumes ...................................................................................................................... 5-15
Managing Pairs ................................................................................................................................... 5-18
Instructor Demonstration ..................................................................................................................... 5-19
Module Summary ................................................................................................................................ 5-20
Module Review .................................................................................................................................... 5-20


6. Hitachi Thin Image Operations with Replication Manager ........................... 6-1


Module Objectives ................................................................................................................................. 6-1
What is Hitachi Thin Image?................................................................................................................... 6-2
Operations ............................................................................................................................................ 6-3
Thin Image Configuration ...................................................................................................................... 6-5
Comparison — Thin Image and ShadowImage ......................................................................................... 6-6
Comparison — Copy-on-Write and Thin Image ......................................................................................... 6-8
Specifications ........................................................................................................................................ 6-9
Thin Image Operations ........................................................................................................................ 6-12
Module Summary ................................................................................................................................ 6-13
Module Review .................................................................................................................................... 6-14

7. Hitachi TrueCopy Operations with Replication Manager .............................. 7-1


Module Objectives ................................................................................................................................. 7-1
Hitachi TrueCopy Benefits ...................................................................................................................... 7-2
Hitachi TrueCopy Synchronous Benefits ................................................................................................... 7-4
Remote Replication Solutions ................................................................................................................. 7-5
Internal Operations of Synchronous Replication ....................................................................................... 7-6
Remote Replication Configurations .......................................................................................................... 7-8
TrueCopy and ShadowImage Together.................................................................................................... 7-9
TrueCopy Specifications ....................................................................................................................... 7-11
TrueCopy Configuration Checklist.......................................................................................................... 7-11
Fibre Channel Links ............................................................................................................................. 7-12
Pair Operations ................................................................................................................................... 7-14
Differential Bitmap Function ................................................................................................................. 7-15
Advanced Pair Operations and Recovery Scenarios ................................................................................. 7-17
Hitachi Open Remote Copy (HORC) Takeover Support ............................................................................ 7-17
Pair Operations ................................................................................................................................... 7-23
TrueCopy Operations ........................................................................................................................... 7-25
Setting Up Remote Paths ..................................................................................................................... 7-26
Managing Pairs ................................................................................................................................... 7-29
Instructor Demonstration ..................................................................................................................... 7-32
Module Summary ................................................................................................................................ 7-33
Module Review .................................................................................................................................... 7-34

8. Hitachi Universal Replicator Operations with Replication Manager ............. 8-1


Module Objectives ................................................................................................................................. 8-1
Hitachi Universal Replicator Overview ...................................................................................................... 8-2

Hitachi Universal Replicator Hardware ..................................................................................................... 8-4
Hitachi Universal Replicator Components ................................................................................................. 8-5
Hitachi Universal Replicator Specifications................................................................................................ 8-6
Hitachi Universal Replicator Usage .......................................................................................................... 8-7
Base Journal (Initial Copy) ..................................................................................................................... 8-8
Update Journal (Update Copy) ................................................................................................................ 8-9
Journal Restore ................................................................................................................................... 8-10
Hitachi Universal Replicator Configurations ............................................................................................ 8-10
Three Data Center Configuration .......................................................................................................... 8-11
Hitachi Universal Replicator Operations ................................................................................................. 8-13
Setting Up Remote Paths ..................................................................................................................... 8-14
Setting Up Journal Groups.................................................................................................................... 8-16
Managing Pairs ................................................................................................................................... 8-22
Demonstration .................................................................................................................................... 8-24
Module Summary ................................................................................................................................ 8-25
Module Review .................................................................................................................................... 8-26

9. Hitachi Replication Manager Monitoring Operations .................................... 9-1


Module Objectives ................................................................................................................................. 9-1
Monitoring Copy Operations ................................................................................................................... 9-2
Setting Up Alerts ................................................................................................................................... 9-6
Create Alert Setting Wizard .................................................................................................................... 9-8
Alert Status......................................................................................................................................... 9-11
Instructor Demonstration ..................................................................................................................... 9-13
Module Summary ................................................................................................................................ 9-14
Module Review .................................................................................................................................... 9-14

10. Application Replicas .................................................................................. 10-1


Module Objectives ............................................................................................................................... 10-1
Application Replicas ............................................................................................................................. 10-2
Backup and Restore Overview .............................................................................................................. 10-2
Application Backup and Restore Features .............................................................................................. 10-3
Components ....................................................................................................................................... 10-5
Application Agent ................................................................................................................................ 10-5
System Configuration for Remote Copy ................................................................................................. 10-6
Backup and Restore Operations ............................................................................................................ 10-6
Discovering Application Agent............................................................................................................... 10-7
Creating Replicas................................................................................................................................. 10-8

Create Replica Wizard .......................................................................................................................... 10-9
Restoring Replicas ............................................................................................................................... 10-9
Restoring Replica ...............................................................................................................................10-10
Mounting or Unmounting Replica .........................................................................................................10-10
Module Summary ...............................................................................................................................10-11
Module Review ...................................................................................................................................10-11
Your Next Steps .................................................................................................................................10-12

Communicating in a Virtual Classroom: Tools and Features............................. V-1


Reminders: Intercall Call-Back Teleconference ......................................................................................... V-2
Synchronizing your Audio to the WebEx Session....................................................................................... V-2
Feedback Features – Try Them............................................................................................................... V-3
Intercall (WebEx) Technical Support ....................................................................................................... V-4
WebEx Hands-On Lab Operations ........................................................................................................... V-5

Glossary ............................................................................................................ G-1

Evaluate This Course ........................................................................................ E-1

viii
Introduction
Welcome and Introductions

 Participant introductions
• Name
• Position
• Experience
• Your expectations

ix
Introduction
Course Description

Course Description

Prerequisites

 Prerequisite Courses
• TSI2565 – Operating and Managing Hitachi Storage with Hitachi Command
Suite v8.x

 Other Prerequisites
• Experience working with servers (Windows or UNIX)
• Understanding of basic storage/SAN concepts

x
Introduction
Course Objectives

Course Objectives

 Upon completion of the course, you should be able to:


• Provide an overview of Hitachi Replication Manager
• Discuss features and functions of Hitachi replication products
• Describe the function and features offered by Hitachi In-System Heterogeneous
Replication bundle
• Perform in-system replication operations through Replication Manager software
• Describe the function and features offered by Hitachi Remote Replication software
(Hitachi TrueCopy and Hitachi Universal Replicator)
• Perform remote replication operations through Replication Manager software
• Describe the purpose of the Replication Manager Application Agent
• Describe the requirements for using the Application Agent

Course Topics

Modules

1. Hitachi Replication Manager Overview
2. Hitachi Replication Manager Initial Setup
3. Hitachi Replication Products Overview
4. Hitachi ShadowImage Replication Operations with Replication Manager
5. Hitachi Copy-On-Write Snapshot Operations with Replication Manager
6. Hitachi Thin Image Operations with Replication Manager
7. Hitachi TrueCopy Operations with Replication Manager
8. Hitachi Universal Replicator Operations with Replication Manager
9. Hitachi Replication Manager Monitoring Operations
10. Application Replicas

Lab Activities

1. Installing and Configuring Hitachi Replication Manager (Optional)
2. Initial Setup
3. Hitachi ShadowImage Replication Operations
4. Hitachi Copy-On-Write Snapshot Operations
5. Hitachi Thin Image Copy Operations
6. Hitachi TrueCopy Operations
7. Hitachi Universal Replicator Operations
8. Monitor Replication Operations

xi
Introduction
Learning Paths

Learning Paths

 Are a path to professional certification

 Enable career advancement

 Available on:
• HDS.com (for customers)
• Partner Xchange (for partners)
• theLoop (for employees)

Customers

Customer Learning Path (North America, Latin America, and APAC):


http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-learning-paths.pdf

Customer Learning Path (EMEA): http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-training.pdf

Partners

https://portal.hds.com/index.php?option=com_hdspartner&task=displayWebPage&menuName=PX_PT_PARTNER_EDUCATION&WT.ac=px_rm_ptedu

Employees

http://loop.hds.com/community/hds_academy

Please contact your local training administrator if you have any questions regarding Learning
Paths or visit your applicable website.

xii
Introduction
Resources: Product Documents

Resources: Product Documents

 Product documentation that provides detailed product information and future updates is now posted on hds.com, in addition to the Support Portal (set the filter to Technical Resources)

 There are 2 paths to these documents:
• hds.com: Home > Corporate > Resource Library

• Google Search

Resource Library

http://www.hds.com/corporate/resources/?WT.ac=us_inside_rm_reslib

Google Search

Two ways to do a Google search for Hitachi product documentation:

• Document name

• Any key words about the product you are looking for

o If the key words are covered in the product documents, Google will find the resource

o For example, System Mode Options for VSP G1000 is covered in the user guide, so searching Google for that phrase will return the document

xiii
Introduction
Collaborate and Share

Collaborate and Share

Hitachi Data Systems Community
 Learn best practices to optimize your IT environment
 Share your expertise with colleagues facing real challenges
 Connect and collaborate with experts from peer companies and HDS

Academy in theLoop
 Learn what’s new in the Academy
 Ask the Academy a question
 Discover and share expertise
 Shorten your time to mastery
 Give your feedback
 Participate in forums

For Customers, Partners, Employees – Hitachi Data Systems Community:

https://community.hds.com/welcome

For Employees – theLoop:

http://loop.hds.com/community/hds_academy?view=overview

xiv
Introduction
Social Networking — Academy’s Twitter Site

Social Networking — Academy’s Twitter Site

 Twitter site
Site URL: http://www.twitter.com/HDSAcademy

Hitachi Data Systems Academy link to Twitter:

http://www.twitter.com/HDSAcademy

xv
Introduction
Social Networking — Academy’s Twitter Site

xvi
1. Hitachi Replication Manager Overview
Module Objectives

 Upon completion of this module, you should be able to:


• List the typical challenges related to replication
• Describe how Hitachi Replication Manager addresses the challenges
associated with replication
• List the key replication management features offered by Replication Manager
• Describe the architecture of Replication Manager for different
environments

Page 1-1
Hitachi Replication Manager Overview
Customer Challenges

Customer Challenges

 Some of the challenges that customers typically face when


managing replication are:
• Scalability and flexibility of copy features and configuration
• Knowledge and experience required for configuration
and operation
• Data consistency requirements of email servers and RDBMSs
• Comprehensive visibility into replication pairs and their statuses
• Ever-growing amount of data to protect

RDBMS — Relational Database Management System

Scalability and flexibility of copy features and configuration


• Complexity without a quality management tool

Knowledge and experience required for configuration and operation

• Heavy reliance on the individual

• Large organizations can dedicate resources for replication management, but smaller
organizations depend on Information Technology (IT) resources, which requires training
those individuals and adds more cost

Data consistency requirements of RDBMSs

• VSS and VDI, cluster, and Volume Manager

Comprehensive visibility into replication pairs and statuses

• How many pairs, what location and what technology?

• What is their status? (Examples: PSUS, SMPL, PSUE, HOLD)

Ever-growing storage capacity to protect

• Each administrator must manage an ever-increasing number of terabytes

Page 1-2
Hitachi Replication Manager Overview
Centralized Enterprise-wide Replication Management

Centralized Enterprise-wide Replication Management

[Diagram: Hitachi Replication Manager provides configuration, scripting, task/scheduler management, and reporting across Copy-On-Write, Thin Image, ShadowImage, TrueCopy, Universal Replicator, and Business Continuity Manager, including primary and secondary provisioning, driving the storage systems through CCI (HORCM). Caption: Cross-product, cross-platform, GUI-based replication management]

Replication Manager gives an enterprise-wide view of replication configuration, and allows


configuring and managing from a single location. Its primary focus is on integration and
usability.

For customers who leverage in-system or distance replication capabilities of their storage arrays,
Hitachi Replication Manager is the software tool that configures, monitors, and manages Hitachi
storage array-based replication products for both open systems and mainframe environments in
a way that simplifies and optimizes the:

• Configuration
• Operations
• Task management and automation
• Monitoring of the critical storage components of the replication infrastructure
Copy-on-Write = Hitachi Copy-on-Write Snapshot
Thin Image = Hitachi Thin Image
ShadowImage = Hitachi ShadowImage Heterogeneous Replication
TrueCopy = Hitachi TrueCopy Heterogeneous Remote Replication bundle
Universal Replicator = Hitachi Universal Replicator
Business Continuity Manager = Hitachi Business Continuity Manager
CCI = Command Control Interface
HORCM = Hitachi Open Remote Copy Manager (name of CCI executable)

Page 1-3
Hitachi Replication Manager Overview
Replication Manager Overview

Replication Manager Overview

 Replication Manager
• Configures, monitors, and manages Hitachi replication products for open systems
and mainframe environments
• Replication configuration management
 Enables users to set up all Hitachi replication products without requiring other
tools, for both local and remote storage systems
• Application aware backups
 MS SQL Server and Exchange
• Multiple user design and role-based user access control
 Achieves stringent access control for multiple users
• Task management
 Allows scheduling and automation of the configuration of replicated data volume
pairs

Hitachi Replication Manager configures, monitors and manages Hitachi replication products on
both local and remote storage systems. For both open systems and mainframe environments,
Replication Manager simplifies and optimizes the configuration and monitoring, operations, task
management and automation for critical storage components of the replication infrastructure.
Users benefit from a uniquely integrated tool that allows them to better control recovery point
objectives (RPOs) and recovery time objectives (RTOs).

Page 1-4
Hitachi Replication Manager Overview
Graphical User Interface

Graphical User Interface

 GUI is consistent with other HCS products


[Screenshot: Replication Manager GUI, with callouts for the global tasks bar area, Explorer menu, dashboard menu, object tree, and application area]

Replication Manager provides a simple, easy-to-use, centralized management console for


monitoring and visualizing volume replication configurations and status information.

Replication Manager GUI is consistent with other Hitachi Command Suite products:

• Global tasks bar area contains menus and action buttons for Replication Manager
functions, and also contains information about the logged-in user.

• Explorer menu is the Replication Manager operations menu. This menu comprises
multiple drawers with options. When a menu option is chosen, the appropriate
information is displayed in the navigation area and the application area.

• Dashboard menu displays a list of Hitachi Command Suite products on the same
management server. You can launch products using the GO link.

• Object tree is a tree view displayed in the navigation area. Expand the tree for object
selection.

• Application area displays information for the item selected in the Explorer menu or
object tree.

Page 1-5
Hitachi Replication Manager Overview
Centralized Monitoring

Centralized Monitoring

 Four views allow users to understand the replication environment


depending on the perspective:
• Hosts view
• Storage Systems view
• Pair Configurations view
• Applications view

Replication Manager provides the following four functional views that allow you to view pair
configurations and the status of the replication environment from different perspectives:

• Hosts view: This view lists open hosts and mainframe hosts and allows you to confirm
pair status summaries for each host.

• Storage Systems view: This view lists open and mainframe storage systems and
allows you to confirm pair status summarized for each. A storage system serving both
mainframe and open system pairs is recognized as two different resources to
differentiate open copy pairs and mainframe copy pairs.

• Pair Configurations view: This view lists open and mainframe hosts managing copy
pairs with CCI or BCM and allows you to confirm pair status summarized for each host.
This view also provides a tree structure along with the pair management structure.

• Applications view: This view lists the application and data protection status. This view
also provides a tree structure showing the servers and their associated objects (storage
groups, information stores, and mount points).

Page 1-6
Hitachi Replication Manager Overview
Centralized Monitoring

 Hosts view — Perspective of hosts using the pairs

 Storage Systems view — Perspective of storage systems containing the pairs

Storage Systems view — This view lists open and mainframe storage systems and allows you
to confirm pair status summarized for each. A storage system serving both mainframe and open
system pairs is recognized as two different resources to differentiate open copy pairs and
mainframe copy pairs.

Page 1-7
Hitachi Replication Manager Overview
Centralized Monitoring

 Pair Configurations view — Perspective of hosts managing the pairs

Pair Configurations view — This view lists open and mainframe hosts managing copy pairs
with CCI or BCM and allows you to confirm pair status summarized for each host. This view also
provides a tree structure along with the pair management structure.

 Applications view — Perspective of applications (MS-Exchange/MS-SQL Server) being managed

Applications view — This view lists the application and data protection status. This view also
provides a tree structure showing the servers and their associated objects (storage groups,
information stores, and mount points).

Page 1-8
Hitachi Replication Manager Overview
Centralized Monitoring

 Provides a quick alert mechanism for potential problems using SNMP or email:


• Unexpected changes in copy status
• Exceeded user-defined thresholds
 Resource utilization (journals/Thin Image pool)
 Recovery point objective (RPO) of target copy group

Replication Manager can send an alert when a monitored target, such as a copy pair or buffer,
satisfies a preset condition. The conditions that can be set include:

• Thresholds for copy pair statuses

• Performance information

• Copy license usage

You can specify a maximum of 1,000 conditions.

Alert notification is useful for enabling a quick response to a hardware failure or for determining
the cause of a degradation in transfer performance. Alert notifications are also useful for
preventing errors due to buffer overflow and insufficient copy licenses, thereby facilitating the
continuity of normal operation. Because you can receive alerts by email or SNMP traps, you can
also monitor the replication environment while you are logged out of Replication Manager.

Page 1-9
Hitachi Replication Manager Overview
Centralized Monitoring

 Exporting Replication Manager management information


• Determine cause of error

• Analyze performance information

 Write delay time (C/T delta) on a copy group basis

 Journal volume usage on a copy group basis

 Journal volume usage on a journal group basis (in open systems)

 History of received alerts

 Event logs

 Pool volume usage on a pool basis (in open systems)

You can export Replication Manager management information to a file in CSV or HTML format.
Using the exported file, you can determine the cause of an error, establish corrective measures,
and analyze performance information. If necessary, you can edit the file or open it with another
application program. You can export a maximum of 20,000 data items at a time.

The following performance information items can be exported:

• Write delay time (C/T delta) on a copy group basis

• Journal volume usage on a copy group basis

• Journal volume usage on a journal group basis (in open systems)

• Pool volume usage on a pool basis (in open systems)

• The history of received alerts

• Event logs

When you export management information, you can specify a time period to limit the amount
of information that will be exported. However, you can export only information whose data
retention period has not yet expired. The retention period can be managed by a user with the
Admin (Replication Manager management) permission.

Page 1-10
Hitachi Replication Manager Overview
Storage Systems View

Storage Systems View

 Additional information available on the tabs

The storage systems view provides information about LUNs (paired and unpaired), journal
groups, copy licenses, command devices and pools.

LUNs (paired) tab shows the list of LDEVs that are already configured as copy pairs.

• Clicking on a specific LUN provides detailed information about the copy pair, copy type,
pair status, and much more.

• A filter dialog is available for the LUNs tab, which makes it easier to find target volumes.
You can filter LUNs by using attributes such as port, HSD, logical group, capacity, label
and copy type.

The Cmd Devs tab displays the list of command devices configured on the storage systems.

The Pools tab displays detailed information for Copy-on-Write Snapshot, Thin Image, and Dynamic Provisioning pools.

The JNLGs tab displays a list of journal groups that are configured on the storage system.

The Remote Path tab displays the remote paths configured for TrueCopy and Universal
Replicator software.

The Copy Licenses tab displays the replication-related licenses that are installed on the
storage systems.

You can also manage (create, edit, delete) resources using the above tabs. Copy licenses for
program products need to be installed through the element manager for the storage system.

Page 1-11
Hitachi Replication Manager Overview
Features

Features

 “Single pane of glass”


• Integrated console for multi-site replication pairs
• Consolidated monitoring for Copy Pair status and remote copy metrics

 Visual representation of replication structure


• Copy groups, sites, all volume pairs

Visual Representation of Replication Structure

Copy groups: A group of copy pairs created for management purposes, as required by a
particular task or job. By specifying a copy group, you can perform operations such as changing
the pair status of multiple copy pairs at once. Using the My Copy Groups feature, a user can register only the copy groups that are most important to monitor, see how those copy groups are related, and check copy pair statuses in a single window. My Copy Groups is also the default screen after you log in to the Replication Manager interface.
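
For open systems pairs, a Replication Manager copy group generally corresponds to a device group defined in the CCI (HORCM) configuration on the pair management server, which is why a single group name can drive an operation on many pairs at once. The following is only a minimal sketch of such group-level CCI operations; the group name SI_GRP and instance number 0 are hypothetical placeholders, not values used in this course:

# Show the status of every pair in the group (ShadowImage mode, CCI instance 0)
pairdisplay -g SI_GRP -IM0 -fcx

# Split all pairs in the group in a single operation
pairsplit -g SI_GRP -IM0

# Resynchronize all pairs in the group
pairresync -g SI_GRP -IM0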

Sites: With Replication Manager, you can define logical sites in the GUI just as you would define
actual physical sites (data centers). Setting up separate sites lets you manage resources more efficiently, because it is easier to locate a required resource among the many resources displayed in the GUI.

Page 1-12
Hitachi Replication Manager Overview
Features

 Pair volume lifecycle management


• Simplified replication configuration from setup to deletion
 Setup > Definition > Creation (Initial copy) > Operation > Monitoring > Alerting > Deletion

 Storage system configuration functions


• Set up functionality required for copy pair management
 Setting command devices, DMLU, journal groups and pools
 Setting up remote paths for remote replication

 Copy pair creation or deletion


• Pair configuration wizard
 Intuitive pair definition screen with topological view
• Task scheduler
 Scheduler functionality that allows users to execute the copy operations at off-peak time

Page 1-13
Hitachi Replication Manager Overview
Positioning

Positioning

[Diagram: Positioning. Replication Manager provides replication monitoring and management on top of Device Manager (open volume management), Storage Navigator (configuration), RAID Manager, and Business Continuity Manager (optional, for mainframe volumes). The underlying replication technologies (ShadowImage, Copy-on-Write, Thin Image, TrueCopy, TrueCopy Extended Distance, and Universal Replicator) run on modular and enterprise storage for both in-system and remote replication.]

Replication Manager provides monitoring for both enterprise storage systems (open and mainframe volumes) and modular storage systems (open volumes).

Replication Manager requires and depends on Hitachi Device Manager, and it uses RAID Manager (CCI) and the Device Manager agent to monitor open volumes:

• Device Manager provides volume configuration management

• RAID Manager (CCI) is used by Replication Manager to monitor pair status

For monitoring mainframe volumes, Replication Manager can work with or without Hitachi
Business Continuity Manager (BCM) software or Mainframe Agent.

• Replication Manager supports monitoring of IBM environments (z/OS, z/VM, z/VSE and
z/Linux) and non-IBM environments using only the Device Manager (without Business
Continuity Manager or Mainframe Agent installed). Replication Manager retrieves the
status of TCS/TCA/SI, and UR copy pairs directly from storage arrays, without
depending on mainframe host types. The minimum interval of automatic refresh for this
configuration is 30 minutes.

Diagram legend:

• TC stands for Hitachi TrueCopy

• TCE stands for Hitachi TrueCopy Extended Distance

• SI stands for Hitachi ShadowImage Heterogeneous Replication

Page 1-14
Hitachi Replication Manager Overview
Positioning

• UR stands for Hitachi Universal Replicator

• CoW stands for Hitachi Copy-on-Write Snapshot

• HTI stands for Hitachi Thin Image

• RAID Manager stands for Hitachi Command Control Interface (CCI)

Page 1-15
Hitachi Replication Manager Overview
Architecture – Open Systems and Mainframe

Architecture – Open Systems and Mainframe

 Standard configuration of a site


[Diagram: Standard configuration of a site. A management client (browser) connects over the IP network to the management server (HRpM server, HDvM server, HBase). A pair management server (CCI server) runs the host agent (HRpM agent plug-in, HDvM agent plug-in, agent base, common manager) and RAID Manager (CCI), and reaches command devices on the modular storage (SNM2) and enterprise storage (SVP) over the FC-SAN. Production hosts can run with or without the agent and CCI, and a mainframe host (z/OS) runs BCM, optionally fronted by an IBM HTTP Server.]

Standard system configuration of a site is comprised of:

• Management Server: Replication Manager is installed with Device Manager. HBase is installed automatically by the Device Manager installation. It is highly recommended to use the same major and minor version numbers for the Device Manager server and the Replication Manager server.

• Pair Management Server (Open Systems)

o Host Agent: A single host agent serves both Device Manager and Replication Manager; one agent installation on the server works for both products.

o RAID Manager (CCI): Replication Manager requires RAID Manager to manage replication pair volumes. The servers on which RAID Manager is installed must have a host agent so that Replication Manager can recognize and manage the pair volume instances.

• Pair Management Server (Mainframes)

• Business Continuity Manager: This software product works on the mainframe and
manages replication pair volumes assigned for the mainframe computers. The
Replication Manager can monitor and manage the mainframe replication volumes by
communicating with the Business Continuity Manager.

Page 1-16
Hitachi Replication Manager Overview
Architecture – Open Systems and Mainframe

• Host (Production Server): A host runs application programs. The installation of


Device Manager agent is optional. Replication Manager can acquire the host information
(host name, IP address, and mount point) if the agent is installed on it.

o IBM HTTP Server is required on the mainframe host when using either of the following:

 IPv6 connection between HRpM and BCM.

 HTTPS (secure) connection between HRpM and BCM.

o The BCM program itself does not have these capabilities, so IBM HTTP Server is used to perform these functions; it works as a proxy server between HRpM and BCM.

Diagram legend:

• HDvM stands for Hitachi Device Manager

• HRpM stands for Hitachi Replication Manager

• BCM stands for Business Continuity Manager

• HBase stands for Hitachi Command Suite Common Component Base

Page 1-17
Hitachi Replication Manager Overview
Architecture – Open Systems with Application Agent

Architecture – Open Systems with Application Agent
 Standard configuration of a site
[Diagram: Standard configuration of a site with Application Agent. A management client (browser) connects over the IP network to the management server (HRpM server, HDvM server, HBase). The application server (MS-Exchange / MS-SQL) and the backup/import server each run the host agent, RAID Manager (CCI), and the Application Agent, and access command devices on the modular storage (SNM2) and enterprise storage (SVP) over the FC-SAN.]

Note: Depending on the configuration, a backup server may not be required for SQL Server.

* HBASE – Represents the common components for HCS.

Page 1-18
Hitachi Replication Manager Overview
Components

Components

 Replication Manager is composed of:


• Management server
 Device Manager*
 Replication Manager
• Management client
 Web client
• Pair management server (open systems)
 Device Manager agent
 RAID Manager (CCI)
• Pair management server (mainframe)
 Business Continuity Manager or Mainframe Agent
• Host (application server)
• Application agent
* One HRpM server can manage and monitor volumes from multiple HDvM servers

Management Server: A management server provides management information in response to requests


from management clients. Device Manager is prerequisite software for Replication Manager. Replication
Manager and Device Manager are installed on the same management server. If multiple sites are used, a
management server is required for each site. Also, the management server at the remote site can be
used to manage pairs when the local site management server fails.

Management Client: A management client runs on a web browser and provides access to the instance
of Replication Manager.

Pair Management Server (open systems/mainframes): A pair management server collects


management information, including copy pair statuses and performance information for remote copying.
If multiple sites are used, at least one pair management server is required for each site. More than one pair management server can be set up at each site.

• A pair management server can also be a host (application server).

o CCI and a Device Manager agent are installed on each pair management server for open
systems

o Business Continuity Manager or Mainframe Agent is installed on each pair management


server for mainframes

Note: When determining whether to set up pair management servers to be independent of hosts,
consider security and the workloads on the hosts.

Host (Application Server): Application programs are installed on a host. A host can be used as a pair
management server, if required. The Device Manager agent is optional if the server is used as a host
(and not pair management server).

Page 1-19
Hitachi Replication Manager Overview
Device Manager Agent

Device Manager Agent

 Device Manager agent is a program that runs on a host to collect host and
storage system information, and reports that data to the Device Manager server.
It collects:
• Host machine information, such as host names, IP addresses, Host bus
adapter (HBA) worldwide name (WWN), and iSCSI name
• Information about LDEVs allocated to the host, such as LDEV number,
storage system, logical unit number (LUN), and LDEV type
• Information about file systems allocated to the host, such as file system types,
mount points, and usage
• Copy pair information, such as pair types and statuses
 Replication Manager management server uses this information for displaying and
managing the pair information

LDEV – logical device

 Device Manager agent is the common agent for Device Manager and
Replication Manager

 Download the agent installer from the Device Manager web client

 Install the agent using an operating system account with


administrator or root permissions

 To operate the CCI instances running on the Device Manager agent,
the agent service account must be changed from LocalSystem to an
operating system user with administrator permissions (see the example
sketch below)
Note: Refer to the HCS Installation and Configuration Guide.

CCI – Command Control Interface
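The following is a minimal sketch of that service account change on a Windows pair management
server using the standard sc.exe tool. The service name "HBsA Service" and the account name shown
are assumptions for illustration only; confirm the actual Device Manager agent service name in your
environment and use an operating system account that has administrator permissions.

  sc stop "HBsA Service"
  sc config "HBsA Service" obj= ".\hdvmagent" password= "********"
  sc start "HBsA Service"

After changing the account, confirm that the agent still starts and can run CCI operations (for
example, by refreshing the host from the Device Manager web client).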

Page 1-20
Hitachi Replication Manager Overview
Instructor Demonstration - Hitachi Command Suite

Instructor Demonstration - Hitachi Command Suite


 Hitachi Command Suite

– Installation

– Register Storage in HDvM

Instructor Demonstration
- Hitachi Command Suite

Page 1-21
Hitachi Replication Manager Overview
Module Summary

Module Summary

 In this module, you have learned how to:


• List the typical challenges related to replication
• Describe how Hitachi Replication Manager addresses the challenges
associated with replication
• List the key replication management features offered by replication
management
• Describe the architecture of Replication Manager for different
environments

Module Review

1. What are the key features of Hitachi Replication Manager?


2. List the components of Hitachi Replication Manager configuration.

3. What role does Device Manager agent play in Replication Manager


operations?

Page 1-22
2. Hitachi Replication Manager Initial Setup
Module Objectives

 Upon completion of this module, you should be able to:


• List the prerequisites for Hitachi Replication Manager
• Perform an initial setup
• Describe the purpose of using sites
• Create a site and use the site function
• Describe the purpose of using resource groups
• Create and use resource groups

Page 2-1
Hitachi Replication Manager Initial Setup
Initial Setup

Initial Setup

 Prerequisites validation

 Configure environment

 Set up users and permissions

 Create resource groups

 Organize resources – sites

Page 2-2
Hitachi Replication Manager Initial Setup
Prerequisites

Prerequisites

Prerequisite Software

 Before starting Hitachi Replication Manager operations, confirm that:


• The pair management servers are set up with:
 Device Manager agent
 Command Control Interface (CCI)
 Command device
• These resources are added to Device Manager:
 Storage systems
 Hosts (Device Manager agent optional)
 Pair management servers (Device Manager agent required)
• License keys for replication products, Device Manager and Replication Manager are installed
• Microcode versions are at recommended levels, as required for the program products

Configure Hitachi Device Manager: After installing Device Manager, add to Device Manager
the storage systems, hosts, and pair management servers to be managed in Replication
Manager.

Note: HDvM supports agent-less discovery of hosts using the host data collector. The agent-less
discovery is used for reporting host information and does not support replication operations.

For performing replication operations using Replication Manager, a pair management server
must be set up with HDvM agent, CCI and Command Device.

Page 2-3
Hitachi Replication Manager Initial Setup
Configuring the Environment

Configuring the Environment

Launching Hitachi Command Suite

http://<HCS server IP address>:22015/ReplicationManager/


or
http://<HCS server hostname>:22015/ReplicationManager/

In the Web browser address bar, enter the URL for the management server where Replication
Manager is installed. The user login window appears. When you log in to Replication Manager
for the first time, you must use the built-in default user account and then specify Replication
Manager user settings. The user ID and password of the built-in default user account are as
follows:

• User ID: system

• Password: manager (default)

If Replication Manager user settings have already been specified, you can use the user ID and
password of a registered user to log in. If you enabled authentication using an external
authentication server, use the password registered in that server.

Page 2-4
Hitachi Replication Manager Initial Setup
Launching Hitachi Command Suite

 Launch from Hitachi Command Suite main window

Hitachi Replication Manager can also be launched from the HCS main window Tools menu
option.

Page 2-5
Hitachi Replication Manager Initial Setup
Registering Information Sources

Registering Information Sources

 Information sources provide environment


configuration to Replication Manager

 Possible information sources:


• Device Manager server
• Application agent (MS-SQL and MS-
Exchange)
• BC Manager and Mainframe Agent

 The Device Manager server on which


Replication Manager is installed is
registered as information
source automatically

Before you can use Replication Manager to manage resources, you must register an information
source. In open systems, this information source is the Device Manager server. In mainframe
systems, this information source is either Business Continuity Manager or Mainframe Agent.
Once the information sources are registered, you can view host information, information about
the connected storage systems, and copy pair configuration information as Replication Manager
resources. You can register a maximum of 100 information sources.

Page 2-6
Hitachi Replication Manager Initial Setup
Registering Information Sources

 Adding a new information source

The local Device Manager server automatically becomes an information source. If you would like to
add more Device Manager servers, ensure that you have the following information for each server
(a sample properties snippet follows this list):

• IP address or host name

• Protocol to be used for communication with Replication Manager (HTTP or HTTPS)

• Port number (the server.http.port value in the server.properties file for the Device
Manager server)

• User ID and password where you can log in to the Device Manager server
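As a small illustration of where the port number comes from, the relevant entry in the Device
Manager server.properties file typically looks like the following. The path and the value shown are
examples only (2001 is a common default for HTTP); check the file on your own management server.

  # <HCS-installation-folder>\HiCommandServer\config\server.properties  (example path)
  server.http.port=2001

The value of server.http.port (or the corresponding HTTPS port, if you selected HTTPS as the
protocol) is what you enter as the port number when adding the information source.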

Page 2-7
Hitachi Replication Manager Initial Setup
Refreshing Configuration from Information Sources

Refreshing Configuration from Information Sources

 Refresh configuration to collect the latest information from the


information sources into Replication Manager

Replication Manager repository gets synchronized with the local Device Manager server
automatically. Any addition of a new Information Source should be followed by a Refresh
Configuration.

Note: From HRpM, configuration information managed by local instances of Device Manager is
automatically applied to Replication Manager. It is no longer necessary to refresh the
configuration for local instances of Device Manager.

Page 2-8
Hitachi Replication Manager Initial Setup
Information Refresh in Replication Manager

Information Refresh in Replication Manager

 You can configure Replication Manager to refresh information:


• Manually at any time in a sub-window
• Automatically

 Set up automatic refresh of configuration information using 2 types of


settings:
• Automatic updating in conjunction with storage systems refresh operations
• Periodic updating based on a specified refresh interval

 Information may be updated very frequently if both settings are enabled
• If you enable the periodic update setting, we recommend disabling the
automatic update setting

The following information is automatically refreshed every five minutes, regardless of the
refresh settings for configuration information:

• Journal group status

• Journal volume usage on a journal group basis

• Pool status

• Pool volume usage on a pool basis

Page 2-9
Hitachi Replication Manager Initial Setup
Refreshing Information from Pair Management Servers

Refreshing Information from Pair Management Servers


 Refresh setting globally for pair management and Device Manager
server

Refresh Interval Settings for Agent

Specify the copy pair status refresh interval for the pair management server that belongs to the
information source. If you change the pair status refresh interval settings in this item, the new
settings replace the settings made for each pair management server in the Edit Interval of
Refresh Pair Status - pair-management-server-name dialog box.

Refresh Interval Settings for Device Manager

Specify the copy pair status refresh interval by refreshing Device Manager when monitoring
copy pairs that are not managed by the pair management server.

Page 2-10
Hitachi Replication Manager Initial Setup
Refreshing Information from Pair Management Servers

 Refresh settings for individual pair management server

The information-source-name (Device Manager) sub-window lets you view the pair status
refresh interval for pair management servers managed by the Device Manager server.

Page 2-11
Hitachi Replication Manager Initial Setup
Users and Permissions

Users and Permissions

Managing Users and User Permissions

 Hitachi Replication Manager implements access control in two ways:


• User permissions restrict the operations that users can perform
• Resource groups restrict the range of resources that specific users can
access

 Permissions associated with each management role


Management Role / Permission / Description

• User Management / Admin*: Permits the user to log in, use all Command Suite products, and
set up other users.

• Replication Manager Management / Admin: Permits the user to set up Replication Manager
resources and the accessible ranges (resource groups) for all users. This role also enables the
user to perform all administrative tasks within the resource groups except specifying user
settings.

• Replication Manager Management / Modify: Enables the user to manage the resources in
resource groups set up by users who have the Admin permission of the Replication Manager
Management role.

• Replication Manager Management / View: Enables the user to view the resources in resource
groups set up by users who have the Admin permission of the Replication Manager
Management role.

* By default, users who have the Admin permission of the User Management role cannot
perform any Replication Manager operations other than user management. To perform these
operations, such users must be granted the Replication Manager management permissions.

Page 2-12
Hitachi Replication Manager Initial Setup
Managing Users and User Permissions

 All users can set up personal profiles and Replication Manager licenses
regardless of their permissions
 The built-in user ID System lets you manage all users in Hitachi Command Suite
 You cannot change or delete this user ID or its permissions

Peer account is internally used by an agent.

Page 2-13
Hitachi Replication Manager Initial Setup
Adding Users and Assigning Permissions

Adding Users and Assigning Permissions

1. From the Explorer menu, choose Administration and then Users and Permissions.
2. Expand the object tree, and then select Users.
3. Click Add User. The Add User dialog box appears.
4. Enter the user details and then click OK.

Managing Security

 Replication Manager provides the following security functions:


• Sets password policy to prevent users from specifying easy-to-guess passwords
• Enables automatic locking of user accounts if successive login attempts fail
• Displays a warning banner in the user login window

Page 2-14
Hitachi Replication Manager Initial Setup
Sites

Sites

Sites Overview

 Sites provide grouping of resources for easier
management in the Replication Manager GUI.
• In a complex replication environment, storage systems might be located at
many sites. You can group resources to create logical sites whenever
necessary
• Grouping resources based on the actual sites simplifies resource
management because you can then use a GUI for management
• It provides the same functionality as the existing resources drawer with
logical structure
• Users can define sites and register associated resources

 Sites are managed by users with the Admin (Replication Manager


Management) permission

 Sites consist of hosts, storage systems, applications, and copy pair


configuration definitions (pair management servers)

 A resource can belong to only one site

Notes:
• You can specify hosts, storage systems, application and copy pair configuration
definitions (pair management servers) for any site. Although you can specify more than
one resource for each site, you cannot specify a particular resource for more than one
site.

• With Replication Manager, you can use the GUI to define logical sites just as you would
define actual physical sites (actual data centers). If you set up separate sites, you can
manage resources more efficiently because the GUI makes it easy to locate a required
resource among the many resources displayed.

Page 2-15
Hitachi Replication Manager Initial Setup
Example of Two Data Centers — Use Case

Example of Two Data Centers — Use Case

[Diagram: The primary site contains a local pair management (PM) server with a command device,
a DB server, a mail server, Subsystem1, and Subsystem2. The remote site contains a remote PM
server with a command device, a DB backup server, a mail backup server, Subsystem3, and
Subsystem4. Subsystem1 is remote-copied to Subsystem3 using Universal Replicator, and
Subsystem2 is remote-copied to Subsystem4 using TrueCopy.]

Page 2-16
Hitachi Replication Manager Initial Setup
Example of Two Data Centers – Use Case

Example of Two Data Centers – Use Case

 Site configuration
• Create a primary site and remote site and place each server based on its
physical location
 Place the local PM server, DB server, and mail server into the primary site
 Place the remote PM server, DB backup server, and mail backup server into the remote site

In the diagram and following pages, CMD stands for Command Device.

 Objective
• Easily perform the pair management operation on the Site menu with
structured resources
• Easily find the target volumes or pair management servers with the site structure
 The pair configuration wizard provides the filtering function by site

Page 2-17
Hitachi Replication Manager Initial Setup
Site Example

Site Example

 Example of sites

Site Properties

 Site structure is shared by all users


• Only the admin user can configure the site
• All the users can see the resources under the sites (within the resources
controlled by resource group)

 One resource belongs to only one site


• Exclusive registration where only one storage system can belong to one site

 There is no hierarchical structure

Page 2-18
Hitachi Replication Manager Initial Setup
Setting Up Sites

Setting Up Sites

 Adding a site

1. In the Explorer menu, click the Shared Views drawer to select the Sites option.

2. Click Add Site to display the Add Site dialog box.

3. Enter the site name in the Name field and then click OK.

 Sites – Adding hosts

Page 2-19
Hitachi Replication Manager Initial Setup
Setting Up Sites

 Sites – Adding storage system

 Adding pair configurations and applications (hosts)

Page 2-20
Hitachi Replication Manager Initial Setup
Resource Groups

Resource Groups

Resource Groups Overview

 Provide access control functionality

 A collection of hosts, storage systems and applications grouped by


purpose and associated with a user for controlled access by the user

 Large environments require security management for resources such as


controlling who can access this storage system. An administrator is
assigned to hosts and systems that are grouped by a site or department

Rules for setting up resource groups:

• Multiple resources can be registered in each resource group, but each resource can be
registered in only one resource group.

• A user can be granted access permissions for multiple resource groups (that is, the user
can be associated with more than one resource group).

• The default group All Resources cannot be deleted or renamed. A new resource group
named All Resources cannot be added.

• All resources are automatically registered in the All Resources group.

• Because a user logged in with the built-in System account is permitted to access all
resources, that user is automatically registered in the All Resources group.

• Any user can be added to the All Resources group if they do not belong to another
resource group.

• Except for users logged in as System, users with the Admin (user management)
permission can belong to resource groups only when they also have the Admin, Modify,
or View (Replication Manager management) permission.

Page 2-21
Hitachi Replication Manager Initial Setup
Resource Groups Overview

 Types of resource groups:


• All resources: System-defined, containing all the
resources in the storage system
• User-defined: Users with administrative
privileges can define a resource group and add
resources, such as hosts and storage systems

 Users can only see the allocated resources


on the GUI

 A user can be associated with multiple


resource groups to increase the range of
operations

Page 2-22
Hitachi Replication Manager Initial Setup
Sites and Resource Group Relationship

Sites and Resource Group Relationship

 Use the GUI to define logical sites just


as you would define actual physical
sites (actual data centers)

 Users can view the resources that


belong to the sites in the resource
groups with which the users have been
associated

Page 2-23
Hitachi Replication Manager Initial Setup
Example of Two Data Centers – Use Case

Example of Two Data Centers – Use Case

This is the same use case presented for Sites. This shows how to create resource groups and
how users are given control to particular resources in Primary Site and Remote Site so that they
can execute the volume copy operations.

 Resource group configuration

• All resources are automatically registered in the All Resources group
• The default group All Resources cannot be deleted or renamed
• Sys admin belongs to the default All Resources group
• Assign the mail admin to the mail resource group
• Assign the DB admin to the DB resource group

[Diagram: Users are Sys Admin, DB Admin, and Mail Admin. A PM server resource group (local and
remote PM servers with command devices), a DB resource group (DB server, DB backup server,
Subsystem1 and Subsystem3 with remote copy via UR), and a mail resource group (mail server,
mail backup server, Subsystem2 and Subsystem4 with remote copy via TrueCopy) span the primary
and remote sites.]

In this diagram and the following slides, UR stands for Hitachi Universal Replicator software.

Page 2-24
Hitachi Replication Manager Initial Setup
Example of Two Data Centers – Use Case

 Objective
• Prevent malicious activity or operational errors by dividing the access scope
 The mail admin can monitor the copy pairs within the assigned resource group

[Diagram: The same two-site configuration as above, showing the PM server, DB, and mail resource
groups overlaid on the primary and remote sites.]

In the diagram:

• The Primary site contains: local PM server, DB server, mail server, subsystem1
and subsystem2.

• The Remote site contains: remote PM server, DB backup server, mail backup
server, subsystem3 and subsystem4.

• There are three resource groups: PM server resource group, DB resource group,
and mail resource group.

• The user, Sys Admin, belongs to the default All Resources group, therefore has
access to all resources on the primary site and remote site and can manage all
copy pairs.

• The user, DB Admin, belongs to DB resource group, therefore has access only
to the DB server, DB backup server, subsystem1 and subsystem3.

• The user, Mail Admin, belongs to mail resource group, therefore has access only
to the mail server, mail backup server, subsystem2 and subsystem4.

Page 2-25
Hitachi Replication Manager Initial Setup
Resource Group Function

Resource Group Function

 Resource groups allow the addition of resources (hosts, storage
systems, and applications) to a user-defined resource group (RG) and the assignment of user
access control

 Create users and assign permissions before adding resources to


resource groups and assigning users for user control
[Diagram: 1. Create Users, 2. Assign Permissions (management roles: User Management,
Replication Manager Software Management), 3. Create Resource Groups (types of resource groups:
All Resources, User Defined) containing hosts and subsystems.]

1. Create users.

2. Assign permissions to the users based on whether they will be managing Replication
Manager or they will also be creating other users.

3. Create resource groups – All Resource group is the default.

4. Add host and storage systems to user-defined resource group.

5. Assign users to user-defined resource group for accessibility control.

Page 2-26
Hitachi Replication Manager Initial Setup
Resource Groups

Resource Groups

 Create resource group

1. In the Explorer menu, click the Administration drawer and then select the Resource
Groups option.

2. Click Create Group to display Create Resource Group dialog box.

3. Enter a name in the Name field and then click OK.

Page 2-27
Hitachi Replication Manager Initial Setup
Resource Groups

 Add hosts

Assign Hosts

To assign hosts to a user-defined resource group:

1. Select the user-defined resource group in the Navigation area.

2. Click the Hosts tab in the Application area.

3. Click Add Hosts on the bottom-right of the Application area. The Add Hosts dialog box
appears.

4. Select the host’s check boxes to add those hosts and then click OK.

Page 2-28
Hitachi Replication Manager Initial Setup
Resource Groups

 Add storage systems and applications

Add Resources

To add a storage system to a user-defined resource group:

1. Select the user-defined resource group in the Navigation area.

2. Click the Storage Systems tab in the Application area.

3. Click Add Storage Systems on the bottom-right of Application area.

4. The Add Storage Systems dialog box appears.

5. Select the storage system check box to add that system and then click OK.

Page 2-29
Hitachi Replication Manager Initial Setup
Resource Groups

 Add users

Assign Users

To assign users to a user-defined resource group:

1. Select the user-defined resource group in the Navigation area.

2. Click the Users tab in the Application area.

3. Click Add Users on the bottom-right of Application area. The Add Users dialog box
appears.

4. Select the user’s checkboxes to add those users and then click OK.

Page 2-30
Hitachi Replication Manager Initial Setup
Resource Group Properties

Resource Group Properties

 Multiple resources can be registered in each resource group, but each


resource can be registered to only one resource group (exclusive
registration)

 Newly created users do not belong to any resource group

 Users can be granted access permissions for multiple resource groups

 All resources are automatically registered in the All Resources group

 The default group, All Resources, cannot be deleted or renamed. A new


resource group named All Resources cannot be added

 The built-in admin account, system, is automatically registered in the All


Resources group

 There is no hierarchical structure

Page 2-31
Hitachi Replication Manager Initial Setup
Instructor Demonstration

Instructor Demonstration

 Initial setup
• Users and permissions
• Refresh settings
• Sites
• Resource groups
Instructor
Demonstration

Page 2-32
Hitachi Replication Manager Initial Setup
Module Summary

Module Summary

 In this module, you should have learned to:


• List the prerequisites for Hitachi Replication Manager
• Perform an initial setup
• Describe the purpose of using sites
• Create a site and use the site function
• Describe the purpose of using resource groups
• Create and use resource groups

Page 2-33
Hitachi Replication Manager Initial Setup
Module Review

Module Review

1. List the initial setup tasks.


2. What information is registered for a site and a resource group?

3. What is the difference between sites and resource groups?

Page 2-34
3. Hitachi Replication Products Overview
Module Objectives

 Upon completion of this module, you should be able to:


• Identify the various replication program products available
• List the benefits and features of in-system and remote replication products
• List the basic replication operations for managing replication pairs

Page 3-1
Hitachi Replication Products Overview
Hitachi Replication Program Products

Hitachi Replication Program Products

Hitachi Replication Products

In-System Replication:
• Hitachi ShadowImage Replication
• Hitachi Copy-on-Write Snapshot
• Hitachi Thin Image

Remote Replication:
• Hitachi TrueCopy Heterogeneous Remote Replication bundle
• Hitachi Universal Replicator

 Hitachi ShadowImage Replication

• Features
 Full physical copy of a volume at a point in time
 Immediately available for concurrent use by other applications
 No host processing cycles required
 No dependence on operating system, file system or database
 All copies are additionally RAID protected
 Up to 9 copies for a source volume (enterprise storage)

[Diagram: the production volume (P-VOL) is copied to a copy of the production volume (S-VOL);
normal processing continues unaffected while the point-in-time copy is used for parallel processing.]

• Benefits
 Protects data availability
 Simplifies and increases disaster recovery testing
 Eliminates the backup window
 Reduces testing and development cycles
 Enables nondisruptive sharing of critical information

In the diagram and following slides:

• P-VOL stands for primary volume.

• S-VOL stands for secondary volume.

Page 3-2
Hitachi Replication Products Overview
Hitachi Replication Products

 Hitachi Copy-on-Write Snapshot

• Features
 Provides nondisruptive, volume “snapshots”
 Uses less space than ShadowImage
 Allows multiple frequent, cost-effective, point-in-time copies
 Immediate read/write access to virtual copy
 Nearly instant restore from any copy

• Benefits
 Protects data availability with rapid restore
 Simplifies and increases disaster recovery testing
 Eliminates the backup window
 Reduces testing and development cycles
 Enables nondisruptive sharing of critical information

[Diagram: the primary host reads and writes the P-VOL; differential data is saved to the pool, and
the secondary host accesses virtual volumes representing snapshots taken at 10:00 am, 11:00 am,
and 12:00 pm.]

 Hitachi Thin Image

• Features
 Creates instant copies of data for backup or application testing purposes
 Saves up to 90% or more disk space by storing only changed data blocks
 Rapidly create up to 1,024 snapshots
 Allows multiple frequent, cost-effective, point-in-time copies
 Immediate read/write access to virtual copy
 Nearly instant restore from any copy

• Benefits
 Protects data availability with rapid restore
 Simplifies and increases disaster recovery testing
 Eliminates the backup window
 Reduces testing and development cycles
 Enables nondisruptive sharing of critical information

[Diagram: the host reads and writes the P-VOL; only changed data is saved to the pool, and the
snapshots are presented as virtual volumes (V-VOLs).]

Page 3-3
Hitachi Replication Products Overview
Hitachi Replication Products

 Hitachi TrueCopy Remote Replication

• Features
 Synchronous support up to 300km (~190 miles)
 Support for mainframe and open environments
 The remote copy is always a “mirror” image
 Provides fast recovery with no data loss
 Installed in the highest profile DR sites around the world

• Benefits
 Complete data protection solution
 Enables more frequent disaster recovery testing
 Improves customer service by reducing downtime of customer-facing applications
 Increases the availability of revenue-producing applications
 Improves competitiveness by distributing time-critical information anywhere and anytime

[Diagram: the P-VOL at the local site is synchronously mirrored to the S-VOL at the remote site.]

 Hitachi Universal Replicator

• Features
 Asynchronous replication
 Leverages Hitachi Virtual Storage Platform
 Performance-optimized disk-based journaling
 Resource-optimized processes
 Advanced 3 Data Center capabilities
 Mainframe and open systems support

• Benefits
 Resource optimization
 Mitigation of network problems and significantly reduced network costs
 Enhanced disaster recovery capabilities through 3 Data Center solutions
 Reduced costs due to “single pane of glass” heterogeneous replication

The following describes the basic technology behind the disk-optimized journals.

• I/O is initiated by the application and sent to the storage system.


• It is captured in cache and sent to the disk journal, at which point it is written to disk.
• The “I/O complete” message is released to the application.
• The remote system pulls the data and writes it to its own journals and then to the
replicated application volumes.
Universal Replicator sorts the I/Os at the remote site by sequence and time stamp (mainframe)
and guarantees data integrity. Note that Universal Replicator offers full support for consistency
groups through the journal mechanism (journal groups).

Page 3-4
Hitachi Replication Products Overview
Tools Used for Setting Up Replication

Tools Used for Setting Up Replication

 Graphical user interface


• Hitachi Storage Navigator program
 Storage centric

• Hitachi Device Manager


 Data center view of resources, limited or no monitoring options, primary focus is provisioning

• Hitachi Replication Manager


 Geographically spread data center and site views, enhanced monitoring and alerting features, primary focus is
replication

 Command line interface


• Used to script replication process

• RAID Manager/CCI
 HORCM configuration files (a sample file sketch follows this list)
 Command device
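The HORCM configuration files mentioned above tell each CCI instance where its command device is
and which volumes belong to which copy groups. The following is a minimal sketch of one instance's
file; the serial number, LDEV ID, group name, and service names are placeholders, the command
device syntax varies by operating system, and a matching file is required for the partner instance.

  # horcm0.conf (example) - local CCI instance
  HORCM_MON
  #ip_address   service   poll(10ms)   timeout(10ms)
  localhost     horcm0    1000         3000

  HORCM_CMD
  #dev_name (command device)
  \\.\CMD-412345

  HORCM_LDEV
  #dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
  dbgrp        db_vol1    412345    01:20            0

  HORCM_INST
  #dev_group   ip_address   service
  dbgrp        localhost    horcm1

The service names (horcm0, horcm1) must resolve to UDP ports, for example through entries in the
services file on the pair management server.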

Requirements for All Replication Products

 Any volumes involved in replication operations (source and destination) should be:
• Same size (in blocks)

• Must be mapped to a FED (CHA) port


 Source can be online and in use
 Destination must not be in use/mounted

• Same emulation type (Open-3 with Open-3, Open-V with Open-V)

• Intermix of RAID levels and drive type is supported

• Capacity based licensing:


 You must purchase the license and make sure that there is enough licensed capacity according to the capacity
of pairs you are going to create
 Source, destination, and reserved volumes require licenses for volume capacity and management
 For thin-provisioned (HDP) volumes, only 'consumed' storage from the pool needs replication license. (This can
be less than total virtual volume size.)

Note: If the source is a LUSE volume, then the destination must be an identical LUSE volume
with the same size and structure. This will reduce the number of copy pairs possible.

Page 3-5
Hitachi Replication Products Overview
Basic Operations

Basic Operations

Replication Operations

 Basic operations when working with replication products:


• Paircreate, Pairsplit, Pairresync

 Commands are consistent across products (in-system or remote


replication), but implementation varies depending on the product
• In-system: All operations with LDEVs pairings within the same frame
• Remote: All operations with LDEVs pairings across frames
• Use manual to identify product-specific operations with above commands

 A volume that has the source data is called a Primary Volume (P-VOL); a
volume to which the data is copied is called a Secondary Volume
(S-VOL)

 The pairing of P-VOL and S-VOL is called a copy pair

A copy pair is a pair of volumes linked by the storage system's volume replication functionality
(such as ShadowImage and TrueCopy). Copy pairs are also called paired volumes.

The following types of volumes make up a copy pair:

• Primary volume (P-VOL): The copy source volume.

• Secondary volume (S-VOL): The destination volume to which the contents of the
primary volume are copied.
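When copy pairs are handled from the command line rather than the Replication Manager GUI, the
CCI pairdisplay command is the usual way to check a pair and its status. The group name below is a
placeholder that matches the earlier HORCM sketch, and the exact output columns vary by storage
system and CCI version.

  # Show the pairs in a copy group, with copy progress and hexadecimal LDEV numbers
  pairdisplay -g dbgrp -fcx

Replication Manager presents the same P-VOL/S-VOL and status information graphically; the
command is shown only to connect the GUI terms to their CCI equivalents.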

Page 3-6
Hitachi Replication Products Overview
Copy Operations

Copy Operations

 Data copy operations


• Initial copy
 Results in all data being copied from P-VOL to S-VOL
 Copies everything including empty blocks
• Update copy
 Only differentials are copied

Page 3-7
Hitachi Replication Products Overview
Replication Operations

Replication Operations

 paircreate
• Select a volume and issue paircreate
• Initial copy (full copy) takes place. Track by track copy of P-VOL to S-VOL
happens regardless of the amount of data in P-VOL
• Volume status changes from SMPL to PAIR
Time frame   P-VOL Status   P-VOL Host Access   S-VOL Status   S-VOL Host Access
Before       SMPL           R/W                 SMPL           R/W or R
During       COPY(PD)       R/W                 COPY(PD)       R
After        PAIR           R/W                 PAIR           R

The paircreate command generates a new volume pair from two unpaired volumes. The
paircreate command can create either a paired logical volume or a group of paired volumes.

When issuing paircreate, you can select the pace for the initial copy operation. The pace is
specified as the number of tracks to copy at a time (1-15). A smaller number of tracks
minimizes the impact of copy operations on system I/O performance, while a larger number
of tracks completes the initial copy operation as quickly as possible. The best choice is based on
the amount of write activity on the P-VOL and the amount of time elapsed between update
copies.

Simplex (SMPL) status of a volume indicates that the volume is not used in any replication
operation.
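A minimal CCI sketch of creating a pair from a pair management server follows, continuing the
earlier HORCM example. The group name, copy pace, and timeout are placeholders, and the sketch
assumes the HORCM instances for both sides are configured and running.

  # Create the pair from the P-VOL side (-vl), copying 8 tracks at a time
  paircreate -g dbgrp -vl -c 8

  # Wait until the initial copy finishes and the pair reaches PAIR status
  pairevtwait -g dbgrp -s pair -t 3600

  # Confirm the result
  pairdisplay -g dbgrp -fcx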

Page 3-8
Hitachi Replication Products Overview
Replication Operations

 pairsplit
• Update copy takes place to flush all pending changes
• Volume status changes to PSUS
• S-VOL is the Point-in-Time (PiT) copy and now available to applications for
read/write
• Differential bitmaps track changes to P-VOL and S-VOL while pair is split
Time frame   P-VOL Status   P-VOL Host Access   S-VOL Status   S-VOL Host Access
Before       PAIR           R/W                 PAIR           R
During       COPY(SP)       R/W                 COPY(SP)       R
After        PSUS           R/W                 SSUS           R/W

The pairsplit command stops updates to the secondary volume of a pair and can either
maintain (status = PSUS) or delete (status = SMPL) the pairing status of the volumes. It can be
applied to a paired logical volume or a group of paired volumes. The pairsplit command allows
read access or read/write access to the secondary volume, depending on the selected options.

Page 3-9
Hitachi Replication Products Overview
Replication Operations

 pairresync
• S-VOL is no longer available to host
• S-VOL and P-VOL differential bitmaps are merged
• Changed tracks are marked and written from P-VOL to S-VOL
• Volume status changes to PAIR

Time frame   P-VOL Status   P-VOL Host Access   S-VOL Status   S-VOL Host Access
Before       PSUS           R/W                 SSUS           R/W
During       COPY(RS)       R/W                 COPY(RS)       R
After        PAIR           R/W                 PAIR           R

Replication software allows you to perform pairresync operations on split and suspended pairs:

• Pairresync for split pair – When a pairresync operation is performed on a split pair
(status = PSUS), the system merges the S-VOL track map into the P-VOL track map and
then copies all flagged tracks from the P-VOL to the S-VOL. This also greatly reduces the
time needed to resynchronize the pair.

• Pairresync for suspended pair – When a pairresync operation is performed on a


suspended pair (status = PSUE), the storage system copies all data on the P-VOL to the
S-VOL, since all P-VOL tracks were flagged as difference data when the pair was
suspended. The pairresync operation for suspended pairs is equivalent to and takes as
long as the ShadowImage initial copy operation.
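The corresponding CCI sketch (group name is a placeholder). The -restore form reverses the copy
direction and is shown separately; use it only when you deliberately want S-VOL data copied back
over the P-VOL:

  # Resynchronize the split pair (differences are copied from P-VOL to S-VOL)
  pairresync -g dbgrp
  pairevtwait -g dbgrp -s pair -t 3600

  # Restore resync: copy differences from the S-VOL back to the P-VOL (use with caution)
  pairresync -g dbgrp -restore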

Page 3-10
Hitachi Replication Products Overview
Replication Operations

 pairsplit –S (delete)
• Delete the pair and stop replication operations for the pair
• Immediate access to S-VOL
 No update copy (pending changes are lost, ignored)
• Changes volume status back to simplex
Time frame   P-VOL Status   P-VOL Host Access   S-VOL Status   S-VOL Host Access
Before       PAIR           R/W                 PAIR           R
After        SMPL           R/W                 SMPL           R/W

The pairsplit -S operation (delete pair) stops the copy operations to the S-VOL of the pair and
changes the pair status of both volumes to SMPL.

When a pair is deleted, the pending update copy operations for the pair are discarded, and the
status of the P-VOL and S-VOL is changed to SMPL.
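The matching CCI sketch is a single command (group name is a placeholder); afterwards pairdisplay
shows both volumes as SMPL:

  # Delete the pair; pending update copies are discarded
  pairsplit -g dbgrp -S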

Page 3-11
Hitachi Replication Products Overview
Module Summary

Module Summary

 In this module, you should have learned to:


• Identify the various replication program products available
• List the benefits and features of in-system and remote replication products
• List the basic replication operations for managing replication pairs

Module Review

1. List the in-system and remote replication products.


2. What is the difference between initial copy and update copy?
3. List the basic operations for replication.

Page 3-12
4. Hitachi ShadowImage Replication
Operations with Replication Manager
Module Objectives

 Upon completion of this module, you should be able to:


• Describe licensing considerations for Hitachi ShadowImage Replication
• Describe the key features
• Perform ShadowImage operations through Replication Manager

Page 4-1
Hitachi ShadowImage Replication Operations with Replication Manager
Licensing Considerations

Licensing Considerations

 A license key is required


• A separate license key is required for each storage system
• Licenses are purchased in capacity increments

 License capacity is required for ShadowImage primary volumes,


secondary volumes and reserved volumes

 Additional license capacity is required for P-VOLs and pool volumes that
are used by Hitachi Copy-on-Write Snapshot

If Hitachi Dynamic Provisioning volumes are used as P-VOLs or S-VOLs on enterprise storage:

• The capacity of the pool used by the Dynamic Provisioning volume will affect the license
capacity.

• Include the Dynamic Provisioning pool capacity when determining the ShadowImage
license capacity.

• If the amount of data exceeds the license capacity, you can use the volumes for an
additional 30 days. Once 30 days have passed, you cannot do any operations except
suspending or deleting pairs.

Page 4-2
Hitachi ShadowImage Replication Operations with Replication Manager
ShadowImage In-System Replication Features

ShadowImage In-System Replication Features

 ShadowImage Replication
 Full physical copy of a volume at a point in time
 Immediately available for concurrent use by other applications
 No host processing cycles required
 No dependence on operating system, file system, or database
 All copies are additionally RAID protected
 Up to 9 copies for a source volume (enterprise storage)

[Diagram: the production volume (P-VOL) is copied to a copy of the production volume (S-VOL);
normal processing continues unaffected while the point-in-time copy is used for parallel processing.]

Hitachi ShadowImage In-System Replication software bundle is a nondisruptive, host-


independent data replication solution for creating copies of any customer-accessible data within
a single Hitachi storage system. ShadowImage also increases the availability of revenue-
producing applications by enabling backups to run concurrently with production.

ShadowImage copies:

• Are RAID protected to ensure the highest data availability.

• Can provide immediate access and sharing of information for decision support, testing
and development.

• Offers nearly instant recovery from data corruption (disk-based copies).

Page 4-3
Hitachi ShadowImage Replication Operations with Replication Manager
Key Features

Key Features

 Replicates information within Hitachi enterprise storage systems without disrupting


operations

 Once copied, data can be used for:


• Data warehousing and data mining applications
• Backup and recovery
• Application development

 Supports the creation of up to 9 system-protected copies from each source volume


(enterprise storage)

 High performance achieved through asynchronous copy facility to secondary volumes

 Up to 128 concurrent copies

 Restrictions
• The following volumes cannot be used for creating pairs
 Hitachi Universal Replicator journal volumes
 Virtual volumes (except Dynamic Provisioning volumes)
 Copy-on-Write pool volumes
 Network attached storage (NAS) system volumes cannot be
S-VOLs
 Any data retention volume set as “S-VOL DISABLE”

Data Retention Utility allows you to assign the S-VOL Disable attribute. This could be used for
production volumes to protect them from accidental overwriting due to a copy operation.

Page 4-4
Hitachi ShadowImage Replication Operations with Replication Manager
ShadowImage Commands

ShadowImage Commands

 Paircreate: Establishes a pair


 Pairsplit: Splits so that S-VOL can be accessed by an application
 Pairresync: Re-establishes the pair
• Normal: P-VOL changed tracks are sent to S-VOL, PAIR status
• Quick: PAIR status occurs while P-VOL changed tracks are being sent to
S-VOL
• Restore: Sends S-VOL changes to P-VOL. Caution!
• Quick Restore: Changes P-VOL and S-VOL logical device (LDEV) mapping.
Use with Swap and Freeze function
 Suspend (pairsplit –E): Forces a split to Error Status. No
updates are sent
 Delete (pairsplit –S): Deletes a pair. No updates are sent

When issuing paircreate, you can select the pace for the initial copy operation:

• Slower

• Medium

• Faster

The slower pace minimizes the impact of operations on system I/O performance, while the
faster pace completes the initial copy operation as quickly as possible. The best timing is based
on the amount of write activity on the P-VOL and the amount of time elapsed between update
copies.

Page 4-5
Hitachi ShadowImage Replication Operations with Replication Manager
Paircreate

Paircreate

 Initial copy operation

[Diagram: pair status during the initial copy: SMPL at the start, COPY(PD) while the initial copy
from the P-VOL to the S-VOL is in progress, and PAIR when the copy is finished.]

Initial Copy operation takes place when you create a new volume pair. The Initial Copy
operation copies all data on the P-VOL to the associated S-VOL. The P-VOL remains available to
all hosts for read and write I/Os throughout the Initial Copy operation. Write operations
performed on the P-VOL during the Initial Copy operation will be duplicated at the S-VOL by
Update Copy operations after the initial copy is complete. The status of the pair is COPY(PD)
(PD = pending) while the Initial Copy operation is in progress. The status changes to PAIR
when the initial copy is complete.

 Initial copy rules


• Two LDEVs that compose a paired volume must be the same emulation type
and size
• Supports LUSE volumes (same structure required), Virtual Logical LUN (VLL)
and Cache Resident volumes
• Neither RAID levels nor HDD types have to match

Definitions:

• VLL stands for Virtual Logical LUN, the method used to create custom volume sizes.

• LUSE – Logical Unit that consists of multiple LDEVs.

Page 4-6
Hitachi ShadowImage Replication Operations with Replication Manager
Paircreate

Cascade connection

[Diagram: write data goes to the P-VOL and is asynchronously written to up to three Level 1
S-VOLs; each Level 1 S-VOL can in turn have up to two Level 2 S-VOLs, for a total of 9 copies
(three Level 1 and six Level 2).]

Hitachi ShadowImage Heterogeneous Replication enables you to maintain system-internal


copies of all user data for purposes such as data backup or duplication. The RAID protected
duplicate volumes are created within the same system as the primary volume at hardware
speeds. ShadowImage Heterogeneous Replication is used for UNIX-based and PC server data.
It can provide up to 9 (enterprise storage) duplicates of one primary volume for UNIX-based
and PC server data only.

Hitachi ShadowImage for Mainframe protects mainframe data in the same manner. For
mainframes, ShadowImage Heterogeneous Replication can provide up to 3 duplicates of 1
primary volume.

In Storage Navigator, the paircreate command creates the first Level 1 “S” volume. The set
command can be used to create additional Level 1 “S” volumes, and the cascade command can be
used to create the Level 2 “S” volumes off the Level 1 “S” volumes.

Page 4-7
Hitachi ShadowImage Replication Operations with Replication Manager
Paircreate

 Differential bitmap function


• Differential bitmaps are maintained in shared/control memory for each
ShadowImage data volume
• Differential bitmaps designate changes to primary data volumes during Initial
Copy and to both primary and secondary volumes while pairs are split
• When data volumes are resynchronized, differential bitmaps for the two
volumes are merged and all changed tracks are sent from primary to
secondary volumes (normal resync)
• Differential bitmap is also known as Changed Track Mapping

The differential data (updated by write I/Os during split or suspension) between the primary
data volume and the secondary data volume is stored in each track bitmap. When a split or
suspended pair is resumed (pairresync), the primary storage system merges the primary data
volume and secondary data volume bitmaps, and the differential data is copied to the
secondary data volume.

Page 4-8
Hitachi ShadowImage Replication Operations with Replication Manager
Update

Update
 Update copy operations

[Diagram: host I/O to the P-VOL of a PAIR accumulates as differential data, which is periodically
sent to the S-VOL by an update copy.]

The Update Copy operation sends changed data to the S-VOL of a pair after the Initial Copy operation is
complete. Update Copy operations take place only for duplex pairs (status = PAIR).

As write I/Os are performed on a duplex P-VOL, the system stores a map of the P-VOL differential data,
and then performs Update Copy operations periodically based on the amount of differential data present
on the P-VOL, as well as the elapsed time between Update Copy operations.

 Asynchronous access to secondary volumes – No impact to host I/O

[Diagram: (1) the server issues a write I/O, which lands in cache memory; (2) write complete is
returned to the host; (3) the data is asynchronously written to the P-VOL and to up to 9 S-VOLs at
the best timing.]

Fast response to the host side, and intelligent asynchronous copy:

• The system replies “write complete” to the host as soon as the data is written to cache memory
• Data in cache memory is asynchronously written to the P-VOL and S-VOL at the best timing

The Update Copy operations are not performed for pairs with the following status:

• COPY (PD) (pending duplex)


• COPY (SP) (steady split pending)
• PSUS (SP) (quick split pending)
• PSUS (split)
• COPY (RS) (resync)
• COPY (RS-R) (resync-reverse)
• PSUE (suspended)

Page 4-9
Hitachi ShadowImage Replication Operations with Replication Manager
Pairsplit

Pairsplit

 Steady split illustration

[Diagram timeline: at 10:00 AM the pair is in PAIR status and P-VOL tracks 3, 10, 15, and 18 are
dirty from host I/O; at 10:00:01 AM a pairsplit (steady) is issued and tracks 3, 10, 15, and 18 are
sent from the P-VOL to the S-VOL; at 10:00:55 AM the status becomes PSUS and both volumes
track further changes in their own bitmaps.]

1. The P-VOL and S-VOL are in PAIR status as of 10:00 AM. P-VOL Tracks 3, 10, 15 and 18
are marked as dirty because of Host I/O.

2. At 10:00:01 AM, a pairsplit (Steady) command is issued. Tracks 3, 10, 15 and 18 are
sent across to the S-VOL from the P-VOL.

3. When the update operation in step 2 is complete, the status of the P-VOL and S-VOL is
changed to PSUS. During this state, there are track bitmaps attached to both the P-VOL
and the S-VOL. These bitmaps keep track of changes on both the P-VOL and the S-VOL.

Page 4-10
Hitachi ShadowImage Replication Operations with Replication Manager
Pairsplit

 Quick split illustration

[Diagram timeline: at 10:00 AM the pair is in PAIR status and P-VOL tracks 3, 10, 15, and 18 are
dirty from host I/O; at 10:00:01 AM a pairsplit (quick) is issued and the status changes to PSUS
immediately, making the S-VOL available at once; tracks 3, 10, 15, and 18 are then sent from the
P-VOL to the S-VOL in the background.]

1. The P-VOL and S-VOL are in PAIR status as of 10:00 AM. P-VOL Tracks 3, 10, 15 and 18
are marked as dirty because of host I/O.

2. The status of the P-VOL and the S-VOL is changed instantly to PSUS and the S-VOL is
immediately available for reads and writes.

3. Tracks 3, 10, 15 and 18 are sent across to the S-VOL from the P-VOL in the background.

o If during this Update Copy operation there is any I/O to tracks 3, 10, 15, or 18
on the S-VOL, then the system fetches the data from the P-VOL.

o During the PSUS state, there are track bitmaps attached to both the P-VOL and
the S-VOL. These bitmaps keep track of changes on both the P-VOL and the S-
VOL.

Page 4-11
Hitachi ShadowImage Replication Operations with Replication Manager
Pairsplit

 Differential bitmap function


• While pairs are split, both P-VOL and S-VOL differential bitmaps may be
active
• Changed tracks on P-VOL
PVOL | X | | X | | | | X | | X | | | | | | | | . . . .

SVOL | X | X | | | | | | | | | | | X | | | . . . .

• S-VOL mounted with Write Enabled

This diagram illustrates the usage of differential bitmaps after a pair is suspended. While
suspended, updates can occur to P-VOL. Changes can also occur on S-VOL if it is mounted with
Write Enabled. The bitmaps denote any changed tracks while the pair is suspended.

Page 4-12
Hitachi ShadowImage Replication Operations with Replication Manager
Pairresync

Pairresync

 Normal resync illustration

[Diagram timeline: at 10:00 AM the pair is in PSUS status with P-VOL tracks 10, 15, 18, and 29 and
S-VOL tracks 10, 19, and 23 marked dirty; at 10:00:01 AM a pairresync (normal) is issued, the
bitmaps are merged, and tracks 10, 15, 18, 19, 23, and 29 are sent from the P-VOL to the S-VOL;
at 10:00:45 AM the status becomes PAIR.]

1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. P-VOL Tracks 10, 15, 18
and 29 are marked as dirty.

2. Tracks 10, 19 and 23 are marked as dirty on the track bitmap for the S-VOL.

3. At 10:00 AM, a pairresync (Normal) command is issued. The track bitmaps for the P-
VOL and S-VOL are merged. The resulting track bitmap has tracks 10, 15, 18, 19, 23
and 29 marked as dirty. These tracks are sent from the P-VOL to the S-VOL as part of
an Update Copy operation.

4. When the Update Copy operation in step 2 is complete, the P-VOL and S-VOL are
declared as a PAIR.

Page 4-13
Hitachi ShadowImage Replication Operations with Replication Manager
Pairresync

 Quick resync illustration

[Diagram timeline: at 10:00 AM the pair is in PSUS status with P-VOL tracks 10, 15, 18, and 29 and
S-VOL tracks 10, 19, and 23 marked dirty; at 10:00:01 AM a pairresync (quick) is issued and the
status changes to PAIR immediately; the merged tracks 10, 15, 18, 19, 23, and 29 are then sent
from the P-VOL to the S-VOL in the background.]

1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18 and
29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19 and 23 are
marked as dirty on the track bitmap for the S-VOL.

2. At 10:00:01 AM, a pairresync (quick) command is issued. The status of the P-VOL and
the S-VOL changes instantly to PAIR.

3. The track bitmaps for the P-VOL and S-VOL are merged. The resulting track bitmap has
tracks 10, 15, 18, 19, 23 and 29 marked as dirty. These tracks are sent from the P-VOL
to the S-VOL as part of a Update Copy operation in the background.

 Quick resync
• Command completed in less than one sec/pair
• Copies only delta bitmap
• Delta data will be copied during PAIR status
• Command by RAID Manager
[Diagram: before the quick resync the pair is in PSUS status, both the P-VOL and the S-VOL are
read/write enabled, and delta bitmaps track the changes; after the quick resync request the status
is PAIR, the S-VOL no longer accepts host access, and the delta data is copied asynchronously.]

Page 4-14
Hitachi ShadowImage Replication Operations with Replication Manager
Pairresync

 Differential bitmap function


• Changes that have occurred while pair is split
P-VOL |X| |X| | | |X| |X| | | | | | | |. . . .

S-VOL |X| | |X| | | | | | | |X| | | | | |. . .

• Merge
P-VOL |X| |X|X| | |X| |X| |X| | | | |. . . .

• P-VOL differential bitmap after ‘OR’ operation

This diagram illustrates the use of differential bitmaps, after a pair is resynchronized.

While suspended, updates may occur to both primary and secondary volumes. The bitmaps
denote any changed tracks while the pair is suspended.

When the resync command is issued, the S-VOL differential bitmaps are merged into the P-
VOL differential bitmaps. Then all of the changed tracks are copied from the P-VOL to the S-
VOL. This process results in overwrites for any changed S-VOL data.

For a resync restore operation, P-VOL bitmaps are merged into the S-VOL bitmaps and all
changed tracks are written from the S-VOL to the P-VOL, thus overwriting production data.

Universal Replicator, TrueCopy Synchronous/Asynchronous, and ShadowImage Heterogeneous


Replication all use differential bitmaps in exactly the same way.

Page 4-15
Hitachi ShadowImage Replication Operations with Replication Manager
Pairresync

 Resync restore illustration

[Diagram: resync restore; the merged dirty tracks are copied from the S-VOL back to the P-VOL.]

1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18 and
29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19 and 23 are
marked as dirty on the track bitmap for the S-VOL.

2. At 10:00 AM, a pairresync (restore) command is issued. The track bitmaps for the P-
VOL and S-VOL are merged. The resulting track bitmap has tracks 10, 15, 18, 19, 23
and 29 marked as dirty. These tracks are sent from the S-VOL to the P-VOL as part of
an Update Copy operation.

3. When the Update Copy operation in step 2 is complete, the P-VOL and S-VOL are
declared as a PAIR.

Page 4-16
Hitachi ShadowImage Replication Operations with Replication Manager
Pairresync

 Quick restore illustration

[Diagram timeline: at 10:00 AM the pair is in PSUS status; the P-VOL is LDEV 2:03 in RAID group
1-1 and the S-VOL is LDEV 1:04 in RAID group 2-3, each with its own dirty tracks; at 10:00:01 AM
a pairresync (quick restore) is issued and the LDEV locations are swapped; at 10:00:03 AM the
status is PAIR, with the P-VOL now in RAID group 2-3 and the S-VOL in RAID group 1-1.]

1. The status of the P-VOL and the S-VOL is PSUS as of 10:00 AM. Tracks 10, 15, 18 and
29 are marked as dirty on the track bitmap for the P-VOL. Tracks 10, 19 and 23 are
marked as dirty on the track bitmap for the S-VOL. The P-VOL LDEV ID is 2:03 and the
RAID Group that the PVOL belongs to is 1-1. The S-VOL LDEV ID is 1:04 and the RAID
Group that the S-VOL belongs to is 2-3.

2. At 10:00:01, a Quick Restore command is issued. The LDEV locations are swapped so
that the P-VOL now belongs to RAID Group 2-3 and the S-VOL now belongs to RAID
Group 1-1.

3. At 10:00:03, after the SWAP operation is complete the P-VOL and S-VOL are declared as
a PAIR.

4. If you want to swap back, just do a new pairresync restore (quick) at a later point in
time.

Page 4-17
Hitachi ShadowImage Replication Operations with Replication Manager
Pairresync

 Quick restore
• Extremely fast recovery of P-VOL from an S-VOL

 Quick resync
• Rapid resynchronization of an S-VOL from a P-VOL

 Quick split
• Ability to rapidly suspend mirroring operation and provide availability to an
S-VOL

Page 4-18
Hitachi ShadowImage Replication Operations with Replication Manager
Commands

Commands
 paircreate
• Select a volume and issue paircreate
• Initial copy takes place
• Volume status changes to PAIR
Time Frame   P-VOL Status   P-VOL Host Access   S-VOL Status   S-VOL Host Access
Before       SMPL           R/W                 SMPL           R/W or R
During       COPY(PD)       R/W                 COPY(PD)       R
After        PAIR           R/W                 PAIR           R

The paircreate command generates a new volume pair from two unpaired volumes. It can
create either a paired logical volume or a group of paired volumes.

 pairsplit – Steady
• Update copy takes place
• Volume status changes to PSUS
• S-VOL is now available
• Differential bitmaps track changes to P-VOL and S-VOL while pair is split
Time Frame   P-VOL Status   P-VOL Host Access   S-VOL Status   S-VOL Host Access
Before       PAIR           R/W                 PAIR           R
During       COPY(SP)       R/W                 COPY(SP)       R
After        PSUS           R/W                 SSUS           R/W

The pairsplit command stops updates to the secondary volume of a pair and can either
maintain (status = PSUS) or delete (status = SMPL) the pairing status of the volumes. It can be
applied to a paired logical volume or a group of paired volumes. The pairsplit command allows
read access or read/write access to the secondary volume, depending on the selected options.
You can create and split ShadowImage pairs simultaneously using the -split option of the
paircreate command.
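A small sketch of that combined create-and-split operation (the group name is a placeholder and
the sketch assumes the HORCM instances and command device are already configured):

  # Create the ShadowImage pair and split it once the copy completes
  paircreate -g sigrp -vl -c 8 -split

  # Wait for the split (PSUS) status before using the S-VOL
  pairevtwait -g sigrp -s psus -t 3600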

Page 4-19
Hitachi ShadowImage Replication Operations with Replication Manager
Commands

 pairsplit – Quick
• Volume status changes to PSUS
• Update copy takes place in background
• S-VOL is available instantly
• Differential bitmaps track changes to P-VOL and S-VOL while pair is split
Time Frame   P-VOL Status   P-VOL Host Access   S-VOL Status   S-VOL Host Access
Before       PAIR           R/W                 PAIR           R
During       COPY(SP)       R/W                 SSUS           R/W
After        PSUS           R/W                 SSUS           R/W

The pairsplit Quick operation speeds up the normal pairsplit operation by changing
the pair status to PSUS first and copying the data in the background.
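
Whether a split is performed as quick or normal can also be requested from the CCI command line with the -fq option on releases that support it; treat the option and its values below as an assumption to verify against your CCI version:

    # Quick split: status goes to PSUS immediately, data is copied in the background
    pairsplit -g SI_CG01 -fq quick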

Page 4-20
Hitachi ShadowImage Replication Operations with Replication Manager
Commands

 pairresync – Normal
• The data on S-VOL becomes no longer available
• S-VOL and P-VOL differential bitmaps are merged
• Changed tracks are marked and written from P-VOL to S-VOL
• Volume status changes to PAIR
              P-VOL                       S-VOL
Time Frame    VOL Status    Host Access   VOL Status    Host Access
Before        PSUS          R/W           SSUS          R/W
During        COPY(RS)      R/W           COPY(RS)      R
After         PAIR          R/W           PAIR          R

ShadowImage Heterogeneous Replication software allows you to perform normal and quick
pairresync operations on split and suspended pairs, but reverse and quick restore pairresync
operations can only be performed on split pairs:

• pairresync for split pair – When a normal/quick pairresync operation is performed on
a split pair (status = PSUS), the system merges the S-VOL track map into the P-VOL
track map and then copies all flagged tracks from the P-VOL to the S-VOL. When a
reverse or quick restore pairresync operation is performed on a split pair, the system
merges the P-VOL track map into the S-VOL track map and then copies all flagged
tracks from the S-VOL to the P-VOL. This ensures that the P-VOL and S-VOL are
properly resynchronized in the desired direction. This also greatly reduces the time
needed to resynchronize the pair.

• pairresync for suspended pair – When a normal/quick pairresync operation is
performed on a suspended pair (status = PSUE), the subsystem copies all data on the
P-VOL to the S-VOL, since all P-VOL tracks were flagged as difference data when the pair
was suspended. Reverse and quick restore pairresync operations cannot be performed
on suspended pairs. The normal pairresync operation for suspended pairs is equivalent
to and takes as long as the ShadowImage Initial Copy operation.
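
A minimal CCI sketch of a normal resynchronization of the split group (illustrative names as before):

    # Merge the differential bitmaps and copy only the changed tracks P-VOL to S-VOL
    pairresync -g SI_CG01
    pairevtwait -g SI_CG01 -s pair -t 3600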

Page 4-21
Hitachi ShadowImage Replication Operations with Replication Manager
Commands

 pairresync – Quick
• S-VOL is no longer available to host
• Volume status changes to PAIR
• S-VOL and P-VOL differential bitmaps are merged
• Changed tracks are marked and written from P-VOL to S-VOL in the
background
              P-VOL                       S-VOL
Time Frame    VOL Status    Host Access   VOL Status    Host Access
Before        PSUS          R/W           SSUS          R/W
During        PAIR          R/W           COPY(RS)      R
After         PAIR          R/W           PAIR          R

Page 4-22
Hitachi ShadowImage Replication Operations with Replication Manager
Commands

 pairresync – Restore
• S-VOL is no longer available to host
• S-VOL and P-VOL differential bitmaps are merged
• Changed tracks are marked and written from S-VOL to P-VOL
• Volume status changes to PAIR
 Can only be done from L1 to P-VOL
              P-VOL                       S-VOL
Time Frame    VOL Status    Host Access   VOL Status    Host Access
Before        PSUS          R/W           SSUS          R/W
During        COPY(RS-R)    NA            COPY(RS-R)    R
After         PAIR          R/W           PAIR          R

The restore pairresync operation synchronizes the P-VOL with the S-VOL. The copy direction
for a restore (reverse) pairresync operation is from the S-VOL to the P-VOL.

The pair status during a restore resync operation is COPY(RS-R), and the P-VOL and S-VOL
become inaccessible to all hosts for write operations. As soon as the reverse pairresync
operation is complete, the P-VOL becomes accessible. The restore pairresync operation can
only be performed on split pairs, not on suspended pairs. The restore pairresync operation
cannot be performed on L2 cascade pairs.

The P-VOL remains read-enabled during the restore pairresync operation only to enable the
volume to be recognized by the host. The data on the P-VOL is not guaranteed until the restore
pairresync operation is complete and the status changes to PAIR.
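
From CCI, the restore direction is requested with the -restore option of pairresync. A sketch with the same illustrative group name, assuming host access to the P-VOL has been quiesced first:

    # Copy changed tracks from the S-VOL back to the P-VOL (split pairs only)
    pairresync -g SI_CG01 -restore
    pairevtwait -g SI_CG01 -s pair -t 3600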

Page 4-23
Hitachi ShadowImage Replication Operations with Replication Manager
Commands

 pairresync – Quick restore


• Swap of LDEV ID takes place
 Can only be done from L1 to P-VOL
 Swap and freeze option available
              P-VOL                       S-VOL
Time Frame    VOL Status    Host Access   VOL Status    Host Access
Before        PSUS          R/W           SSUS          R/W
During        COPY(RS-R)    NA            COPY(RS-R)    R
After         PAIR          R/W           PAIR          R

The Quick Restore operation speeds up the reverse resync operation by changing the volume
map to swap the contents of the P-VOL and S-VOL without copying the S-VOL data to the P-
VOL. P-VOL and S-VOL are resynchronized when update copy operations are performed for
pairs in the PAIR status. The pair status during a Quick Restore operation is COPY(RS-R) until
the volume map change is complete. P-VOL and S-VOL become inaccessible to all hosts for
write operations during a quick restore operation. Quick restore cannot be performed on L2
cascade pairs.

The P-VOL remains read-enabled during the Quick Restore operation only to enable the volume
to be recognized by the host. The data on the P-VOL is not guaranteed until the Quick Restore
operation is complete and the status changes to PAIR.

Page 4-24
Hitachi ShadowImage Replication Operations with Replication Manager
Commands

 Quick restore with or without Swap and Freeze option

[Diagram: P-VOL and S-VOL contents ("abcde" and "12345" represent user data) before and after a Quick Restore, shown with and without the Swap and Freeze option]

• With the Swap and Freeze option, the P-VOL remains unchanged after the quick restore operation, because the Swap and Freeze option suppresses Update Copy operations.
• Without the Swap and Freeze option, the P-VOL and S-VOL are resynchronized when ordinary Update Copy operations are performed after the Quick Restore operation.

The Swap&Freeze option allows the S-VOLs of a ShadowImage pair to remain unchanged
after the Quick Restore operation. If the Quick Restore operation is performed on a
ShadowImage pair with the Swap and Freeze option, Update Copy operations are suppressed,
and are thus not performed for pairs in the PAIR status after the Quick Restore operation. If the
quick restore operation is performed without the Swap and Freeze option, the P-VOL and S-VOL
are resynchronized when Update Copy operations are performed for pairs in the PAIR status.

Note: Make sure that the Swap and Freeze option remains in effect until the pair status changes
to PAIR after the quick restore operation. The quick restore is done from CCI but the Swap and
Freeze option is set from Storage Navigator.

Page 4-25
Hitachi ShadowImage Replication Operations with Replication Manager
Commands

 pairsplit –E (suspend)
• Immediate access to S-VOL
 No update copy
• Forces an initial copy on resync
 Marks the entire P-VOL as dirty

              P-VOL                       S-VOL
Time Frame    VOL Status    Host Access   VOL Status    Host Access
Before        PAIR          R/W           PAIR          R
After         PSUE          R/W           SSUE          R/W

The ShadowImage pairsplit -E operation suspends the ShadowImage copy operations to the S-
VOL of the pair. A user can suspend a ShadowImage pair at any time. When a ShadowImage
pair is suspended (status = PSUE) the system stops performing ShadowImage copy operations
to the S-VOL, continues accepting write I/O operations to the P-VOL and marks the entire P-
VOL track map as difference data. When a pairresync operation is performed on a suspended
pair, the entire P-VOL is copied to the S-VOL. The reverse and quick restore pairresync
operations cannot be performed on suspended pairs.

The subsystem automatically suspends a ShadowImage pair when it cannot keep the pair
mirrored for any reason. When the subsystem suspends a pair, sense information is generated
to notify the host. The subsystem automatically suspends a pair under the following conditions:

• When the ShadowImage volume pair has been suspended or deleted from the UNIX/PC
server host using CCI

• When the storage system detects an error condition related to an update copy operation

• When the P-VOL and/or S-VOL track map in shared memory is lost (for example, due to
offline microprogram exchange). This applies to COPY(SP) and PSUS(SP) pairs only. For
PAIR, PSUS, COPY(RS), or COPY(RS-R) pairs, the pair is not suspended, but the entire
P-VOL (S-VOL for reverse or quick restore pairresync) is marked as difference data.
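
A CCI sketch of a suspend and the full copy that follows on the next resync (illustrative group name, as in the earlier examples):

    # Suspend the pair: status becomes PSUE and the entire P-VOL is marked dirty
    pairsplit -g SI_CG01 -E

    # A later resync therefore copies the whole P-VOL, like an initial copy
    pairresync -g SI_CG01
    pairevtwait -g SI_CG01 -s pair -t 7200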

Page 4-26
Hitachi ShadowImage Replication Operations with Replication Manager
Launching ShadowImage Operations

Launching ShadowImage Operations

 Perform the following tasks before starting ShadowImage operations:

1. Identify the volumes to be used in copy pairs
2. Ensure that the secondary volumes are not in use
3. Plan for the naming of pairs and groups

 Launching pair management operations

Pair Management operations can be launched from either the Host view or the Storage
System view.

Page 4-27
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

Pair Configuration

 Pair configuration wizard — 1. Introduction

To define copy pair configurations, you should first register a new pair group and define a list of
volume pairs to assign to the pair groups.

Pair groups can be created on the 2. Pair Association page of the Pair Configuration Wizard.

 Pair configuration wizard – 2. Pair Association – Add pair group – Specify copy topology

Page 4-28
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

 Detail window – Specifying pair volumes (source and destination)

 Pair list (left area)


• Allows you to filter volumes listed on left side
• Shows the pair volumes (P-VOL and S-VOL)

Page 4-29
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

 Candidate list (right area)


• Select primary or secondary volumes to add to pair list

To define a copy pair:

1. In the Pairs pane under Detail of pair-group-name pane, select a primary volume.

2. In the Criteria tab under the Candidate List pane, specify the volume type and
optional filtering criteria for obtaining a list of candidate volumes.

3. Click Apply. The filtered list of candidate volumes is displayed on the Result tab.

4. From the displayed tree structure on the Results tab, select the candidate volumes that
you want to assign as the primary volume or secondary volumes for the new copy pairs.
You can select multiple volumes on the Result tab.

5. Click Add.

6. The selected volumes are assigned as secondary volumes and the defined copy pair is
displayed in the Pair List pane. Repeat this operation for each pair group you create.

7. Click Next to continue creating the copy pair configuration definition or click Save to
temporarily save the workflow.

8. The primary and secondary volumes must be configured in a one-to-one correspondence
before you can continue pair configuration.

Page 4-30
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

 Candidate list (right area)


• Select primary or secondary volumes to add to pair list


Page 4-31
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

 Group management – Understanding create group parameters

The next screen accepts the copy group parameters.

Page 4-32
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

 Pair configuration wizard – 3. Group Management – Creating a copy group

A copy group consists of a number of copy pairs that have been grouped for management
purposes. By grouping the copy pairs of volumes that are used for the same operations and
purposes, you can perform batch operations on all the copy pairs in that copy group. For
example, by performing an operation such as changing the copy pair status on a copy group,
you can change the copy pair status of all copy pairs in the copy group in a single operation.

Specify copy group name and CCI configuration definition file related information (instance
number and communication port number).

The CCI configuration file can be placed separately for the primary and secondary volume
instances.

The operations that can be performed on the copy group are determined by the access
permissions on the pair management server on which the copy group is defined. The access
permissions to the pair management server are in turn defined by the resource group to which
the server belongs.
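
For orientation, the CCI configuration definition file that Replication Manager maintains on the pair management server follows the standard HORCM layout. The sketch below is illustrative only; the serial number, LDEV IDs, command device, group and device names, service ports and instance number are hypothetical placeholders:

    # horcm1.conf (instance number 1), UDP service port 11001
    HORCM_MON
    #ip_address    service    poll(10ms)    timeout(10ms)
    localhost      11001      1000          3000

    HORCM_CMD
    #command device seen by this pair management server
    /dev/rdsk/c1t0d1s2

    HORCM_LDEV
    #dev_group    dev_name    Serial#    CU:LDEV(LDEV#)    MU#
    SI_CG01       pair_001    453037     02:03             0
    SI_CG01       pair_002    453037     02:04             0

    HORCM_INST
    #dev_group    ip_address    service
    SI_CG01       localhost     11002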

Page 4-33
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

 Assigning pair to copy group

Pair Type displays +1/0/-0, which indicates:

• the number of newly added pair instances
• the number of originally existing pair instances
• the number of deleted pair instances

Initially the topology view displays copy group attribute as Unassigned.

Select the checkbox for the pair and click Apply to add the pair group instance to the newly
created copy group instance.

After this step the topology view displays copy group attribute as Assigned.

Page 4-34
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

 Pair configuration wizard – 4. Task Management – Set task options – Schedule

Select whether to execute the tasks immediately, or at a specified date and time.

Execute Immediately – If you want to execute the task immediately, select this radio button.
The task will start when the pair configuration wizard ends.

Execution Date – Select this radio button to execute the task at the specific date and time
that you select from the drop-down list.

Modify Pair Configuration File Only (Do not create Pair) – Select this check box if you do
not want the task to create a copy pair. When the check box is selected, the task only modifies
the CCI configuration definition file. This item is displayed when the task type is create.

Page 4-35
Hitachi ShadowImage Replication Operations with Replication Manager
Pair Configuration

 Setting additional pair settings

 Pair configuration wizard – 5. Confirm – Final confirmation

Page 4-36
Hitachi ShadowImage Replication Operations with Replication Manager
Checking Task Status

Checking Task Status

 Tasks – Check status of task

Task Status displays the execution status of the task as one of the following:

• Ready: Indicates that the task is waiting to execute.

• Executing: Indicates that the task is executing.

• Cancel: Indicates that the task was cancelled.

• Failure: Indicates that the task failed. When you select Failure, an error window appears.
Read the message in the error window.

• Success: Indicates that the task was successful.

• Warning: Indicates that the system timed out waiting for the task to finish processing.
When you select Warning, an error window appears. Read the message in the error
window.

Page 4-37
Hitachi ShadowImage Replication Operations with Replication Manager
Checking Pair Status

Checking Pair Status

 Pair status icons

 Hosts view

Page 4-38
Hitachi ShadowImage Replication Operations with Replication Manager
Checking Pair Status

 Storage systems view

 Pair configurations view
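
Outside the Replication Manager views, the same pair statuses (COPY, PAIR, PSUS, PSUE and so on) can also be verified from CCI on the pair management server. A minimal sketch using the illustrative group name from the earlier examples:

    # Show pair status, copy progress (-fc) and LDEV numbers in hex (-fx)
    pairdisplay -g SI_CG01 -fcx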

Page 4-39
Hitachi ShadowImage Replication Operations with Replication Manager
Changing Pair Status

Changing Pair Status

 Change pair status wizard – 1. Introduction – Launching change pair status

 Change pair status wizard – 2. Select Copy Pairs – Select copy pairs
for status change

Page 4-40
Hitachi ShadowImage Replication Operations with Replication Manager
Changing Pair Status

 Change pair status wizard – 3. Select Pair Operation and Additional Options

 Change pair status wizard – 4. Set Schedule – Scheduling the operation

Page 4-41
Hitachi ShadowImage Replication Operations with Replication Manager
Changing Pair Status

 Change pair status wizard – 5. Confirm – Confirming pair change operation

Page 4-42
Hitachi ShadowImage Replication Operations with Replication Manager
Instructor Demonstration

Instructor Demonstration

 ShadowImage
• Create pair
• Split pair
• Resync pair
• Delete pair

Page 4-43
Hitachi ShadowImage Replication Operations with Replication Manager
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe licensing considerations for Hitachi ShadowImage Replication
• Describe the key features
• Perform ShadowImage operations through Replication Manager

Module Review

1. How many ShadowImage level 1 copies can be created for a given


P-VOL?
2. What information is tracked through the bitmaps?

3. What is the difference between Quick and Normal functions?

Page 4-44
5. Hitachi Copy-on-Write Snapshot
Operations with Replication Manager
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the purpose of Hitachi Copy-on-Write Snapshot
• Compare the functionality of Copy-on-Write Snapshot to Hitachi
ShadowImage Heterogeneous Replication
• Describe typical Copy-on-Write Snapshot operations
• Perform Copy-On-Write operations using Replication Manager

Page 5-1
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Hitachi Replication Products

Hitachi Replication Products

 Copy-on-Write Snapshot
• Provides nondisruptive volume snapshots
• Uses less space than ShadowImage
• Allows multiple frequent, cost-effective, point-in-time copies
• Immediate read/write access to virtual copy
• Nearly instant restore from any copy

[Diagram: the primary host reads/writes the P-VOL; the secondary host reads/writes virtual volumes (10:00 am, 11:00 am and 12:00 pm snapshots); differential data is saved to the pool]
Copy-on-Write Snapshot

Rapidly creates point-in-time snapshot copies of any data volume within Hitachi storage
systems, without impacting host service or performance levels.

Realizes significant savings compared to full cloning methods because these snapshots store
only the changed data blocks in the Copy-on-Write Snapshot storage pool.

Requires substantially smaller storage capacity for each snapshot copy than the source volume.

Page 5-2
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Copy-on-Write Snapshot Purposes

Copy-on-Write Snapshot Purposes

 Allows you to retain a logical duplicate of the primary volume data internally

 Restores data during the snapshot instruction if a logical error occurs in the primary volume

 Creates a duplicated volume consisting of physical data stored in the primary volume and
differential data stored in the data pool

 Can create up to 64 snapshot images (V-VOLs) per primary volume (enterprise storage)

 Requires ShadowImage Heterogeneous Replication license to be installed

The duplicated volume of the Copy-on-Write Snapshot function consists of physical data stored
in the primary volume and differential data stored in the data pool. This differs from the
ShadowImage function where all the data is retained in the secondary volume.

Although the capacity of the used data pool is smaller than that of the primary volume, a
duplicated volume can be created logically when the snapshot instruction is given. The data
pool can share two or more primary volumes and the differential data of two or more duplicated
volumes.

Copy-on-Write Snapshot requires ShadowImage Heterogeneous Replication license to be installed.

Capacity used will be subtracted from the license capacity for ShadowImage. Therefore,
you must ensure that the license capacity for ShadowImage is larger than the capacity
to be used by both ShadowImage and Copy-on-Write Snapshot.

Page 5-3
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Copy-on-Write Snapshot Overview

Copy-on-Write Snapshot Overview

The Copy-on-Write Snapshot configuration includes a P-VOL, a number of V-VOLs and a data
pool (Pool).

• Data pool: Volumes in which only differential data is stored (Pool)

• Snapshot Image: A virtual replica volume for the primary volume (V-VOL); this is an
internal volume that is held for restoration purposes.

Page 5-4
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Comparison

Comparison

 ShadowImage
• Full volume copy to S-VOL
• Access to S-VOL does not affect P-VOL performance

 Copy-on-Write Snapshot
• Changed data is saved from primary volume (P-VOL) to data pool
• Pool is shared by multiple snapshot images (V-VOL)
• Access to V-VOLs is actually access to P-VOL and pool: can impact P-VOL performance

[Diagram: main server and backup server read/write access; ShadowImage P-VOL with physical S-VOLs versus Copy-on-Write P-VOL with virtual volumes (V-VOLs) whose differential data is saved to a shared pool]

Copy-on-Write Snapshot requires ShadowImage Heterogeneous Replication to function. Both
are similar in that both program products create pairs. However, Copy-on-Write Snapshot uses
V-VOLs as secondary volumes, which reduces disk subsystem capacity requirements. Additionally,
only the updated data of the P-VOL is copied to the V-VOL and the differential data is stored in
the pool. Therefore, the replicated volume created by Copy-on-Write Snapshot comprises
differential data stored in the pool and physical data from the primary volume (P-VOL).
ShadowImage, which creates pairs from physical volumes, copies all of the data in the P-VOL to
the S-VOL. This copy process requires more time but provides increased data security.

Warning: Copy-on-Write Snapshot copies only the updated data in a P-VOL to the V-VOL.
Therefore, data in the V-VOL is not guaranteed in the following cases:

• When physical failure occurs in a P-VOL

• When a pool is blocked

Size of Physical Volume

The P-VOL and the S-VOL have exactly the same size in ShadowImage Heterogeneous
Replication. In Copy-on-Write Snapshot, less disk space is required for building a V-VOL image
since only part of the V-VOL is on the pool and the rest is still on the primary volume.

Pair Configuration

Up to 9 copies can be created for every P-VOL in ShadowImage Heterogeneous Replication. In
Copy-on-Write Snapshot there can be up to 64 V-VOLs per primary volume.

Page 5-5
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Comparison

 ShadowImage copies and produces physically separate copies of the data. Copy-on-Write
Snapshot produces a virtual S-VOL (V-VOL).

[Diagram: ShadowImage supports a 1:3 pair ratio; Copy-on-Write Snapshot supports a 1:64 ratio]
Page 5-6
Hitachi Copy-on-Write Snapshot Operations with Replication Manager

The capacity of a pool is equal to the total capacity of registered pool-VOLs in the pool. If the
usage rate of the pool exceeds its capacity, the status of the Copy-on-Write Snapshot pair
changes to PSUE (Pair Suspended Error - status when failure occurred).

If this happens, snapshot data cannot be stored in the pool and the Copy-on-Write Snapshot
pair must be deleted. When a Copy-on-Write Snapshot pair is deleted, the snapshot data stored
in the pool is deleted and the P-VOL and V-VOL relationship is released.

Page 5-7
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Copy-on-Write Snapshot Operations

Copy-on-Write Snapshot Operations

 Before modifying the data block on the P-VOL with Copy-on-Write Snapshot, the V-VOL gets
the data from the P-VOL

[Diagram: the P-VOL (physical VOL) is linked to the pool and to two virtual volumes, V01 (snapshot created on Monday) and V02 (snapshot created on Tuesday)]

This diagram shows a situation where two snapshots have been taken. The highlighted data block in the
snapshots is available on the primary volume and a request for this block through the V-VOL would be
physically taken from the P-VOL.

This situation will last as long as the corresponding block on the P-VOL is not altered.

 After writing to the data block on P-VOL:


• When there is a write after having created a snapshot, that data is saved on the
data pool area (pool) first
• From now on, this saved data in the pool is used by the V-VOLs
• Pool can be shared by multiple V-VOLs; therefore only one copy of the data is
required
[Diagram: (1) snapshot created on Monday (V01); (2) snapshot created on Tuesday (V02); (3) write on Tuesday to the P-VOL; (4) old data saved onto the pool; at this time V01 and V02, which hold snapshot images, refer to this pool data]

Now the data block on the P-VOL needs to be written. However, before the actual write is executed, the
block is copied to the pool area. The set of pointers that actually represent the V-VOL will be updated and
if there is a request now for the original block through a V-VOL, the block is physically taken from the
Pool. From the host's perspective, the V-VOL (snapshot image) has not changed, which was the plan.

Page 5-8
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Copy-on-Write Snapshot Operations

 On Wednesday a new snapshot image is created


• The new snapshot image refers to the data on P-VOL
[Diagram: (1) data for Monday (V01) and Tuesday (V02) is already saved in the pool; (2) a snapshot created on Wednesday (V03) refers to the current data on the P-VOL]

On Wednesday, another snapshot image has been created. The situation now is that the data
block as it was before the write is physically held in the pool area, while the block as it is after
the write resides on the primary volume (P-VOL).

If there is a request for that block through a V-VOL, the data will physically be read from the
Pool area or from the P-VOL depending on what snapshot image is being referred to.

 Write command to the same data block again


• When there is another write to the same area, before actually writing, the
data is additionally saved on the data pool area (pool)
• This saved data in Pool is viewed by the V-VOL that was linked to the
P-VOL before the write

One more write to the same data block on the P-VOL. Again, before executing the write, the
block is copied to the pool area, the pointers that make up the V-VOL are updated and upon a
request for that data block through a V-VOL the data will physically be taken from the pool.

Page 5-9
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Copy-on-Write Snapshot Operations

 Restore is possible from any Hitachi snapshot image (V-VOL)


• To the host, it appears as if the restore was done instantly
• Actual data copying from V-VOL to P-VOL is done in background
[Diagram: the main server reads/writes the P-VOL; the P-VOL can be restored from any of V-VOL01, V-VOL02 or V-VOL03; read/write is possible immediately after the restore command, and only differential data is copied]

Restoring a primary volume can be done instantly from any V-VOL. It can be done instantly
because it does not involve immediate moving of data from Pool to P-VOL. Only pointers must
be modified.

The background data will then be copied from the pool to P-VOL.

If the P-VOL becomes physically damaged, all V-VOLs would be destroyed and a restore is not
possible.

Page 5-10
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Setting Up Copy-on-Write

Setting Up Copy-on-Write

 Overview of Copy-on-Write Snapshot setup and configuration

• Install additional shared memory for differential tables (if required)
• Install additional shared memory for the V-VOL management area
• Install ShadowImage license keys if not already installed. ShadowImage is required for
Copy-on-Write Snapshot
• Install Copy-on-Write Snapshot license key

Copy-on-Write Operations

 Operation steps
• Set up Copy-On-Write pool
• Set up virtual volumes for Copy-On-Write operations
• Manage pairs
 Create pairs
 Change pair status

Page 5-11
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Setting Up the Data Pool

Setting Up the Data Pool

 Create pool wizard – 1. Select Pool ID – Creating the data pool

1. From the Explorer menu, choose Resources and then Storage Systems. The
storage systems subwindow appears.

2. Expand the object tree and then select a storage system under Storage Systems. The
storage-system-name subwindow appears.

3. Click the Open link. The Open subwindow appears.

4. On the Pools tab select Pool sub tab and click Create Pool. The Create Pool Wizard
starts.

Page 5-12
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Setting Up the Data Pool

 Create pool wizard – 2. Select Pool Volumes – Adding capacity to the data pool

 Create pool wizard – 3. Set Option – Specify capacity alert threshold

Page 5-13
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Setting Up the Data Pool

 Create pool wizard – 4. Confirm – Confirming pool creation

 Changing data pool capacity and threshold settings

The new pool usage threshold has to be set higher than the existing pool usage.

Page 5-14
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Creating Virtual Volumes

Creating Virtual Volumes

 Create V-VOL wizard – 1. Setup V-VOL Group – Create virtual volumes for snapshots

Replication Manager supports creation of V-VOLs on storage system configurations.

After creating V-VOLs, it is necessary to assign LUNs to them to create copy pairs. Assignment of
LUNs should be done using Device Manager.

Replication Manager provides a wizard for creating V-VOLs and associating them with volume pools.

 Create V-VOL wizard – 2. Select Primary LDEVs (Volumes) – Select primary volume and specify count of snapshots

The primary volume capacity is used for defining the capacity of the new V-VOLs.

Page 5-15
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Creating Virtual Volumes

 Create V-VOL wizard – 3. Setup V-VOLs – Specify starting CU:LDEV address for V-VOLs

In case you do not specify the starting address, the first available address will be assigned. The
address for new volumes will be displayed on successful creation of the V-VOLs.

 Create V-VOL wizard – 4. Confirm – Task summary screen

Page 5-16
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Creating Virtual Volumes

 Review the setup and note the LDEV addresses

Review the setup information and note the LDEV addresses. You must map these virtual
volumes to the storage port (use Hitachi Device Manager) before using for pair operations.

Page 5-17
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Managing Pairs

Managing Pairs

 Creating pair with copy type – CoW

Launch pair configuration wizard and create pair with copy type “QS/COW/TI.”

 Specifying additional settings for pair (like CoW pool ID, CTGID)

Page 5-18
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Instructor Demonstration

Instructor Demonstration
 Copy-On-Write
• Create pool/V-VOLs
• Create pair
• Split pair
• Resync pair
• Delete pair

Page 5-19
Hitachi Copy-on-Write Snapshot Operations with Replication Manager
Module Summary

Module Summary

 In this module, you have learned how to:


• Describe the purpose of Hitachi Copy-on-Write Snapshot
• Compare the functionality of Copy-on-Write Snapshot to Hitachi
ShadowImage Heterogeneous Replication
• Describe typical Copy-on-Write Snapshot operations
• Perform Copy-On-Write operations using Replication Manager

Module Review

1. How many copies of a volume can be created with Copy-On-Write?


2. Can an S-VOL be used for restoring the data in case of P-VOL failure?

3. Which option can be recommended for a customer who wants to maintain a copy of the
primary volume with minimum investment in additional capacity?

Page 5-20
6. Hitachi Thin Image Operations with
Replication Manager
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the purpose of Hitachi Thin Image (HTI)
• Compare the functionality of Thin Image to Hitachi ShadowImage and Copy-
on-Write
• Describe Thin Image operations

Page 6-1
Hitachi Thin Image Operations with Replication Manager
What is Hitachi Thin Image?

What is Hitachi Thin Image?

 Creates instant copies of data for backup or application testing purposes
 Saves up to 90% or more disk space by storing only changed data blocks
 Enables fast and frequent point-in-time snapshots for disk-based backups
• Speeds backups from hours to a few minutes
• Virtually eliminates traditional backup windows
 Rapidly create up to 1,024 snapshots
• Great for application testing
• Multiple backups limit exposure to data loss
 High-performance data restoration
• Near-instant restore of critical data
• Application and OS independent but can be integrated with application backup triggers

[Diagram: the host reads/writes the P-VOL; only changed data is saved to the pool; snapshots are presented as virtual volumes (V-VOLs)]

Fast, simple and reliable snapshot software

Hitachi Thin Image snapshot software enables rapid copy creation for immediate use in decision
support, software testing and development and data protection operations.

Page 6-2
Hitachi Thin Image Operations with Replication Manager
Operations

Operations

 Overview – Copy-on-Write and Thin Image in Copy-on-Write mode

[Diagram: (1) host writes to cache; (2) if not previously moved (overwrite condition), old data block moved to pool; (3) I/O complete goes back; (4) new data block moved to P-VOL]

Copy-on-Write Method Workflow

In the Copy-on-Write method, store snapshot data in the following steps:

1. The host writes data to a P-VOL.

2. Snapshot data for the P-VOL is stored.

3. The write completion status is returned to the host after the snapshot data is stored.

Page 6-3
Hitachi Thin Image Operations with Replication Manager
Operations

 Overview – Thin Image Copy After Write Mode (CAW)

[Diagram: (1) host writes to cache; (2) I/O complete goes back; (3) if not previously moved (overwrite condition), old data block moved to pool; (4) new data block moved to P-VOL]

Copy After Write (CAW) Method Workflow

In the CAW method, store snapshot data in the following steps:

1. The host writes data to a P-VOL.

2. The write completion status is returned to the host before the snapshot data is stored.

3. Snapshot data for the P-VOL is stored in the background.

Page 6-4
Hitachi Thin Image Operations with Replication Manager
Thin Image Configuration

Thin Image Configuration

 Thin Image creates a virtual, point-in-time copy of a data volume.


• The configuration is the same as Copy-on-Write
• Thin Image stores changed data blocks in the HTI pool
• The secondary volume (referred to as a V-VOL) is made up of pointers to the data in the
P-VOL and the data pool

[Diagram: primary host I/O goes to the P-VOL; the V-VOL (snapshot of 07:00) pointers show the 07:00 snapshot data; the HTI pool keeps the data at 07:00 if data in the P-VOL is modified after 07:00]

HTI = Hitachi Thin Image

Thin Image stores snapshots, or a duplicate, of your data. You can create up to 1,024
snapshots of data using Thin Image. You can use this data in open-system volumes. If a data
storage failure occurs in your storage system, you can use the snapshot data to restore the
data.

Snapshot data is a copy of updated data in Thin Image P-VOLs. When updating the P-VOL, only
the updated data is copied as snapshot data in pool volumes (pool-VOL) before updating. This
processing is referred to as storing snapshot data. Create Thin Image pairs so that you can
store snapshot data. The P-VOL of a Thin Image pair is a logical volume. The S-VOL of a Thin
Image pair is a V-VOL.

Dynamic Provisioning is required to use Thin Image. Dynamic Provisioning accesses data in pool
volumes by way of V-VOLs, and can handle data in open-system servers such as UNIX and PC
servers. A Dynamic Provisioning license is required. The licensed capacity for Dynamic
Provisioning is calculated based on the capacity of pool-VOLs for Thin Image and Dynamic
Provisioning.

Page 6-5
Hitachi Thin Image Operations with Replication Manager
Comparison — Thin Image and ShadowImage

Comparison — Thin Image and ShadowImage

ShadowImage Replication software: all data is saved from the P-VOL to the S-VOL.

Thin Image snapshot software: only changed (old) data is saved from the P-VOL to the data
pool; the pool is shared by multiple snapshot images (V-VOLs).

[Diagram: main server and backup server access; ShadowImage P-VOL with a full physical S-VOL versus Thin Image P-VOL with virtual volumes (V-VOLs) and differential data saved to a linked pool]

Consistent read and read/write access is available only in split states.

Size of Physical Volume: The P-VOL and the S-VOL have exactly the same size in
ShadowImage Replication software. In Thin Image snapshot software, less disk space is
required for building a V-VOL image since only part of the V-VOL is on the pool and the rest is
still on the primary volume.

Pair Configuration: Up to 3 S-VOLs (9 with L2 cascade copies) can be created for every P-VOL in
ShadowImage Replication software. In Thin Image snapshot software there can be up to 1024
V-VOLs per primary volume.

Restore: A primary volume can only be restored from the corresponding secondary volume in
ShadowImage Replication software. With Thin Image snapshot software the primary volume
can be restored from any snapshot Image (V-VOL).

Page 6-6
Hitachi Thin Image Operations with Replication Manager
Comparison — Thin Image and ShadowImage

 Simple positioning
• Clones should be positioned for data repurposing and data protection (for example, DR
testing) where performance is a primary concern
• Snapshots should be positioned for data protection (for example, backup) only where
space saving is the primary concern
                           ShadowImage                         Thin Image
Size of physical volume    P-VOL = S-VOL                       P-VOL ≥ V-VOL
Pair configuration         1:3/9                               1:1024
Restore                    P-VOL can be restored from S-VOL    Restore from any V-VOL

Page 6-7
Hitachi Thin Image Operations with Replication Manager
Comparison — Copy-on-Write and Thin Image

Comparison — Copy-on-Write and Thin Image

Feature                              Copy-on-Write     Thin Image
Number of generations per system     16k               32k
Number of P-VOLs                     16k               16k
Number of generations per P-VOL      1-64              1-1024
Pool capacity per pool               30TB              4PB
Pool capacity per system             30TB              5PB
Pool capacity per P-VOL              30TB              768TB
Number of pools per system           128               128
Copy method                          Copy-on-Write     CAW/Copy-on-Write

CAW = Copy After Write

 Thin Image uses either Copy After Write (CAW) mode or


Copy-on-Write mode, depending on P-VOL and pool type

[Table: copy method by P-VOL type (Normal VOL, DP VOL, External VOL) and pool composition (RAID-5/RAID-1/RAID-6 pool, External pool (V01), Mixed pool (V02 and later)); each combination uses either CAW or Copy-on-Write]
Note: If the cache write pending rate is 60% or more, Thin Image
shifts to Copy-on-Write mode to slow host writes

Page 6-8
Hitachi Thin Image Operations with Replication Manager
Specifications

Specifications

 P-VOL
Item Requirement
Volume type LUSE volume can be specified
You cannot specify the following volumes as P-VOLs
• Volumes used for pools
• Volumes used as S-VOLs of Copy-on-Write pairs or Thin
Image pair
Emulation type OPEN-V
Maximum number 16,384
Path definition Required
Maximum capacity 4TB

Note: LUSE P-VOL must be paired with an S-VOL of the same size and structure. For example, if
LUSE P-VOL is created by combining the volumes of 1GB, 2GB and 3GB in this order, you
must specify LUSE volume which has exactly the same size and combination order as
the S-VOL.

 V-VOL
Item Requirement
Volume type V-VOL
The following volumes cannot be used as snapshot
S-VOLs
• Volumes used as S-VOLs of Copy-on-Write pairs or Thin Image
pairs
• Volumes used by a pair or migration plan of another product
Emulation type OPEN-V
Maximum number 16,384
Path definition Required

Note: LUSE S-VOL must be paired with a P-VOL of the same size and structure.

Page 6-9
Hitachi Thin Image Operations with Replication Manager
Specifications

 Pools
Item                                          Requirement
Pool-VOL capacity                             8GB to 4TB
Maximum number of pool-VOLs in a data pool    1,024 (Hitachi Dynamic Provisioning, Thin Image, and Copy-on-Write share 128 pools of 1,024 volumes each)
Expansion of data pool capacity               Allowed – even if snapshots are using the pool
Deletion of data pool                         Allowed – pairs using the pool must be deleted first
Use of external volumes in pool               Allowed
Data drive type                               SAS, NL-SAS, SATA and flash drive
Pool usage                                    Copy-on-Write shares P-VOL and pool capacity with ShadowImage; Thin Image shares P-VOL and pool capacity with Dynamic Provisioning

Notes:

• When internal volumes are used, pool-VOLs with different drive types cannot be used in
the same pool

• When external volumes are used, pool-VOLs with different drive types can be used in
the same pool (for best performance, volumes with the same drive types are
recommended for a pool)

Best Practices for Pool Volumes

• Do not mix normal volumes and pool-VOLs in one parity group

• Make sure that pool-VOLs consist of LDEVs from multiple parity groups

• Assign different processor blade IDs* to LDEVs that make up pool-VOLs

The following volumes cannot be specified as pool-VOLs for Copy-on-Write and Thin Image:

• Volumes whose volume status is other than Normal or Normal (Quick Format). If a
volume is being blocked or copied, the volume cannot be specified.

• LUSE volumes

• Volumes that are already being used as Copy-on-Write or Thin Image P-VOLs or S-VOLs

• Volumes that are already contained in Thin Image, Copy-on-Write, Dynamic
Provisioning, or Dynamic Tiering pools

• Volumes used as migration plans or pair volumes for another program product

Page 6-10
Hitachi Thin Image Operations with Replication Manager
Specifications

• Volumes with Protect or Read Only attribute, or the "S-VOL Disable" attribute setting in
the Data Retention Utility

• Cache Residency Manager volumes

• System disks
• Command devices

• High Availability Manager quorum disks

• External pool-VOLs whose cache mode is enabled and external pool-VOLs whose cache
mode is disabled cannot be used in the same data pool

• Volumes that are in different resource groups cannot be used in the same data pool

• Internal volumes and external volumes whose cache mode is disabled

• CLPR: A data pool cannot contain pool-VOLs that belong to different cache logical
partitions

o CLPRs in the parity group that belong to the pool-VOL cannot be changed

Page 6-11
Hitachi Thin Image Operations with Replication Manager
Thin Image Operations

Thin Image Operations

 HTI pair creation and management can be done within HCS, using HRpM and HDvM

Pair management flow:
1. HRpM: Create a V-VOL
2. HDvM: Allocate the V-VOL to the host
3. HRpM: Create an HTI pair (select the HTI pool in the Edit Task window)
4. HRpM: Monitor the pair
5. HRpM: Manage the pair, change pair status

Pool management flow:
1. HRpM: Create an HTI pool
2. HRpM: Monitor HTI pool usage
3. HRpM: Expand the pool

HCS = Hitachi Command Suite

HRpM = Hitachi Replication Manager

HDvM = Hitachi Device Manager

Page 6-12
Hitachi Thin Image Operations with Replication Manager
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe the purpose of Hitachi Thin Image
• Compare the functionality of Thin Image to Hitachi ShadowImage and Copy-
on-Write
• Describe Thin Image operations

Page 6-13
Hitachi Thin Image Operations with Replication Manager
Module Review

Module Review

1. How many copies of a volume can be created with Thin Image?


2. What is the maximum capacity per HTI pool?
3. From where can Thin Image copy pairs be managed?

Page 6-14
7. Hitachi TrueCopy Operations with
Replication Manager
Module Objectives

 Upon completion of this module, you should be able to:


• Describe key benefits and features of Hitachi TrueCopy Heterogeneous
Remote Replication bundle
• Describe TrueCopy remote replication solutions
• Describe TrueCopy internal operations
• Describe how TrueCopy and Hitachi ShadowImage Heterogeneous
Replication work together
• Perform TrueCopy operations with Hitachi Replication Manager

Page 7-1
Hitachi TrueCopy Operations with Replication Manager
Hitachi TrueCopy Benefits

Hitachi TrueCopy Benefits

 Storage system-based synchronous replication


• Host independent
 Applications not involved

 Multiple management options


• Hitachi Storage Navigator
 Manual operation of single replication
• Hitachi Replication Manager
 Semi-automated operation of multiple replications
• Command Line Interface (CLI)
 Fully automated replication using scripts

 Open systems and mainframe compatible

TrueCopy Remote Replication

Provides vital recovery management capabilities that safeguard information up to the point of
an outage in the event a primary site is damaged. As a result, TrueCopy synchronous helps
minimize the impact of business downtime. Data recovery processes traditionally span several
days and involve many administrators. Using a service-based implementation, TrueCopy
replaces these manual and time-consuming methods with automated copy processes.
Consequently, recovery time is significantly reduced, enabling normal business operations to
resume in a matter of minutes, not days.

Provides host-independent, vital recovery management capabilities that minimize the impact of
downtime and ensure nonstop access to your information in the event of a disaster or during
scheduled downtime. TrueCopy replicates data locally between Hitachi storage systems—within
the same data center or remotely, between dispersed locations.

Is a remote data replication solution for both mainframe and open systems environments.
TrueCopy synchronous replication is accomplished by continuously sending data copies between
one or more primary Hitachi storage systems to one or more secondary systems, located either
in the same data center or at a remote site.

TrueCopy synchronous is typically used for applications with the most stringent recovery-point
objectives where loss of data cannot be tolerated. TrueCopy synchronous is normally deployed
over distances of less than 100 kilometers (~60 miles), although it may be possible to extend
that distance to 300 kilometers (~190 miles).

Page 7-2
Hitachi TrueCopy Operations with Replication Manager
Hitachi TrueCopy Benefits

 Business continuity
• Remote replication with potential for zero data loss for synchronous distances
• Reduce frequency and duration of planned outages
• Disaster recovery testing
• Rapid recovery versus tape restore
• Government regulation compliance

 Productivity and process improvements


• Reduces application downtime for backups
• Resource optimization
• Nondisruptive LAN-free backups
• Tested maintenance operations applied to production environments
• Provide production data for offline data mining

Business Continuity

• Safeguards critical data by providing sophisticated point-in-time copies, up to the point


of an outage, in the event your primary site is damaged.

• Mitigates disaster risks by replicating data locally, or to geographically dispersed remote


sites.

• Replaces slow, labor-intensive, and expensive tape-based replication and retrievals with
rapid, automated processes.

• For selected industries, TrueCopy synchronous allows conformance with government


mandates or regulations for disaster tolerance.

Productivity and Process Improvements

• Improves service levels by reducing planned and unplanned downtime of customer-


facing applications.

• Optimizes resource utilization by offloading processing and data to alternate systems


that can be located anywhere.

• Performs backups on a secondary system while your business operates at full capacity.

• Simplifies site maintenance and application development operations by replicating your


environment at a local or remote site without impacting production data.

• Fully leverages data warehousing/data mining investments by replicating production


information, either locally or at a remote site.

Page 7-3
Hitachi TrueCopy Operations with Replication Manager
Hitachi TrueCopy Synchronous Benefits

Hitachi TrueCopy Synchronous Benefits

 Data migration between storage systems using TrueCopy synchronous


• Minimal impact to host applications:
 To load data onto new or scratch volumes (for example, a new or
upgraded storage system)
 To temporarily move data to accommodate other activities (for example,
repair)
 To relocate LUs to balance workloads and distribute I/O activity evenly
within and across storage systems to improve performance

LU = logical unit

 Data migration between storage systems using TrueCopy synchronous


• Initial copy operation copies the entire contents of the primary volume
(P-VOL) to the secondary volume (S-VOL) with minimal impact to host I/O
• The P-VOL and S-VOL are identical and synchronized when the initial copy
operation completes and the pair status changes from COPY to PAIR
• The pair is then deleted to change the device state to SMPL (simplex), and
migration can then be completed with minimal host disruption
• To support host-based application automation, manage data migration with
Command Control Interface (CCI)

Page 7-4
Hitachi TrueCopy Operations with Replication Manager
Remote Replication Solutions

Remote Replication Solutions

Hardware synchronous remote copy

 TrueCopy synchronous
• Host writes to remote cache, thus automatically maintaining data consistency
• Optional consistency groups (from CCI) to get time consistent split
• Remote delay inserted at end of channel program by microcode. Only one turnaround required
• Exclude temporary data
• Turnaround times become unacceptable at longer distances
• Appreciable delay at zero distance:
 500 to 700 µsec delay within control unit
 Total around 900 µsec at zero distance
• RPO near zero, RTO near zero

CCI = Command Control Interface

RPO = Recovery Point Objective

RTO = Recovery Time Objective
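
As a rough worked example (the figures are approximations for illustration): light in fiber propagates at about 5 µs per kilometer, so a 100 km link adds roughly 0.5 ms each way, or about 1 ms of round-trip delay per write, on top of the roughly 900 µs base service time quoted above. At 300 km the added round-trip delay grows to about 3 ms per write, before any delay introduced by channel-extension equipment, which is why synchronous replication is normally confined to metro distances.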

Page 7-5
Hitachi TrueCopy Operations with Replication Manager
Internal Operations of Synchronous Replication

Internal Operations of Synchronous Replication

[Diagram: primary site with the Main Control Unit (MCU) and recovery site with the Remote Control Unit (RCU), each managed with the Hitachi Storage Navigator program. (1) The primary host issues a Write I/O; (2) the MCU sends the data; (3) the RCU returns an acknowledgement; (4) the MCU returns End Status to the host]

Issues
1. Performance impact (waiting for the acknowledgement)
2. Distance limitation (up to 300 km)
3. Database management system dependent writes (keeping the database in sync with the log file)

Benefits
1. Highest degree of data currency
2. No loss of committed data
3. Fast data recovery (DR)
4. Simpler to configure versus asynchronous

This type of remote copy solution is implemented entirely in the Hitachi enterprise storage
systems microprogram and hardware. It is transparent to the host applications.

Operation

• The primary host server issues a write I/O to the primary control unit.

• The primary control unit sends the data to the secondary control unit.

• The secondary control unit sends an acknowledgement back to the primary control unit.

• The primary control unit sends status end to the host server. The application continues
execution.

Pros

• Hardware solution is transparent for operating systems and applications.

• No data loss.

• Data on primary and secondary volumes are synchronized.

• Relatively simple implementation.

• Makes for fast disaster recovery.

Page 7-6
Hitachi TrueCopy Operations with Replication Manager
Internal Operations of Synchronous Replication

Cons

• Short distances.

• Performance loss – Each write takes more time.

• Inability to handle "time-dependent" writes – therefore there is the potential data
integrity risk.

• This type of solution provides a guarantee that each transaction was written to disk, but
the performance of the user’s application may decrease as the distance to the remote
device increases. Depending upon the application, this could impair those that are
transaction intense. Since the user application waits for a confirmation that each
transaction has been written to disk, as the distance to the storage device increases, the
time to receive a response also increases.

• The risk in this situation is that although the confidence factor is high that the remote
storage device has data that is consistent, it is probable that the number of transactions
that can be processed per second will decrease. The rate of decrease will depend
directly on the distance to the remote location and how fast the device can turn around
an acknowledgement.

Page 7-7
Hitachi TrueCopy Operations with Replication Manager
Remote Replication Configurations

Remote Replication Configurations


 Two data center configuration

Hitachi offers powerful solutions for business continuity as well as for disaster recovery testing
while maintaining complete disaster recovery protection. The example above illustrates a typical
configuration that uses TrueCopy to maintain an exact, I/O consistent data volume at the
remote site. It also uses Hitachi ShadowImage Heterogeneous Replication to create a separate
copy for disaster recovery testing. This enables the organization to test its disaster recovery
plan with current data.

 TrueCopy synchronous configurations

[Diagram: supported link and volume layouts; single direction (an MCU with initiator ports replicating P-VOLs to S-VOLs on an RCU with RCU target ports), both directions (each storage system acting as both MCU and RCU), and single-direction configurations defined at the CU level; one CU-level combination is marked as NOT supported]

Note that TrueCopy Synchronous has no inherent grouping capability. Pair operations occur
individually.

Page 7-8
Hitachi TrueCopy Operations with Replication Manager
TrueCopy and ShadowImage Together

TrueCopy and ShadowImage Together

 Primary and remote site backups


• ShadowImage Replication provides on-site backup copies for rapid recovery, split-
mirror backup, decision support, testing and development
• TrueCopy implementation provides a remote disaster restart and recovery capability

[Diagram: local data center with a TrueCopy P-VOL cascaded to ShadowImage P-VOL/S-VOLs and a tape archive; remote site with the TrueCopy S-VOL cascaded to ShadowImage P-VOL/S-VOLs and a tape archive]

 Disaster recovery
• Test your remote disaster recovery plan nondisruptively with current
production data
[Diagram: local data center TrueCopy P-VOL (also a ShadowImage P-VOL) remote-copied to the TrueCopy S-VOL at the remote site; the remote S-VOL is cascaded to a ShadowImage pair whose S-VOL is used for disaster recovery testing]
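
A hedged CCI sketch of such a nondisruptive test, run at the remote site against a cascaded ShadowImage group; the group name SI_RMT, instance number and timeouts are illustrative assumptions:

    # The ShadowImage P-VOLs at the remote site are the TrueCopy S-VOLs
    export HORCMINST=1
    export HORCC_MRCF=1                      # address the ShadowImage layer

    pairresync  -g SI_RMT                    # refresh the test copy from the TrueCopy S-VOLs
    pairevtwait -g SI_RMT -s pair -t 3600    # wait until the group returns to PAIR
    pairsplit   -g SI_RMT                    # freeze a point-in-time image for DR testing
    pairevtwait -g SI_RMT -s psus -t 3600    # wait for PSUS, then mount the S-VOLs on a test host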

Page 7-9
Hitachi TrueCopy Operations with Replication Manager
TrueCopy and ShadowImage Together

 Data distribution and migration


• TrueCopy and ShadowImage create and distribute multiple remote copies
• This provides immediate access to time-critical information
[Diagram: local data center TrueCopy P-VOL (with a local ShadowImage P-VOL) remote-copied to the TrueCopy S-VOL at the remote site, which is cascaded to a ShadowImage S-VOL used for decision support]

Page 7-10
Hitachi TrueCopy Operations with Replication Manager
TrueCopy Specifications

TrueCopy Specifications

 TrueCopy supports external devices, logical unit number expansion


(LUSE), Virtual LVI/logical unit number (LUN) and the Cache Residency
Manager feature

 Two LDEVs that comprise a TrueCopy synchronous volume pair must


be the same emulation type and same capacity
• LUSE volumes must be same emulation, size, and structure
 Example OPEN-V*6 to OPEN-V*6

 Not dependent on RAID levels and HDD types (different levels or types
can coexist)

TrueCopy defines the MCU and RCU relationship either at the CU level or CU free. If using CU
level, each LCU must have a unique four digit identifier called the storage system ID (SSID).
The SSIDs are assigned at installation.

TrueCopy Configuration Checklist

1. Configure TrueCopy links
2. Add RCU
3. Create pairs

Page 7-11
Hitachi TrueCopy Operations with Replication Manager
Fibre Channel Links

Fibre Channel Links

 TrueCopy over Fibre Channel can be configured for the following types of connections:
• Direct connection
 Two systems are directly connected

• Switch connection
 Two systems are connected using switches
 Initiator ports cannot be connected to a host. Hard-zone switches can be added, if necessary, to prevent the
hosts from accessing initiator ports
 While RCU-target ports can be connected to hosts via a Fibre Channel switch, it is not recommended

 Other restrictions:
• LUNs cannot be mapped to initiator ports

• Hosts cannot access the initiator ports

• The topology for the Initiator and RCU target ports must be the same
For example, if the initiator port is set for Fabric=OFF, Fibre Channel Arbitrated Loop (FC-AL), then the RCU target
should also be set to FABRIC=OFF, FC-AL

[Diagram: servers with host failover software connected over the LAN; Storage Navigator and the SVP at each site; FC connections (links) run from initiator ports on the primary storage system (MCU, containing the P-VOLs) to RCU target ports on the remote storage system (RCU, containing the S-VOLs)]

The transmitting port is known as the Initiator Port. The receiving port is known as the RCU
Target Port.

Note: Links are single-direction (for data movement). If replication is desired in both directions,
links in the opposite direction must be configured.

The major components of a TrueCopy operation using Fibre Channel interface connections are:

• TrueCopy running on the storage system at the primary (production) site and on the
storage system at the secondary (recovery) site

o Storage system at primary site – Main Control Unit (MCU)

o Storage system at secondary site – Remote Control Unit (RCU)

Page 7-12
Hitachi TrueCopy Operations with Replication Manager
Fibre Channel Links

• TrueCopy synchronous volume pairs

o P-VOLs (production volumes) at primary site. The MCUs contain the P-VOLs,
containing the original data, and are online to hosts.

o S-VOLs (secondary volumes) at secondary site. The RCUs contain the S-VOLs,
which are the synchronous and asynchronous copies of the P-VOLs. S-VOLs can
be online to hosts only when pairs are SPLIT, SUSPENDED, or DELETED.

MCU — RCU Configuration

TrueCopy Synchronous supports one to N and N to one remote copy connections (N is less than
or equal to 4). One MCU can be connected to as many as four RCUs, and one RCU can be
connected to as many as four MCUs.

Note: Hitachi Data Systems strongly recommends that you establish at least two independent
remote copy connections (one per cluster) between each MCU and RCU to provide
hardware redundancy.

Page 7-13
Hitachi TrueCopy Operations with Replication Manager
Pair Operations

Pair Operations

 Remote copy operation types


[Figure: Remote copy operation types - Initial Copy and Update Copy between the MCU (P-VOL) and RCU (S-VOL), driven by server write I/O]

 Initial Copy Operations
• Syncs P-VOL and S-VOL independently of host I/O
• Add a pair (Paircreate), Resume and Split Pair

 Update Copy Operations
• Host issues write I/O to the P-VOL
• Sync mode: P-VOL and S-VOL are kept in sync

TrueCopy synchronous operations involve the primary (main) systems and the secondary
(remote) systems (MCUs and RCUs). The MCUs contain the TrueCopy primary volumes (P-
VOLs), which contain the original data and are online to the host(s). The RCUs contain the
TrueCopy synchronous secondary volumes (S-VOLs), which are the synchronous copies of the
P-VOLs.

There are two types of copy operations:

• Initial Copy: All the content from the primary volume is copied to the secondary
volume. As soon as the initial copy starts, the secondary volume becomes unavailable
for any type of host I/O.

• Update Copy: Host write I/Os to the P-VOL are duplicated to the S-VOL so that the pair stays synchronized.

Page 7-14
Hitachi TrueCopy Operations with Replication Manager
Differential Bitmap Function

Differential Bitmap Function


 TrueCopy differential bitmap function
• Differential bitmaps designate changes to primary data volumes during initial copy,
and to both primary and secondary volumes while pairs are split
• Differential bitmaps are maintained in shared/control memory for each pair
• Differential bitmaps operate at either track or cylinder level
 Denote changed tracks if volume size is less than 10,019 cylinders
 Denote changed cylinders if volume size is 10,019 cylinders or greater
• When data volumes are resynchronized, differential bitmaps for the two volumes are
merged and all changed tracks and cylinders are sent from primary to secondary
volumes
 Operating at cylinder level slows resynchronization of split pair (more data has to be
sent for any given change)

The differential data (updated by write I/Os during split or suspension) between the primary
data volume and the secondary data volume is stored in each bitmap. When a split/suspended
pair is resumed (pairresync), the primary storage system merges the primary data volume and
secondary data volume bitmaps, and the differential data is copied to the secondary data
volume.

 During initial copy


• Initial state
P-VOL | | | | | | | | | | | | | | | | | . . . .

• Host write I/Os can change tracks during initial copy
P-VOL | X | X | X | | | X | | X | | | X | | | | . . . .

• Changed tracks sent to S-VOL after initial copy completes
S-VOL | X | X | X | | | X | | X | | | X | | | | . . . .

This diagram illustrates the usage of differential bitmaps, after a paircreate command is issued.
While Initial Copy is operating, updates can occur to primary volumes. The P-VOL bitmaps
denote any changed tracks/cylinders that occur. Differential bitmapping for Universal Replicator,
TrueCopy Remote Replication, ShadowImage Replication, Hitachi Copy-on-Write Snapshot, and
Hitachi Thin Image work in exactly the same way.

Page 7-15
Hitachi TrueCopy Operations with Replication Manager
Differential Bitmap Function

 Pairsplit command
• While pairs are split, both P-VOL and S-VOL differential bitmaps may be
active
• Changed tracks on P-VOL

P-VOL | X | | X | | | | X | | X | | | | | | | | . . . .

• S-VOL mounted with Write Enabled

S-VOL | X | X | | | | | | | | | | | X | | | . . . .

This diagram illustrates the usage of differential bitmaps after a pair is suspended. While
suspended, updates can occur to the P-VOL. Changes can also occur on S-VOL if it is mounted
as Write Enabled. The bitmaps denote any changed tracks/cylinders while the pair is suspended.

 Resynchronization
• Changes that have occurred while pair is split

P-VOL |X| |X| | | |X| |X| | | | | | | | . . . .
S-VOL |X| | |X| | | | | | | |X| | | | | | . . .

• Merge: P-VOL differential bitmap after OR operation
P-VOL |X| |X|X| | |X| |X| |X| | | | | . . . .

This diagram illustrates the usage of differential bitmaps after a pair is resynchronized.

While suspended, updates may occur to both primary and secondary volumes. The bitmaps
denote any changed tracks/cylinders while the pair is suspended. When the resync command
is issued, the S-VOL differential bitmaps are merged into the P-VOL differential bitmaps. Then
all of the changed data are copied from the P-VOL to the S-VOL. This process results in
overwrites for any changed S-VOL data.
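In CCI terms, the split and resynchronization described above correspond to the pairsplit and pairresync commands. A minimal sketch, assuming a TrueCopy copy group named TCGRP is already defined in the HORCM configuration files and the HORCM instances are running (the group name is a placeholder):

  pairsplit -g TCGRP -rw     # split the pair; -rw leaves the S-VOL write-enabled
  pairdisplay -g TCGRP -fcx  # confirm the split status and copy percentage
  pairresync -g TCGRP        # merge the bitmaps and copy only changed tracks to the S-VOL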

Page 7-16
Hitachi TrueCopy Operations with Replication Manager
Advanced Pair Operations and Recovery Scenarios

Advanced Pair Operations and Recovery Scenarios


 Takeover
• If a primary site is damaged and operation cannot continue, the takeover
operation is used to immediately continue operations at a secondary site

 Takeback
• When the damaged primary site is recovered, the takeback operation is used
to immediately switch operations from the secondary site back to the primary
site

In addition to the basic pair operations (such as split and resync), the Change Pair Status
Wizard supports several advanced operations for open system pairs. The relationship between
the basic and advanced operations can be understood in terms of two scenarios:

• Takeover (the pair is split apart or swapped)

• Takeback (the pair is recovered)

Hitachi Open Remote Copy (HORC) Takeover Support


 Takeover – Server failure
• The user can perform a takeover to operate the application at the remote site during a failure
• Replication Manager automatically performs either of the following, depending on the failure type

[Figure: Server failure - the application moves from the primary site to the remote site]
Reverse the copy pair direction (HRpM issues the "horctakeover -S" command).

Page 7-17
Hitachi TrueCopy Operations with Replication Manager
Hitachi Open Remote Copy (HORC) Takeover Support

Hitachi Open Remote Copy (HORC) Takeover Support


 Takeover – Storage failure or remote failure
• The user can perform a takeover to operate the application at the remote site during the failure
• Replication Manager automatically performs either of the following, depending on the failure type

[Figure: Storage or remote failure - the application moves to the remote site]
Split the pair and make the S-VOLs available for application use (HRpM issues the "horctakeover -S" command).

 Force-Split – Remote path or secondary site failure
• The user can perform a force-split to recover application I/O to the P-VOL when it is suspended due to the fence level option of TrueCopy sync pairs
• This makes the P-VOL available even though the P-VOL and S-VOL become nonidentical

[Figure: Remote path or secondary site failure - the application continues at the primary site]
Make the P-VOL writable by allowing a mismatch between the P-VOL and S-VOL (HRpM issues the "horctakeover -l" command).

A fence level option (data or status) can be specified when creating pairs. It is used to guarantee that the P-VOL and S-VOL contain the same data even when a failure has occurred. If this option is enabled, application I/O to the P-VOL returns an error if the storage system fails to write to the S-VOL.
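Replication Manager drives these takeover scenarios through CCI on the pair management servers. As a hedged sketch of a manual takeover run from the secondary site (assuming a copy group named TCGRP; option values are illustrative only):

  horctakeover -g TCGRP -t 60  # take over at the remote site; -t sets the swap timeout
  pairdisplay -g TCGRP -fcx    # verify the resulting status (for example, SSWS after an S-VOL takeover)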

Page 7-18
Hitachi TrueCopy Operations with Replication Manager
Hitachi Open Remote Copy (HORC) Takeover Support

Hitachi Open Remote Copy (HORC) Takeover Support


 Swap
• The user can perform a swap for maintenance of the primary site or remote path
• Swap is also used for failback (restoring the copy pair back to the P-VOL > S-VOL direction)

[Figure: Swap - the application keeps running at the primary site while the pair is suspended and then reversed]
Step 1: Suspend the pairs (HRpM issues the "pairsplit" command).
Step 2: Reverse the pairs (HRpM issues the "pairresync -swaps" command).

The swap operation performs two steps (suspend and reverse). If the P-VOLs have any issues, the operation fails at the first (suspend) step and keeps the pair status as PAIR.

In a maintenance scenario, keeping the PAIR status is preferable to forcefully making S-VOLs available to applications. (The takeover operation forcefully splits pairs when there is an error on the P-VOL, in order to make the S-VOLs available.)
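Manually, the same two-step swap can be sketched with CCI, assuming a copy group named TCGRP (a placeholder) and working HORCM instances at both sites:

  pairsplit -g TCGRP           # step 1: suspend the pairs (status PSUS)
  pairresync -g TCGRP -swaps   # step 2: reverse the copy direction from the S-VOL side
  pairdisplay -g TCGRP -fcx    # the former S-VOLs should now report as P-VOLs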

Page 7-19
Hitachi TrueCopy Operations with Replication Manager
Hitachi Open Remote Copy (HORC) Takeover Support

Hitachi Open Remote Copy (HORC) Takeover Support


 Takeover-Recovery – Pair is not lost during the failure
• Pair status is other than SMPL
• The user can perform a takeover-recovery to restore data to the primary site
• Depending on the pair status, the user specifies whether to resynchronize or recreate the pairs

[Figure: Recovery while the pair still exists]
Resynchronize the data from the S-VOL to the P-VOL (HRpM issues the "pairresync -swaps" command).

There are two types of takeover-recovery operations:


• Takeover-recovery (resync)
• Takeover-recovery (recreate).
A takeover-recovery operation is used as a recovery procedure when a user performs a
takeover operation and the secondary volume changes to split (SSWS). The details of the
executed takeover-recovery operation depend on the options specified, so you must check the
copy pair status and select the options accordingly.

 Takeover-Recovery – Pair is lost during the failure


• Pair status is SMPL
• The user can perform a takeover-recovery to restore data to the primary site
• Depending on the pair status, the user specifies whether to resynchronize or recreate the pairs

[Figure: Recovery after the pair has been lost]
Recreate the pairs (HRpM issues the "pairsplit -R", "pairsplit -S" and then "paircreate" commands).
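A rough CCI equivalent of this recreate path, assuming a copy group named TCGRP (the exact sequence and options Replication Manager issues may differ):

  pairsplit -g TCGRP -R                   # delete the pair from the S-VOL (remote) side
  pairsplit -g TCGRP -S                   # return the remaining volumes to SMPL
  paircreate -g TCGRP -vl -f never -c 15  # recreate the pairs, copying from the local (P-VOL) side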

Page 7-20
Hitachi TrueCopy Operations with Replication Manager
Hitachi Open Remote Copy (HORC) Takeover Support

 Select pair operation – Basic and advanced options

The Advanced operations option is used for performing the takeover, swap, force-split and takeover-recovery operations. The existing pair operations (split, resync, and so on) are categorized as Basic operations.

 Takeover-recovery operation provides additional parameters for resync


and recreating copy pairs

Page 7-21
Hitachi TrueCopy Operations with Replication Manager
Hitachi Open Remote Copy (HORC) Takeover Support

 Tasks view

[Screenshot: swap operation example - confirm the copy direction (reverse)]

The result of a Change Pair Status Wizard operation is registered as a task, and the user can confirm its status in the Tasks view. After the task is completed, the user can confirm the updated configuration or status in the copy group window.

 Additional notes
• If copy direction is reversed by HORC takeover operations, the following
settings for the original copy groups become unavailable:
 Alert
 My Copy Group
 Scheduled execution of pairs
• If the above settings are required while the copy directions are reversed,
please reconfigure these settings
 When the reversed copy direction is brought back to the original, the
above settings for original copy groups become available again

Page 7-22
Hitachi TrueCopy Operations with Replication Manager
Pair Operations

Pair Operations

 paircreate – P-VOL fence level


• Never: P-VOL will never be fenced
• Data: P-VOL will be fenced when the MCU cannot successfully execute an
update copy operation for any reason
• Status: P-VOL will be fenced only if the MCU is not able to change the
S-VOL status to suspended when an update copy operation fails

P-VOL fence level (sync only): Select the fence level for the new pairs. Default – Never. The
fence level determines the conditions under which the MCU will reject write operations to the P-
VOL.

 paircreate – Initial copy parameters


• Initial copy:
 Entire volume: Copy all P-VOL data to S-VOL (default)
 No copy: Do not copy any P-VOL data to S-VOL
• Initial copy pace:
 Desired number of tracks to be copied at one time (1-15) during the initial copy operation

Page 7-23
Hitachi TrueCopy Operations with Replication Manager
Pair Operations

Initial Copy Parameters

Select the initial copy options for the new pairs. These options cannot be changed after a pair
has been added:

• Initial Copy:

o Entire — Copy all P-VOL data to S-VOL

o No Copy = do not copy any P-VOL data to S-VOL

o Default = Entire

WARNING: The user must ensure that the P-VOL and S-VOL are already identical
when using the No Copy setting.

• Initial Copy Pace: Desired number of tracks to be copied at one time (1-15) during
the initial copy operation. Default = 15.
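For reference, the same choices surface as CCI paircreate parameters. A minimal sketch, assuming a copy group named TCGRP and that the pair is created from the local (P-VOL) side; all values are examples only:

  paircreate -g TCGRP -vl -f data -c 15     # fence level 'data', initial copy pace of 15 tracks
  paircreate -g TCGRP -vl -f never -nocopy  # 'never' fence, no initial copy (volumes must already be identical)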

Page 7-24
Hitachi TrueCopy Operations with Replication Manager
TrueCopy Operations

TrueCopy Operations

 Using Replication Manager


• Set up remote paths
• Manage pairs

Page 7-25
Hitachi TrueCopy Operations with Replication Manager
Setting Up Remote Paths

Setting Up Remote Paths

 Create remote path wizard – 1. Select Remote Storage System

Remote Paths are port connections between local and remote storage systems. These logical
routes are used by remote copy pairs for copying data from a P-VOL to an S-VOL.

Replication Manager allows remote path configuration for different replication technologies. You
must set up a remote path before you can use any of the following volume replication functions:

• TrueCopy

o For enterprise-class storage systems: Based on the copy direction, you specify
the port for the local storage system CU (MCU) and the port for the remote
storage system CU (RCU). Initiator and RCU target are set automatically as the
attributes of the specified ports.

o You can specify either CU free (recommended) (to connect only from the local
storage system to a remote storage system via a dynamically assigned MCU-RCU
pair) or CU specific (to connect each path via a specified MCU and RCU).

• Universal Replicator

o Using CU free, you can specify the port for the local storage system and the port
for the remote storage system. You must set paths for both directions. Initiator
and RCU target are set automatically as the attributes of the specified ports.

Page 7-26
Hitachi TrueCopy Operations with Replication Manager
Setting Up Remote Paths

 Create remote path wizard – 2. Define Remote Path – Assign path


label, path group ID, specify local and remote ports

The Select reverse direction path checkbox should be selected only if reverse links are set up between the two sites.

 Create remote path wizard – 3. Confirm – Reviewing and confirming


the task

Page 7-27
Hitachi TrueCopy Operations with Replication Manager
Setting Up Remote Paths

 Remote paths

 Adding or deleting ports to remote path

Page 7-28
Hitachi TrueCopy Operations with Replication Manager
Managing Pairs

Managing Pairs

 Setting up copy type as TCS (TrueCopy sync)


• Select P-VOL or S-VOL

 Setting up copy group


• Select pair management server, HORCM instance, UDP port and path group
ID
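These copy group settings map to the HORCM configuration files that CCI reads on the pair management server. A simplified, hypothetical example for the primary-side instance (group name, serial number, LDEV, host names and UDP ports are placeholders):

  HORCM_MON
  #ip_address    service  poll(10ms)  timeout(10ms)
  localhost      11000    1000        3000

  HORCM_CMD
  #dev_name (command device)
  \\.\CMD-312345:/dev/sd

  HORCM_LDEV
  #dev_group  dev_name  Serial#  CU:LDEV(LDEV#)  MU#
  TCGRP       tc_dev01  312345   01:20

  HORCM_INST
  #dev_group  ip_address         service
  TCGRP       remote-pm-server   11001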

Page 7-29
Hitachi TrueCopy Operations with Replication Manager
Managing Pairs

 Specifying additional settings

 Pair configurations view

Page 7-30
Hitachi TrueCopy Operations with Replication Manager
Managing Pairs

 Pair status change

 Pair status change


Page 7-31
Hitachi TrueCopy Operations with Replication Manager
Instructor Demonstration

Instructor Demonstration

 Hitachi TrueCopy
• Set up remote path
• Create pair
• Split pair
• Resync pair
Instructor
• Takeover Demonstration

Page 7-32
Hitachi TrueCopy Operations with Replication Manager
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe Hitachi TrueCopy key benefits and features
• Describe TrueCopy remote replication solutions
• Describe TrueCopy internal operations
• Describe how TrueCopy and Hitachi ShadowImage Replication work together
• Perform TrueCopy operations with Hitachi Replication Manager

Page 7-33
Hitachi TrueCopy Operations with Replication Manager
Module Review

Module Review

1. What are the pre-requisites for performing TrueCopy operations from


Hitachi Replication Manager?
2. What is the minimum number of paths required to set up TrueCopy
operations from Site A to Site B?

3. What are takeover operations?

4. What is the default value for fence level?

Page 7-34
8. Hitachi Universal Replicator Operations
with Replication Manager
Module Objectives

 Upon completion of this module, you should be able to:


• Describe the features of the Hitachi Universal Replicator
• List the benefits
• List the specifications
• Describe the supported configurations
• Perform Hitachi Universal Replicator operations with Hitachi Replication
Manager

Page 8-1
Hitachi Universal Replicator Operations with Replication Manager
Hitachi Universal Replicator Overview

Hitachi Universal Replicator Overview

 Hitachi Universal Replicator (HUR) is an asynchronous, continuous,


nondisruptive, host-independent remote data replication solution for
disaster recovery (DR) or data migration over long distances

 HUR and Hitachi ShadowImage Heterogeneous Replication can be


used together in the same storage system and on the same volumes to
provide multiple copies of data at the primary and remote sites

 Hitachi TrueCopy and HUR can be combined to allow advanced three


data center (3DC) configurations for optimal data protection

Hitachi Universal Replicator delivers a simplified asynchronous data replication solution for
enterprise storage. Universal Replicator is designed for organizations with demanding
heterogeneous data replication needs for business continuity or improved IT operations. HUR
delivers enterprise-class performance associated with storage system-based replication while
providing resilient business continuity without the need for redundant servers or replication
appliances.

Page 8-2
Hitachi Universal Replicator Operations with Replication Manager
Hitachi Universal Replicator Overview

 HUR benefits
• Ensures business continuity
• Optimizes resource use (lowers the cache and resource consumption on
production and primary storage systems)
• Improves bandwidth utilization and simplifies bandwidth planning
• Improves operational efficiency and resiliency (mitigates the impact of link
failures between sites)
• Provides more flexibility in trading off between Recovery Point Objective and
cost
• Implements advanced multi-data center support more easily
• Moves data among levels of tiered storage systems more easily

 Write I/O overview

[Figure: HUR write I/O flow - (1) the primary host writes to the P-VOL on the primary storage (MCU), (2) write complete is returned to the host, (3) the data is asynchronously remote-copied from the master JNL-VOL to the restore JNL-VOL, (4) the remote copy completes to the S-VOL on the secondary storage (RCU)]

The host I/O completes immediately after the write data is stored in the cache memory of the primary storage system's main disk control unit (MCU). The data is then asynchronously copied to the secondary storage system's remote disk control unit (RCU).

The MCU keeps the data to be transferred in the journal cache, and destages it to the journal volume in the event of a link failure.

Universal Replicator preserves the consistency of the copied data by maintaining write order during the copy process. To achieve this, HUR attaches write-order information to the data it copies.

Page 8-3
Hitachi Universal Replicator Operations with Replication Manager
Hitachi Universal Replicator Hardware

Hitachi Universal Replicator Hardware

 Remote connections (links)


• Bidirectional fibre connections to send and receive data between MCU and
RCU
• Minimum of four initiator ports, two in each system for redundancy
• Minimum of four RCU target ports, two (redundancy) in each system
• Unlike TrueCopy Remote Replication, Universal Replicator remote copy
connections (links) are not assigned to control units (CUs)
• Only Fibre Channel is supported
[Figure: Bidirectional links between the MCU and RCU - Initiator ports connected to RCU Target ports in both directions, two links per direction for redundancy]

Two Fibre Channel connections provide the pathways to send and receive data. At least four Fibre Channel connections are required:

• Fibre connection 1 makes a request to remote site

• Fibre connection 2 reads journal command and journal copy

Requires 4 reserved CHA ports

• Since CHA ports are configured in pairs, a total of 8 CHA ports will be reserved

Each site involved in data replication will include:

• Initiator > RCU target – Two initiator ports on each system for redundancy

• RCU target > initiator – Two RCU target ports on each system for redundancy

Page 8-4
Hitachi Universal Replicator Operations with Replication Manager
Hitachi Universal Replicator Components

Hitachi Universal Replicator Components

 Main control unit (MCU)


• The storage array at the primary site; contains P-VOLs and Master Journal Group

 Remote control unit (RCU)


• The storage array at the remote site; contains S-VOLs and Restore Journal Group

 Journal group
• Consists of data volumes and journal volumes
• Maintains volume consistency by operating on multiple data volumes with one command
• Master journal group in the MCU contains P-VOLs and master journal volumes
• Restore journal group in the RCU contains S-VOLs and restore journal volumes

 Journal volumes
• Stores differential data

Page 8-5
Hitachi Universal Replicator Operations with Replication Manager
Hitachi Universal Replicator Specifications

Hitachi Universal Replicator Specifications

 Primary volume (P-VOL) and secondary volume (S-VOL)


• For open systems, only OPEN-V emulation type is supported for HUR pair
volumes
• HUR requires a one-to-one relationship between the volumes of the pairs
P-VOL : S-VOL = 1 : 1
• Maximum number of pairs supported is 32,768 (one less with Command
Control Interface (CCI))
• If the HUR pairs include the Logical Unit Size Expansion (LUSE) pairs, the
maximum number of pairs decreases because a LUSE volume consists of
multiple logical devices (LDEVs)
• Supports external devices, LUSE volumes, Dynamic Provisioning volumes
(DP-VOLs)

 Journal group and journal volume


• A maximum of 256 journal groups per storage system
• A maximum of 8,192 data volumes in a journal group
• A maximum of 64 journal volumes in a journal group
• For open systems, only OPEN-V emulation type is supported for journal volumes
• Journal volumes cannot be a LUSE volume or a DP volume
• Journal volume must not have a path definition
• Each of the journal volumes can have different volume sizes and different RAID
configurations in a single journal group
• Journal volumes can be added to journal groups dynamically

Page 8-6
Hitachi Universal Replicator Operations with Replication Manager
Hitachi Universal Replicator Usage

Hitachi Universal Replicator Usage

 Journal volumes are used to store differential data

 Journal groups are used to maintain volume consistency

 Pull versus push:


• Upon request, the secondary storage array (RCU) pulls data from the primary storage array (MCU)
• With Hitachi TrueCopy Remote Replication bundle, the MCU pushes data to the RCU
• Three method invocations are used:
 Journal obtain
 Journal copy
 Journal restore

 When paired, asynchronous transfer automatically synchronizes secondary data volumes

Page 8-7
Hitachi Universal Replicator Operations with Replication Manager
Base Journal (Initial Copy)

Base Journal (Initial Copy)

[Figure: Base journal (initial copy) - pointers to the data volume are stored in the master journal volume and a write sequence number is assigned inside the metadata; during the initial copy process, data is sent directly from the P-VOL on the primary subsystem to the restore journal volume on the secondary subsystem, and from there to the S-VOL]

Base Journal — Initial Copy

Upon initiation of the paircreate, the primary site stores pointers to the data in the P-VOL (primary data volume) as a base journal:

• For the Base Journal, only metadata is stored in the journal volume

• The data in the P-VOL is not copied to the Master Journal Volume

The base journal data is obtained by the RCU repeatedly sending read commands to the MCU.
The data in the secondary data volume synchronizes with the data in the primary data volume
via pointers. This operation is the same as Initial Copy in TrueCopy Remote Replication. Initial
Copy is complete when the MCU informs the RCU that the highest sequence number has been
sent.

Page 8-8
Hitachi Universal Replicator Operations with Replication Manager
Update Journal (Update Copy)

Update Journal (Update Copy)

[Figure: Update journal (update copy) - differential data is stored in the master journal volume on the primary subsystem, with a write sequence number assigned within the metadata; the RCU issues read journal commands, the journal data is read by the RCU into the restore journal volume, and the updates are restored to the S-VOL on the secondary subsystem]

Journal copy is the function that copies the data in the primary journal volumes (M-JNL) in the MCU to the secondary journal volumes (R-JNL) at the secondary site. After the pair create or pair resync operation, the secondary storage system issues read journal commands to the primary storage system to request the transfer of the journal data stored in the primary journal volumes. The MCU transfers the journal data in its journal volumes to the RCU whenever there is journal data that has not yet been sent; if the primary storage system has no such journal data, it returns an indication that none is available. The RCU stores the journal data transferred from the MCU in its secondary journal volumes. The read journal commands are issued repeatedly and regularly from the RCU to the MCU. After the data is restored, the RCU reports the highest restored journal sequence number to the MCU with the next read journal command, and based on this information the corresponding journal data is discarded in the MCU.

Page 8-9
Hitachi Universal Replicator Operations with Replication Manager
Journal Restore

Journal Restore

 Data is copied from the master journal volume to the restore journal volume

 Data is sorted in the restore journal volume by sequence number

 Data can be received out of sequence but is always restored to secondary data volumes in sequence

[Figure: Journal restore on the secondary subsystem - restore journal volume to S-VOL]

Journal restore is the function of copying the data in the restore/secondary journal volume to
the S-VOL at the secondary site. The data in the restore/secondary journal volume is copied to
the secondary data volume according to the write sequence number. This ensures the write
sequence consistency between the primary and secondary data volumes. After the journal data
is restored to the secondary data volume, the journal data is discarded at the secondary site.

Hitachi Universal Replicator Configurations

 Allowable configurations

[Figure: Three Universal Replicator configurations between Universal Storage Platform MCU/RCU systems - one allowed and two not allowed, differing in how P-VOLs and S-VOLs are arranged within the journal (JNL) groups]

Page 8-10
Hitachi Universal Replicator Operations with Replication Manager
Three Data Center Configuration

Three Data Center Configuration

 TrueCopy synchronous and Universal Replicator can be combined into


a three data center (3DC) configuration

 This is a 3DC multi-target illustration


[Figure: 3DC multi-target - the primary volume is both the TrueCopy Synchronous P-VOL, replicated over a Fibre Channel link to the in-region TrueCopy S-VOL, and the Universal Replicator P-VOL, replicated through the master and restore journals over channel extenders to the out-of-region Universal Replicator S-VOL]

Three data center strategies combine in-region and out-of-region replication to provide the
strongest protection: fast recovery and data currency for local site failures, combined with good
protection from regional disasters. However, multiple data centers and data copies increase
costs, so robust 3DC strategies have typically been limited to large organizations with extremely
critical business continuity needs.

The above figure illustrates a 3DC multi-target configuration, in which data is replicated to two
remote sites in parallel. TrueCopy synchronous replication maintains a current copy of the
production data at an in-region recovery data center. At the same time, the Universal Storage
Platform at the primary site replicates the data to an out-of-region recovery site, using Universal
Replicator asynchronous replication across a separate replication network.

In case of production site failure, processing can resume at the in-region recovery site, using a
current TrueCopy replica of production data. The in-region hot site can also support planned
failover when needed for maintenance, upgrades, or business continuity testing. Meanwhile,
Universal Replicator provides ongoing replication to the out-of-region site, maintaining robust
business continuity protection. In case of a regional disaster, the out-of-region data center can
recover rapidly with a slightly older but fully consistent copy of production data.

Page 8-11
Hitachi Universal Replicator Operations with Replication Manager
Three Data Center Configuration

 This is a 3DC cascade illustration

[Figure: 3DC cascade - TrueCopy Remote Replication Synchronous copies the primary P-VOL over a Fibre Channel link to an intermediate P/S-VOL, which Universal Replicator then copies through the master and restore journals, over channel extenders, to the out-of-region S-VOL]

The above figure illustrates a 3DC cascade configuration that uses synchronous TrueCopy
Remote Replication to maintain a current copy of the production data at an in-region data
center. As noted earlier, 3DC cascade configurations make sense when the in-region hot site
provides processing capabilities for recovery.

The storage system at the in-region site also cascades the data to an out-of-region recovery
site, using Universal Replicator asynchronous replication. In comparison with other
asynchronous replication technologies, Universal Replicator does not require an additional point-
in-time copy of the data volume at the intermediate site. Universal Replicator stages the data to
the journal disk, which is relatively small compared with a complete data copy. This feature
saves physical disk space and reduces the cost of the 3DC configuration.

Page 8-12
Hitachi Universal Replicator Operations with Replication Manager
Hitachi Universal Replicator Operations

Hitachi Universal Replicator Operations

 Setting up remote paths

 Setting up journal groups

 Managing pairs

Page 8-13
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Remote Paths

Setting Up Remote Paths

 Create remote path wizard – 1. Select Remote Storage System

Remote paths are port connections between local and remote storage systems. These logical routes
are used by remote copy pairs for copying data from a P-VOL to an S-VOL.

Replication Manager allows remote path configuration for different replication technologies. You
must set up a remote path before you can use any of the following volume replication functions:

• TrueCopy

o For enterprise-class storage systems: Based on the copy direction, you specify the
port for the local storage system CU (MCU) and the port for the remote storage
system CU (RCU)

o Initiator and RCU target are set automatically as the attributes of the specified ports

o You can specify either CU free (recommended) (to connect only from the local
storage system to a remote storage system via a dynamically assigned MCU-RCU
pair) or CU specific (to connect each path via a specified MCU and RCU)

• Universal Replicator

o Using CU free, you can specify the port for the local storage system and the port for
the remote storage system

o You must set paths for both directions

o Initiator and RCU target are set automatically as the attributes of the specified ports

Page 8-14
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Remote Paths

 Defining remote paths


• Reverse path

The reverse link configuration is mandatory for Universal Replicator.

 Confirming the settings

Page 8-15
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Journal Groups

Setting Up Journal Groups

Universal Replicator uses journal volumes as volume copy buffers. Journal groups are used to keep the journal data for asynchronous data transfer and must be set up before creating Universal Replicator volume pairs. Journal groups must be set in each storage system at both the primary and secondary sites: the journal volume for the primary site and the primary volume, and the journal volume for the secondary site and the secondary volume, are defined as journal groups.

Page 8-16
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Journal Groups

 Selecting journal volumes

Page 8-17
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Journal Groups

 Setting journal options

Inflow control: Allows you to specify whether to restrict inflow of update I/Os to the journal
volume (in other words, whether to delay response to the hosts)

• Yes indicates inflow will be restricted

• No indicates inflow will not be restricted

Note: If Yes is selected and the metadata or the journal data is full, the update I/Os may stop
(Journal Groups suspended).

Data overflow watch: Allows you to specify the time (in seconds) for monitoring whether
metadata and journal data are full; this value must be within the range of 0 to 600 seconds

Note: If Inflow Control is No, Data Overflow Watch does not take effect and does not
display anything.

Path Watch Time: Allows you to specify the interval from when a path gets blocked to when a
mirror gets split (suspended); This value must be within the range of 1 to 60 minutes

Note: Make sure that the same interval is set to both the master and restore journal groups in
the same mirror, unless otherwise required. If the interval differs between the master
and restore journal groups, these journal groups will not be suspended simultaneously.
For example, if the interval for the master journal group is 5 minutes and the interval for
the restore journal group is 60 minutes, the master journal group will be suspended in 5
minutes after a path gets blocked, and the restore journal group will be suspended in 60
minutes after a path gets blocked.

Page 8-18
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Journal Groups

Caution: By default, the factory enables (turns ON) SVP mode 449, disabling the path watch
time option. If you’d like to enable the path watch time option, please disable mode
449 (turn it OFF).

Note: If you want to split a mirror (suspend) immediately after a path becomes blocked, please
disable SVP modes 448 and 449 (turn OFF).

Forward path watch time: Allows you to specify whether to forward the Path Watch Time
value of the master journal group to the restore journal group. If the Path Watch Time value is
forwarded, the two journal groups will have the same Path Watch Time value.

• Yes: The Path Watch Time value will be forwarded to the restore journal group

• No: The Path Watch Time value will not be forwarded to the restore journal group; No
is the default

• Blank: The current setting of Forward Path Watch Time will remain unchanged

Caution: This option cannot be specified in the remote site.

Use of Cache: Allows you to specify whether to store journal data in the restore journal group
into the cache

• Use: Journal data will be stored into the cache

Note: When there is insufficient space in the cache, journal data will also be stored into
the journal volume

• Not Use: Journal data will not be stored into the cache

• Blank: The current setting of Use of Cache will remain unchanged

Caution: This setting does not take effect on master journal groups. However, if the
horctakeover option is used to change a master journal group into a restore
journal, this setting will take effect on the journal group.

Speed of Line: Allows you to specify the line speed of data transfer; The unit is Mb/sec
(megabits per second)

• You can specify one of the following: 256, 100, or 10

Caution: This setting does not take effect on master journal groups. However, if the
horctakeover option is used to change a master journal group into a restore journal
group, this setting will take effect on the journal group.

Delta resync Failure: Allows you to specify the processing that would take place when delta
resync operation cannot be performed

Page 8-19
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Journal Groups

• Entire: All the data in primary data volume will be copied to remote data volume when
delta resync operation cannot be performed; The default is Entire

• None: No processing will take place when delta resync operation cannot be performed

o Therefore, the remote data volume will not be updated

o If Delta Resync pairs are desired, they will have to be created manually

Caution: This option cannot be specified in the remote site.

Page 8-20
Hitachi Universal Replicator Operations with Replication Manager
Setting Up Journal Groups

 Confirming settings

 Viewing journal group status

Page 8-21
Hitachi Universal Replicator Operations with Replication Manager
Managing Pairs

Managing Pairs

 Setting up copy type as UR


• Select P-VOL or S-VOL

UR = Hitachi Universal Replicator
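When the copy type is UR, the underlying CCI pair creation references journal groups rather than a fence level. A hedged sketch, assuming a copy group named URGRP with master journal ID 0 and restore journal ID 1 (all values are placeholders):

  paircreate -g URGRP -vl -f async -jp 0 -js 1  # create UR pairs using journal 0 (master) and 1 (restore)
  pairdisplay -g URGRP -fcx                     # monitor the initial copy progress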

 Specifying additional settings

Page 8-22
Hitachi Universal Replicator Operations with Replication Manager
Managing Pairs Continued

Managing Pairs Continued

 Viewing pair status

Managing Pairs

 Viewing advanced options on pair operations

Page 8-23
Hitachi Universal Replicator Operations with Replication Manager
Instructor Demonstration

Instructor Demonstration

 Hitachi Universal Replicator


• Setting up ports
• Setting up journals
• Managing pairs
Instructor
Demonstration

Page 8-24
Hitachi Universal Replicator Operations with Replication Manager
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe the features of the Hitachi Universal Replicator
• List the benefits
• List the specifications
• Describe the supported configurations
• Perform Hitachi Universal Replicator operations with Hitachi Replication
Manager

Page 8-25
Hitachi Universal Replicator Operations with Replication Manager
Module Review

Module Review

1. List the components of a Hitachi Universal Replicator configuration.


2. What is the minimum number of links used for a redundant
configuration?
3. What is the advantage of using journal volumes in the replication
process?
4. List the steps for setting up Universal Replicator copy pair
operations.

Page 8-26
9. Hitachi Replication Manager Monitoring
Operations
Module Objectives

 Upon completion of this module, you should be able to:


• List the pair monitoring options available in Hitachi Replication Manager
(HRpM)
• Describe the process for setting up alerts
• Manage alerts

Page 9-1
Hitachi Replication Manager Monitoring Operations
Monitoring Copy Operations

Monitoring Copy Operations

 Operations monitored by Replication Manager


• Configuration information
• Pair status
• Performance of remote copies
• Status of application replicas
• Resource utilization

 Alerts can be generated when a monitored target, such as a copy pair or buffer,
satisfies a preset condition

 You can specify a maximum of 1,000 conditions

Alert notifications are useful for enabling a quick response to a hardware failure or for
determining the cause of a degradation in transfer performance. They are also useful for
preventing errors due to buffer overflow and insufficient copy licenses, thereby facilitating the
continuity of normal operation. Because you can receive alerts by email or SNMP traps, you can
also monitor the replication environment while you are logged out of Replication Manager.

Page 9-2
Hitachi Replication Manager Monitoring Operations
Monitoring Copy Operations

 Monitoring pair configuration information


• For specific volume
• For specific copy group
• Configuration files
• Configuration files definition

You can monitor copy pair configurations in multiple ways using Replication Manager. You can
use a tree view to check the configuration definition file for CCI that is created by Replication
Manager or other products, or to check the copy group definition file for Business Continuity
Manager or Mainframe Agent. You can limit the range of copy pairs being monitored to those of
a host or storage system, and also check the configuration of related copy pairs. You can also
check copy pair configurations from a copy group perspective.

 Monitoring pair status


• Using alerts
 Alerts are generated upon a change in status
• Using My Copy Groups
 To focus on specific copy groups when monitoring copy groups
• Using refresh settings
 Click Refresh in the Application area

You can configure pair status monitoring for hosts, storage systems, copy groups or copy pairs
to detect an unexpected pair status. When a pair status for which you require notification is
detected, Replication Manager can be configured to alert you with an email message or an
SNMP trap. Replication Manager also detects pair statuses based on the periodic monitoring of
pair statuses.
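Alongside the GUI, pair status can also be spot-checked from a pair management server with CCI; the statuses Replication Manager reports (PAIR, COPY, PSUS, PSUE and so on) are the same ones CCI returns. For example, assuming a copy group named TCGRP:

  pairdisplay -g TCGRP -fcx  # show pair status, copy percentage and LDEV numbers in hex
  pairvolchk -g TCGRP -s     # report the volume attribute and pair status (useful in scripts)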

Page 9-3
Hitachi Replication Manager Monitoring Operations
Monitoring Copy Operations

 Monitoring performance of remote copies


• Viewing copy progress
 My Copy Group or Pair Configurations
• Checking buffer usage (side files and journal volumes)
 Copy Group Summary
• Checking write delay time (C/T delta) for each copy group
 My Group, Copy Group Summary

You can configure threshold monitoring for asynchronous remote copy metrics to detect an
unexpected overflow of preset thresholds. You can display the transfer delay state between the
primary and secondary volumes for each copy group. This feature of Replication Manager is
used to monitor asynchronous remote copying by using Hitachi TrueCopy Extended Distance,
and Hitachi Universal Replicator. The transfer delay state of remote copies displays these types
of information:
• Usage of side file/journal
• Write delay time (C/T delta)
• Usage rate of pool capacity

 Monitoring status of application replicas


• Data protection status
• Microsoft Exchange and MS SQL Server

Monitoring the Status of Application Replicas

• You can monitor the progress of replica creation using the summary displayed in the
Applications and Servers subwindows
• You can receive notification through email or SNMP traps on replica monitoring
parameters

Page 9-4
Hitachi Replication Manager Monitoring Operations
Monitoring Copy Operations

 Monitoring resource utilization


• Usage ratio of pools and journal groups
• Alert functionality to send notifications to avoid overflow and maintain data
consistency

You can monitor the usage ratio of buffers (pools and journal groups) and receive alert
notification. You can get notification by way of email or SNMP traps based on the predefined
thresholds. If you are an administrator, you can add volumes to the buffers using Replication
Manager.

 Monitoring copy license use


• View and monitor the used capacity and copy license consumption
percentage for each copy product to prevent license expiry
• Configure alerts to get notifications when copy license consumption reaches
a particular threshold or exceeds the licensed capacity

You can monitor the used capacity and copy license usage percentage for each copy product in
complex replication environments. You can configure alerts to send notifications when copy
license usage reaches a particular threshold or the licensed capacity has been reached.

Page 9-5
Hitachi Replication Manager Monitoring Operations
Setting Up Alerts

Setting Up Alerts

 Create alerts – Host > Copy Groups

 Create alerts – Host > (specific volume)

Page 9-6
Hitachi Replication Manager Monitoring Operations
Setting Up Alerts

 Create alerts – Storage System > Open > Copy Licenses

 Create alerts – Storage System > Open > Pools

Page 9-7
Hitachi Replication Manager Monitoring Operations
Create Alert Setting Wizard

Create Alert Setting Wizard

 Creating alerts – 1. Introduction

 Creating alerts – 2. Select monitoring type

Page 9-8
Hitachi Replication Manager Monitoring Operations
Create Alert Setting Wizard

 Creating alerts – 3. Alert setting

 Creating alerts – 4. Edit alert action

Page 9-9
Hitachi Replication Manager Monitoring Operations
Create Alert Setting Wizard

 Creating alerts – 5. Confirm and 6. Finish

Page 9-10
Hitachi Replication Manager Monitoring Operations
Alert Status

Alert Status

 Exporting alerts from alert list

 Editing, deleting, testing, disabling alerts

Page 9-11
Hitachi Replication Manager Monitoring Operations
Alert Status

 Editing alert setting

Page 9-12
Hitachi Replication Manager Monitoring Operations
Instructor Demonstration

Instructor Demonstration
 Monitoring options
• Dashboard
• Copy pair status
• License status
• Alerts
Instructor
Demonstration

Page 9-13
Hitachi Replication Manager Monitoring Operations
Module Summary

Module Summary

 In this module, you should have learned to:


• List the pair monitoring options available in Hitachi Replication Manager
(HRpM)
• Describe the process for setting up alerts
• Manage alerts

Module Review

1. How can you use alerts?


2. Copy Pair status can be monitored for what entities?

Page 9-14
10. Application Replicas
Module Objectives

 Upon completion of this module, you should be able to:


• Describe how Hitachi Replication Manager (HRpM) supports managing
application replicas
• Identify the components required to set up an environment for managing
application replicas
• Describe the Application Agent architecture and functions
• List the steps for handling backup and restores for:
 Microsoft® Exchange Server (MS-Exchange)
 Microsoft SQL Server® (MS-SQL)

Page 10-1
Application Replicas
Application Replicas

Application Replicas

As with copy pair management, the creation and management of application replicas is
organized around tasks and storage assets.

Backup and Restore Overview

Page 10-2
Application Replicas
Application Backup and Restore Features

Application Backup and Restore Features

 Simpler setup
• Simple deployment – Application agent
 Installer deploys the required components for replica management
 Can be easily downloaded from HCS GUI to application servers
• Simple agent setup: HRpM hides complex parameters which users normally
do not need to know about

HRpM Application Agent

 Consolidated management
• Multiple server management: HRpM allows user to manage multiple servers
in a single point of view
• Integration with pair management: Pair Management button easily navigates
users to required pair configurations

[Screenshot callouts: shows the list of application servers in a single consolidated view; the Pair Configuration Wizard can be launched from the Applications View]

Page 10-3
Application Replicas
Application Backup and Restore Features

 Enhanced monitoring
• Data protection status: Intuitive icon shows the summary status, so user can
easily identify the possible issues
• Email notification: Errors can be notified by email for immediate actions
Protection Status for Hosts and Storage Groups / Instances:
• Normal: Latest replica is available
• Warning: Only the old replica is available
• Critical: No replica is available due to errors
• Unknown: No replica task has been scheduled yet

[Screenshot callout: email notification settings for Application Agent]

Page 10-4
Application Replicas
Components

Components

 Servers
• Storage management server: Provides the management interface
• Application server
 Mailbox server of MS-Exchange
 Database server of MS-SQL Server
• Backup/import server: Server which mounts the S-VOL

 Software
• Application agent: Executes replica operations by communicating with CCI and MS-SQL/MS-Exchange Server
• CCI (+ HDvM Agent)
 Executes pair operations (resync or split)
 HDvM agent is not mandatory for creating a replica, but the agent is required for pair configuration
• HRpM and HDvM server: Manages backup and restore tasks

[Figure: Web client, HRpM server and HDvM server manage the application server and backup/import server, each running Application Agent and CCI; pairs are replicated with Hitachi ShadowImage Heterogeneous Replication or Hitachi Copy-on-Write Snapshot/HTI]

HDvM = Hitachi Device Manager

HTI = Hitachi Thin Image

Application Agent

 Application agent architecture


[Figure: Application Agent architecture - the management server (HRpM server and HDvM server) communicates through an XML API (port 24041) with the Application Agent on the application server and on the import server; each of those servers also runs CCI and the HDvM agent, and CCI accesses the command device (CMD) on the storage system that holds the P-VOL and S-VOL. Pair operations use the Pair Configuration Wizard and Change Pair Status Wizard; replica operations and agent settings use the Create Replica Wizard, Restore Replica Wizard and Setup Agent dialog]

Page 10-5
Application Replicas
System Configuration for Remote Copy

System Configuration for Remote Copy

 HRpM application agent also supports creating a replica to the remote site
• Remote copy support:
 Hitachi TrueCopy synchronous — MS-SQL Server/MS-Exchange
 Hitachi Universal Replicator — MS-SQL Server
• Import server is required on remote site (MS-Exchange Only)
[Figure: Remote copy configuration - the web client, HRpM server and HDvM server manage the local application server (Application Agent, CCI) and the remote import server (Application Agent, CCI); the local and remote storage systems are connected by TrueCopy Sync, and a second HDvM server resides at the remote site]
HDvM server on remote site is not mandatory for replica operation, though it is required for performing pair configuration on remote site.

Backup and Restore Operations

 Features:
• Discovering application agent
• Creating replica
• Restoring replica
• Mounting replica

Page 10-6
Application Replicas
Discovering Application Agent

Discovering Application Agent

 Discovery and configuration of an agent through the intuitive GUI


• The New Application Agent option in the Information Source view provides the screens for the settings

Page 10-7
Application Replicas
Creating Replicas

Creating Replicas

Page 10-8
Application Replicas
Create Replica Wizard

Create Replica Wizard

 The result of the Create Replica Wizard is registered as a task, and the Tasks view provides the details of the task execution
• Task History allows you to view the history of execution results
• Edit Task allows you to update the execution schedule

Restoring Replicas

 The application database can be restored using the replica
• Easy to identify the available backups on the GUI screen
• Quick recovery of Exchange by using the copy pair reverse-resync operation (copying information from the S-VOL back to the P-VOL)
• The Replica History tab shows the list of created replicas
Page 10-9
Application Replicas
Restoring Replica

Restoring Replica

Mounting or Unmounting Replica

Page 10-10
Application Replicas
Module Summary

Module Summary

 In this module, you should have learned to:


• Describe how Hitachi Replication Manager (HRpM) supports managing
application replicas
• Identify the components required for setting up an environment for managing
application replicas
• Describe the Application Agent architecture and functions
• List the steps for handling backup and restores for:
 Microsoft Exchange Server (MS Exchange)
 Microsoft SQL Server (MS SQL)

Module Review

1. Where can you download the application agents?

2. What applications are supported for application replicas?

3. What functions are supported for application replica management?

Page 10-11
Application Replicas
Your Next Steps

Your Next Steps

Validate your knowledge and skills with certification.


Follow us on social media:

@HDSAcademy
Check your progress in the Learning Path.

Review the course description for supplemental courses, or


register, enroll, and view additional course offerings.

Get practical advice and insight with HDS white papers.

Ask the Academy a question or give us feedback on this course


(employees only).

Join the conversation with your peers in the HDS Community.

Certification: http://www.hds.com/services/education/certification

Learning Paths:

• Customer Learning Path (North America, Latin America, and APAC):


http://www.hds.com/assets/pdf/hitachi-data-systems-academy-customer-learning-paths.pdf

• Customer Learning Path (EMEA): http://www.hds.com/assets/pdf/hitachi-data-systems-academy-


customer-training.pdf

• All Partners Learning Paths:


https://portal.hds.com/index.php?option=com_hdspartner&task=displayWebPage&menuName=P
X_PT_PARTNER_EDUCATION&WT.ac=px_rm_ptedu
• Employee Learning Paths:
http://loop.hds.com/community/hds_academy
Learning Center: http://learningcenter.hds.com

White Papers: http://www.hds.com/corporate/resources/


For Partners and Employees – theLoop:
http://loop.hds.com/community/hds_academy/course_announcements_and_feedback_community
For Customers, Partners, Employees – Hitachi Data Systems Community:
https://community.hds.com/welcome
For Customers, Partners, Employees – Hitachi Data Systems Academy link to Twitter:

http://www.twitter.com/HDSAcademy

Page 10-12
Communicating in a Virtual Classroom:
Tools and Features
Virtual Classroom Basics
This section covers the basic functions available when communicating in a virtual classroom.

Communicating in a Virtual Classroom

 Chat

 Q&A

 Feedback Options
• Raise Hand
• Yes/No
• Emoticons

 Markup Tools
• Drawing Tools
• Text Tool

Page V-1
Communicating in a Virtual Classroom: Tools and Features
Reminders: Intercall Call-Back Teleconference

Reminders: Intercall Call-Back Teleconference

Synchronizing Your Audio to the WebEx Session

Page V-2
Communicating in a Virtual Classroom: Tools and Features
Feedback Features — Try Them

Feedback Features — Try Them

Raise Hand | Yes | No | Emoticons

Markup Tools (Drawing and Text) — Try Them

Pointer | Text Tool | Writing Tools | Drawing Tools | Highlighter | Annotation Colors | Eraser

Page V-3
Communicating in a Virtual Classroom: Tools and Features
Intercall (WebEx) Technical Support

Intercall (WebEx) Technical Support

Call 800.374.1852

Page V-4
Communicating in a Virtual Classroom: Tools and Features
WebEx Hands-On Lab Operations

WebEx Hands-On Lab Operations

 From session, Instructor starts Hands-On remote lab


 Instructor assigns lab teams (lab teams assigned to a computer)
 Learners are prompted to connect to their lab computer
• Click Yes

 After connecting to lab computer, learners see a message asking them to disconnect and
connect to the new teleconference
• Click Yes (you do not need to hang up and dial a new number; Intercall auto-connects you to the lab conference)

 Instructor can join each lab team’s conference


 Members of a lab group can communicate:
• With each other using CHAT (lower right hand corner of the computer screen) and telephone

• With Instructor using Raise Hand feature


 Only 1 learner is in control of the lab desktop at any one time
• To pass control, select the learner name and click Presenter Ball

Page V-5
Communicating in a Virtual Classroom: Tools and Features
WebEx Hands-On Lab Operations

Page V-6
Training Course Glossary
A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

—A—
AaaS — Archive as a Service. A cloud computing business model.
AAMux — Active-Active Multiplexer.
ACC — Action Code. A SIM (System Information Message).
ACE — Access Control Entry. Stores access rights for a single user or group within the Windows security model.
ACL — Access Control List. Stores a set of ACEs so that it describes the complete set of access rights for a file system object within the Microsoft Windows security model.
ACP ― Array Control Processor. Microprocessor mounted on the disk adapter circuit board (DKA) that controls the drives in a specific disk array. Considered part of the back end; it controls data transfer between cache and the hard drives.
ACP Domain ― Also Array Domain. All of the array-groups controlled by the same pair of DKA boards, or the HDDs managed by 1 ACP PAIR (also called BED).
ACP PAIR ― Physical disk access control logic. Each ACP consists of 2 DKA PCBs to provide 8 loop paths to the real HDDs.
Actuator (arm) — Read/write heads are attached to a single head actuator, or actuator arm, that moves the heads around the platters.
AD — Active Directory.
ADC — Accelerated Data Copy.
Address — A location of data, usually in main memory or on a disk. A name or token that identifies a network component. In local area networks (LANs), for example, every node has a unique address.
ADP — Adapter.
ADS — Active Directory Service.
AIX — IBM UNIX.
AL — Arbitrated Loop. A network in which nodes contend to send data and only 1 node at a time is able to send data.
AL-PA — Arbitrated Loop Physical Address.
AMS — Adaptable Modular Storage.
APAR — Authorized Program Analysis Reports.
APF — Authorized Program Facility. In IBM z/OS and OS/390 environments, a facility that permits the identification of programs that are authorized to use restricted functions.
API — Application Programming Interface.
APID — Application Identification. An ID to identify a command device.
Application Management — The processes that manage the capacity and performance of applications.
ARB — Arbitration or request.
ARM — Automated Restart Manager.
Array Domain — Also ACP Domain. All functions, paths and disk drives controlled by a single ACP pair. An array domain can contain a variety of LVI or LU configurations.
Array Group — Also called a parity group. A group of hard disk drives (HDDs) that form the basic unit of storage in a subsystem. All HDDs in a parity group must have the same physical capacity.
Array Unit — A group of hard disk drives in 1 RAID structure. Same as parity group.
ASIC — Application specific integrated circuit.
ASSY — Assembly.
Asymmetric virtualization — See Out-of-Band virtualization.
Asynchronous — An I/O operation whose initiator does not await its completion before proceeding with other work. Asynchronous I/O operations enable an initiator to have multiple concurrent I/O operations in progress. Also called Out-of-Band virtualization.
ATA — Advanced Technology Attachment. A disk drive implementation that integrates the controller on the disk drive itself. Also known as IDE (Integrated Drive Electronics).
ATR — Autonomic Technology Refresh.
Authentication — The process of identifying an individual, usually based on a username and password.
AUX — Auxiliary Storage Manager.
Availability — Consistent direct access to information over time.
-back to top-

—B—
B4 — A group of 4 HDU boxes that are used to contain 128 HDDs.
BA — Business analyst.
Back end — In client/server applications, the client part of the program is often called the front end and the server part is called the back end.
Backup image — Data saved during an archive operation. It includes all the associated files, directories, and catalog information of the backup operation.
BASM — Basic Sequential Access Method.
BATCTR — Battery Control PCB.
BC — (1) Business Class (in contrast with EC, Enterprise Class). (2) Business Coordinator.
BCP — Base Control Program.
BCPii — Base Control Program internal interface.
BDAM — Basic Direct Access Method.
BDW — Block Descriptor Word.
BED — Back end director. Controls the paths to the HDDs.
Big Data — Refers to data that becomes so large in size or quantity that a dataset becomes awkward to work with using traditional database management systems. Big data entails data capacity or measurement that requires terms such as Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) or Yottabyte (YB). Note that variations of this term are subject to proprietary trademark disputes in multiple countries at the present time.
BIOS — Basic Input/Output System. A chip located on all computer motherboards that governs how a system boots and operates.
BLKSIZE — Block size.
BLOB — Binary large object.
BP — Business processing.
BPaaS — Business Process as a Service. A cloud computing business model.
BPAM — Basic Partitioned Access Method.
BPM — Business Process Management.
BPO — Business Process Outsourcing. Dynamic BPO services refer to the management of partly standardized business processes, including human resources delivered in a pay-per-use billing relationship or a self-service consumption model.
BST — Binary Search Tree.
BSTP — Blade Server Test Program.
BTU — British Thermal Unit.
Business Continuity Plan — Describes how an organization will resume partially or completely interrupted critical functions within a predetermined time after a disruption or a disaster. Sometimes also called a Disaster Recovery Plan.
-back to top-

—C—
CA — (1) Continuous Access software (see HORC), (2) Continuous Availability or (3) Computer Associates.
Cache — Cache Memory. Intermediate buffer between the channels and drives. It is generally available and controlled as 2 areas of cache (cache A and cache B). It may be battery-backed.
Cache hit rate — When data is found in the cache, it is called a cache hit, and the effectiveness of a cache is judged by its hit rate.
Cache partitioning — Storage management software that allows the virtual partitioning of cache and allocation of it to different applications.
CAD — Computer-Aided Design.
CAGR — Compound Annual Growth Rate.
Capacity — Capacity is the amount of data that a storage system or drive can store after configuration and/or formatting.
Most data storage companies, including HDS, calculate capacity based on the premise that 1KB = 1,024 bytes, 1MB = 1,024 kilobytes, 1GB = 1,024 megabytes, and 1TB = 1,024 gigabytes. See also Terabyte (TB), Petabyte (PB), Exabyte (EB), Zettabyte (ZB) and Yottabyte (YB).
CAPEX — Capital expenditure — the cost of developing or providing non-consumable parts for the product or system. For example, the purchase of a photocopier is the CAPEX, and the annual paper and toner cost is the OPEX. (See OPEX).
CAS — (1) Column Address Strobe. A signal sent to a dynamic random access memory (DRAM) that tells it that an associated address is a column address. CAS-column address strobe sent by the processor to a DRAM circuit to activate a column address. (2) Content-addressable Storage.
CBI — Cloud-based Integration. Provisioning of a standardized middleware platform in the cloud that can be used for various cloud integration scenarios.
An example would be the integration of legacy applications into the cloud or integration of different cloud-based applications into one application.
CBU — Capacity Backup.
CBX — Controller chassis (box).
CC – Common Criteria. In regards to Information Technology Security Evaluation, it is a flexible, cloud related certification framework that enables users to specify security functional and assurance requirements.
CCHH — Common designation for Cylinder and Head.
CCI — Command Control Interface.
CCIF — Cloud Computing Interoperability Forum. A standards organization active in cloud computing.
CDP — Continuous Data Protection.
CDR — Clinical Data Repository.
CDWP — Cumulative disk write throughput.
CE — Customer Engineer.
CEC — Central Electronics Complex.
CentOS — Community Enterprise Operating System.
Centralized Management — Storage data management, capacity management, access security management, and path management functions accomplished by software.
CF — Coupling Facility.
CFCC — Coupling Facility Control Code.
CFW — Cache Fast Write.
CH — Channel.
CH S — Channel SCSI.
CHA — Channel Adapter. Provides the channel interface control functions and internal cache data transfer functions. It is used to convert the data format between CKD and FBA. The CHA contains an internal processor and 128 bytes of edit buffer memory. Replaced by CHB in some cases.
CHA/DKA — Channel Adapter/Disk Adapter.
CHAP — Challenge-Handshake Authentication Protocol.
CHB — Channel Board. Updated DKA for Hitachi Unified Storage VM and additional enterprise components.
Chargeback — A cloud computing term that refers to the ability to report on capacity and utilization by application or dataset, charging business users or departments based on how much they use.
CHF — Channel Fibre.
CHIP — Client-Host Interface Processor. Microprocessors on the CHA boards that process the channel commands from the hosts and manage host access to cache.
CHK — Check.
CHN — Channel adapter NAS.
CHP — Channel Processor or Channel Path.
CHPID — Channel Path Identifier.
CHSN or C-HSN — Cache Memory Hierarchical Star Network.
CHT — Channel tachyon. A Fibre Channel protocol controller.
CICS — Customer Information Control System.
CIFS protocol — Common internet file system is a platform-independent file sharing system. A network file system access protocol primarily used by Windows clients to communicate file access requests to Windows servers.
CIM — Common Information Model.
CIS — Clinical Information System.
CKD ― Count-key Data. A format for encoding data on hard disk drives; typically used in the mainframe environment.
CKPT — Check Point.
CL — See Cluster.
CLA – See Cloud Security Alliance.
CLI — Command Line Interface.
CLPR — Cache Logical Partition. Cache can be divided into multiple virtual cache memories to lessen I/O contention.
Cloud Computing — “Cloud computing refers to applications and services that run on a distributed network using virtualized resources and accessed by common Internet protocols and networking standards. It is distinguished by the notion that resources are virtual and limitless, and that details of the physical systems on which software runs are abstracted from the user.” — Source: Cloud Computing Bible, Barrie Sosinsky (2011).
Cloud computing often entails an “as a service” business model that may entail one or more of the following:
• Archive as a Service (AaaS)
• Business Process as a Service (BPaaS)
• Failure as a Service (FaaS)
• Infrastructure as a Service (IaaS)
• IT as a Service (ITaaS)
• Platform as a Service (PaaS)
• Private File Tiering as a Service (PFTaaS)
• Software as a Service (SaaS)
• SharePoint as a Service (SPaaS)
• SPI refers to the Software, Platform and Infrastructure as a Service business model.
Cloud network types include the following:
• Community cloud (or community network cloud)
• Hybrid cloud (or hybrid network cloud)
• Private cloud (or private network cloud)
• Public cloud (or public network cloud)
• Virtual private cloud (or virtual private network cloud)
Cloud Enabler — a concept, product or solution that enables the deployment of cloud computing. Key cloud enablers include:
• Data discoverability
• Data mobility
• Data protection
• Dynamic provisioning
• Location independence
• Multitenancy to ensure secure privacy
• Virtualization
Cloud Fundamental — A core requirement to the deployment of cloud computing. Cloud fundamentals include:
• Self service
• Pay per use
• Dynamic scale up and scale down
Cloud Security Alliance — A standards organization active in cloud computing.
Cloud Security Alliance GRC Stack — The Cloud Security Alliance GRC Stack provides a toolkit for enterprises, cloud providers, security solution providers, IT auditors and other key stakeholders to instrument and assess both private and public clouds against industry established best practices, standards and critical compliance requirements.
CLPR — Cache Logical Partition.
Cluster — A collection of computers that are interconnected (typically at high-speeds) for the purpose of improving reliability, availability, serviceability or performance (via load balancing). Often, clustered computers have access to a common pool of storage and run special software to coordinate the component computers' activities.
CM ― (1) Cache Memory, Cache Memory Module. Intermediate buffer between the channels and drives. It has a maximum of 64GB (32GB x 2 areas) of capacity. It is available and controlled as 2 areas of cache (cache A and
cache B). It is fully battery-backed (48 hours). Corporate governance — Organizational
(2) Content Management. compliance with government-mandated
CM DIR — Cache Memory Directory. regulations.

CME — Communications Media and CP — Central Processor (also called Processing


Unit or PU).
Entertainment.
CPC — Central Processor Complex.
CM-HSN — Control Memory Hierarchical Star
Network. CPM — Cache Partition Manager. Allows for
partitioning of the cache and assigns a
CM PATH ― Cache Memory Access Path. Access
partition to a LU; this enables tuning of the
Path from the processors of CHA, DKA PCB
system’s performance.
to Cache Memory.
CPOE — Computerized Physician Order Entry
CM PK — Cache Memory Package. (Provider Ordered Entry).
CM/SM — Cache Memory/Shared Memory. CPS — Cache Port Slave.
CMA — Cache Memory Adapter. CPU — Central Processing Unit.
CMD — Command. CRM — Customer Relationship Management.
CMG — Cache Memory Group. CSA – Cloud Security Alliance.
CNAME — Canonical NAME. CSS — Channel Subsystem.
CNS — Cluster Name Space or Clustered Name CS&S — Customer Service and Support.
Space. CSTOR — Central Storage or Processor Main
CNT — Cumulative network throughput. Memory.
CoD — Capacity on Demand. C-Suite — The C-suite is considered the most
important and influential group of
Community Network Cloud — Infrastructure individuals at a company. Referred to as
shared between several organizations or “the C-Suite within a Healthcare provider.”
groups with common concerns.
CSV — Comma Separated Value or Cluster Shared
Concatenation — A logical joining of 2 series of Volume.
data, usually represented by the symbol “|”.
In data communications, 2 or more data are CSVP — Customer-specific Value Proposition.
often concatenated to provide a unique CSW ― Cache Switch PCB. The cache switch
name or reference (such as, S_ID | X_ID). connects the channel adapter or disk adapter
Volume managers concatenate disk address to the cache. Each of them is connected to the
spaces to present a single larger address cache by the Cache Memory Hierarchical
space. Star Net (C-HSN) method. Each cluster is
Connectivity technology — A program or device's provided with the 2 CSWs, and each CSW
ability to link with other programs and can connect 4 caches. The CSW switches any
devices. Connectivity technology allows of the cache paths to which the channel
programs on a given computer to run adapter or disk adapter is to be connected
routines or access objects on another remote through arbitration.
computer. CTG — Consistency Group.
Controller — A device that controls the transfer of CTL — Controller module.
data from a computer to a peripheral device
CTN — Coordinated Timing Network.
(including a storage system) and vice versa.
CU — Control Unit. Refers to a storage subsystem.
Controller-based virtualization — Driven by the
The hexadecimal number to which 256
physical controller at the hardware
microcode level versus at the application LDEVs may be assigned.
software layer and integrates into the CUDG — Control Unit Diagnostics. Internal
infrastructure to allow virtualization across system tests.
heterogeneous storage and third party CUoD — Capacity Upgrade on Demand.
products.
CV — Custom Volume.



CVS ― Customizable Volume Size. Software used context, data migration is the same as
to create custom volume sizes. Marketed Hierarchical Storage Management (HSM).
under the name Virtual LVI (VLVI) and Data Pipe or Data Stream — The connection set up
Virtual LUN (VLUN). between the MediaAgent, source or
CWDM — Course Wavelength Division destination server is called a Data Pipe or
Multiplexing. more commonly a Data Stream.
CXRC — Coupled z/OS Global Mirror. Data Pool — A volume containing differential
-back to top- data only.
—D— Data Protection Directive — A major compliance
and privacy protection initiative within the
DA — Device Adapter.
European Union (EU) that applies to cloud
DACL — Discretionary access control list (ACL). computing. Includes the Safe Harbor
The part of a security descriptor that stores Agreement.
access rights for users and groups.
Data Stream — CommVault’s patented high
DAD — Device Address Domain. Indicates a site performance data mover used to move data
of the same device number automation back and forth between a data source and a
support function. If several hosts on the MediaAgent or between 2 MediaAgents.
same site have the same device number
Data Striping — Disk array data mapping
system, they have the same name.
technique in which fixed-length sequences of
DAP — Data Access Path. Also known as Zero virtual disk data addresses are mapped to
Copy Failover (ZCF). sequences of member disk addresses in a
DAS — Direct Attached Storage. regular rotating pattern.
DASD — Direct Access Storage Device. Data Transfer Rate (DTR) — The speed at which
data can be transferred. Measured in
Data block — A fixed-size unit of data that is
kilobytes per second for a CD-ROM drive, in
transferred together. For example, the
bits per second for a modem, and in
X-modem protocol transfers blocks of 128
megabytes per second for a hard drive. Also,
bytes. In general, the larger the block size,
often called data rate.
the faster the data transfer rate.
DBL — Drive box.
Data Duplication — Software duplicates data, as
in remote copy or PiT snapshots. Maintains 2 DBMS — Data Base Management System.
copies of data. DBX — Drive box.
Data Integrity — Assurance that information will DCA ― Data Cache Adapter.
be protected from modification and
DCTL — Direct coupled transistor logic.
corruption.
DDL — Database Definition Language.
Data Lifecycle Management — An approach to
information and storage management. The DDM — Disk Drive Module.
policies, processes, practices, services and DDNS — Dynamic DNS.
tools used to align the business value of data DDR3 — Double data rate 3.
with the most appropriate and cost-effective
storage infrastructure from the time data is DE — Data Exchange Software.
created through its final disposition. Data is Device Management — Processes that configure
aligned with business requirements through and manage storage systems.
management policies and service levels DFS — Microsoft Distributed File System.
associated with performance, availability,
recoverability, cost, and what ever DFSMS — Data Facility Storage Management
parameters the organization defines as Subsystem.
critical to its operations. DFSM SDM — Data Facility Storage Management
Data Migration — The process of moving data Subsystem System Data Mover.
from 1 storage device to another. In this



DFSMSdfp — Data Facility Storage Management 8 LUs; a large one, with hundreds of disk
Subsystem Data Facility Product. drives, can support thousands.
DFSMSdss — Data Facility Storage Management DKA ― Disk Adapter. Also called an array control
Subsystem Data Set Services. processor (ACP). It provides the control
DFSMShsm — Data Facility Storage Management functions for data transfer between drives
Subsystem Hierarchical Storage Manager. and cache. The DKA contains DRR (Data
Recover and Reconstruct), a parity generator
DFSMSrmm — Data Facility Storage Management circuit. Replaced by DKB in some cases.
Subsystem Removable Media Manager.
DKB — Disk Board. Updated DKA for Hitachi
DFSMStvs — Data Facility Storage Management Unified Storage VM and additional
Subsystem Transactional VSAM Services. enterprise components.
DFW — DASD Fast Write. DKC ― Disk Controller Unit. In a multi-frame
DICOM — Digital Imaging and Communications configuration, the frame that contains the
in Medicine. front end (control and memory
DIMM — Dual In-line Memory Module. components).
Direct Access Storage Device (DASD) — A type of DKCMN ― Disk Controller Monitor. Monitors
storage device, in which bits of data are temperature and power status throughout
stored at precise locations, enabling the the machine.
computer to retrieve information directly DKF ― Fibre disk adapter. Another term for a
without having to scan a series of records. DKA.
Direct Attached Storage (DAS) — Storage that is DKU — Disk Array Frame or Disk Unit. In a
directly attached to the application or file multi-frame configuration, a frame that
server. No other device on the network can contains hard disk units (HDUs).
access the stored data. DKUPS — Disk Unit Power Supply.
Director class switches — Larger switches often DLIBs — Distribution Libraries.
used as the core of large switched fabrics.
DKUP — Disk Unit Power Supply.
Disaster Recovery Plan (DRP) — A plan that
describes how an organization will deal with DLM — Data Lifecycle Management.
potential disasters. It may include the DMA — Direct Memory Access.
precautions taken to either maintain or DM-LU — Differential Management Logical Unit.
quickly resume mission-critical functions. DM-LU is used for saving management
Sometimes also referred to as a Business information of the copy functions in the
Continuity Plan. cache.
Disk Administrator — An administrative tool that DMP — Disk Master Program.
displays the actual LU storage configuration.
DMT — Dynamic Mapping Table.
Disk Array — A linked group of 1 or more
physical independent hard disk drives DMTF — Distributed Management Task Force. A
generally used to replace larger, single disk standards organization active in cloud
drive systems. The most common disk computing.
arrays are in daisy chain configuration or DNS — Domain Name System.
implement RAID (Redundant Array of DOC — Deal Operations Center.
Independent Disks) technology.
A disk array may contain several disk drive Domain — A number of related storage array
trays, and is structured to improve speed groups.
and increase protection against loss of data. DOO — Degraded Operations Objective.
Disk arrays organize their data storage into DP — Dynamic Provisioning (pool).
Logical Units (LUs), which appear as linear
DP-VOL — Dynamic Provisioning Virtual Volume.
block paces to their clients. A small disk
array, with a few disks, might support up to DPL — (1) (Dynamic) Data Protection Level or (2)
Denied Persons List.



DR — Disaster Recovery. EHR — Electronic Health Record.
DRAC — Dell Remote Access Controller. EIG — Enterprise Information Governance.
DRAM — Dynamic random access memory. EMIF — ESCON Multiple Image Facility.
DRP — Disaster Recovery Plan. EMPI — Electronic Master Patient Identifier. Also
DRR — Data Recover and Reconstruct. Data Parity known as MPI.
Generator chip on DKA. Emulation — In the context of Hitachi Data
DRV — Dynamic Reallocation Volume. Systems enterprise storage, emulation is the
logical partitioning of an Array Group into
DSB — Dynamic Super Block. logical devices.
DSF — Device Support Facility. EMR — Electronic Medical Record.
DSF INIT — Device Support Facility Initialization
ENC — Enclosure or Enclosure Controller. The
(for DASD).
units that connect the controllers with the
DSP — Disk Slave Program. Fibre Channel disks. They also allow for
DT — Disaster tolerance. online extending a system by adding RKAs.
DTA —Data adapter and path to cache-switches. ENISA – European Network and Information
Security Agency.
DTR — Data Transfer Rate.
EOF — End of Field.
DVE — Dynamic Volume Expansion.
EOL — End of Life.
DW — Duplex Write.
EPO — Emergency Power Off.
DWDM — Dense Wavelength Division
Multiplexing. EREP — Error Reporting and Printing.

DWL — Duplex Write Line or Dynamic ERP — Enterprise Resource Planning.


Workspace Linking. ESA — Enterprise Systems Architecture.
-back to top- ESB — Enterprise Service Bus.

—E— ESC — Error Source Code.


ESD — Enterprise Systems Division (of Hitachi).
EAL — Evaluation Assurance Level (EAL1
through EAL7). The EAL of an IT product or ESCD — ESCON Director.
system is a numerical security grade ESCON ― Enterprise Systems Connection. An
assigned following the completion of a input/output (I/O) interface for mainframe
Common Criteria security evaluation, an computer connections to storage devices
international standard in effect since 1999. developed by IBM.
EAV — Extended Address Volume. ESD — Enterprise Systems Division.
EB — Exabyte. ESDS — Entry Sequence Data Set.
EC — Enterprise Class (in contrast with BC, ESS — Enterprise Storage Server.
Business Class). ESW — Express Switch or E Switch. Also referred
ECC — Error Checking and Correction. to as the Grid Switch (GSW).
ECC.DDR SDRAM — Error Correction Code Ethernet — A local area network (LAN)
Double Data Rate Synchronous Dynamic architecture that supports clients and servers
RAM Memory. and uses twisted pair cables for connectivity.
ECM — Extended Control Memory. ETR — External Time Reference (device).
ECN — Engineering Change Notice. EVS — Enterprise Virtual Server.
E-COPY — Serverless or LAN free backup. Exabyte (EB) — A measurement of data or data
storage. 1EB = 1,024PB.
EFI — Extensible Firmware Interface. EFI is a
specification that defines a software interface EXCP — Execute Channel Program.
between an operating system and platform ExSA — Extended Serial Adapter.
firmware. EFI runs on top of BIOS when a -back to top-
LPAR is activated.



—F— achieved by including redundant instances
of components whose failure would make
FaaS — Failure as a Service. A proposed business the system inoperable, coupled with facilities
model for cloud computing in which large- that allow the redundant components to
scale, online failure drills are provided as a assume the function of failed ones.
service in order to test real cloud
deployments. Concept developed by the FAIS — Fabric Application Interface Standard.
College of Engineering at the University of FAL — File Access Library.
California, Berkeley in 2011. FAT — File Allocation Table.
Fabric — The hardware that connects Fault Tolerant — Describes a computer system or
workstations and servers to storage devices component designed so that, in the event of a
in a SAN is referred to as a "fabric." The SAN component failure, a backup component or
fabric enables any-server-to-any-storage procedure can immediately take its place with
device connectivity through the use of Fibre no loss of service. Fault tolerance can be
Channel switching technology. provided with software, embedded in
Failback — The restoration of a failed system hardware or provided by hybrid combination.
share of a load to a replacement component. FBA — Fixed-block Architecture. Physical disk
For example, when a failed controller in a sector mapping.
redundant configuration is replaced, the FBA/CKD Conversion — The process of
devices that were originally controlled by converting open-system data in FBA format
the failed controller are usually failed back to mainframe data in CKD format.
to the replacement controller to restore the FBUS — Fast I/O Bus.
I/O balance, and to restore failure tolerance.
FC ― Fibre Channel or Field-Change (microcode
Similarly, when a defective fan or power
update). A technology for transmitting data
supply is replaced, its load, previously borne
between computer devices; a set of
by a redundant component, can be failed
standards for a serial I/O bus capable of
back to the replacement part.
transferring data between 2 ports.
Failed over — A mode of operation for failure-
FC RKAJ — Fibre Channel Rack Additional.
tolerant systems in which a component has
Module system acronym refers to an
failed and its function has been assumed by
additional rack unit that houses additional
a redundant component. A system that
hard drives exceeding the capacity of the
protects against single failures operating in
core RK unit.
failed over mode is not failure tolerant, as
failure of the redundant component may FC-0 ― Lowest layer on Fibre Channel transport.
render the system unable to function. Some This layer represents the physical media.
systems (for example, clusters) are able to FC-1 ― This layer contains the 8b/10b encoding
tolerate more than 1 failure; these remain scheme.
failure tolerant until no redundant FC-2 ― This layer handles framing and protocol,
component is available to protect against frame format, sequence/exchange
further failures. management and ordered set usage.
Failover — A backup operation that automatically FC-3 ― This layer contains common services used
switches to a standby database server or by multiple N_Ports in a node.
network if the primary system fails, or is FC-4 ― This layer handles standards and profiles
temporarily shut down for servicing. Failover for mapping upper level protocols like SCSI
is an important fault tolerance function of an IP onto the Fibre Channel Protocol.
mission-critical systems that rely on constant FCA ― Fibre Channel Adapter. Fibre interface
accessibility. Also called path failover. card. Controls transmission of fibre packets.
Failure tolerance — The ability of a system to FC-AL — Fibre Channel Arbitrated Loop. A serial
continue to perform its function or at a data transfer architecture developed by a
reduced performance level, when 1 or more consortium of computer and mass storage
of its components has failed. Failure device manufacturers, and is now being
tolerance in disk subsystems is often standardized by ANSI. FC-AL was designed



for new mass storage devices and other physical link rates to make them up to 8
peripheral devices that require very high times as efficient as ESCON (Enterprise
bandwidth. Using optical fiber to connect System Connection), IBM's previous fiber
devices, FC-AL supports full-duplex data optic channel standard.
transfer rates of 100MB/sec. FC-AL is FIPP — Fair Information Practice Principles.
compatible with SCSI for high-performance Guidelines for the collection and use of
storage systems. personal information created by the United
FCC — Federal Communications Commission. States Federal Trade Commission (FTC).
FCIP — Fibre Channel over IP. A network storage FISMA — Federal Information Security
technology that combines the features of Management Act of 2002. A major
Fibre Channel and the Internet Protocol (IP) compliance and privacy protection law that
to connect distributed SANs over large applies to information systems and cloud
distances. FCIP is considered a tunneling computing. Enacted in the United States of
protocol, as it makes a transparent point-to- America in 2002.
point connection between geographically FLGFAN ― Front Logic Box Fan Assembly.
separated SANs over IP networks. FCIP
relies on TCP/IP services to establish FLOGIC Box ― Front Logic Box.
connectivity between remote SANs over FM — Flash Memory. Each microprocessor has
LANs, MANs, or WANs. An advantage of FM. FM is non-volatile memory that contains
FCIP is that it can use TCP/IP as the microcode.
transport while keeping Fibre Channel fabric FOP — Fibre Optic Processor or fibre open.
services intact.
FQDN — Fully Qualified Domain Name.
FCoE – Fibre Channel over Ethernet. An
encapsulation of Fibre Channel frames over FPC — Failure Parts Code or Fibre Channel
Ethernet networks. Protocol Chip.
FCP — Fibre Channel Protocol. FPGA — Field Programmable Gate Array.
FC-P2P — Fibre Channel Point-to-Point. Frames — An ordered vector of words that is the
FCSE — Flashcopy Space Efficiency. basic unit of data transmission in a Fibre
FC-SW — Fibre Channel Switched. Channel network.
FCU— File Conversion Utility. Front end — In client/server applications, the
FD — Floppy Disk or Floppy Drive. client part of the program is often called the
front end and the server part is called the
FDDI — Fiber Distributed Data Interface.
back end.
FDR — Fast Dump/Restore.
FRU — Field Replaceable Unit.
FE — Field Engineer.
FS — File System.
FED — (Channel) Front End Director.
FedRAMP – Federal Risk and Authorization FSA — File System Module-A.
Management Program. FSB — File System Module-B.
Fibre Channel — A serial data transfer FSI — Financial Services Industries.
architecture developed by a consortium of
FSM — File System Module.
computer and mass storage device
manufacturers and now being standardized FSW ― Fibre Channel Interface Switch PCB. A
by ANSI. The most prominent Fibre Channel board that provides the physical interface
standard is Fibre Channel Arbitrated Loop (cable connectors) between the ACP ports
(FC-AL). and the disks housed in a given disk drive.
FICON — Fiber Connectivity. A high-speed FTP ― File Transfer Protocol. A client-server
input/output (I/O) interface for mainframe protocol that allows a user on 1 computer to
computer connections to storage devices. As transfer files to and from another computer
part of IBM's S/390 server, FICON channels over a TCP/IP network.
increase I/O capacity through the FWD — Fast Write Differential.
combination of a new architecture and faster -back to top-



—G— only 1 H2F that can be added to the core RK
Floor Mounted unit. See also: RK, RKA, and
GA — General availability. H1F.
GARD — General Available Restricted HA — High Availability.
Distribution.
Hadoop — Apache Hadoop is an open-source
Gb — Gigabit. software framework for data storage and
GB — Gigabyte. large-scale processing of data-sets on
Gb/sec — Gigabit per second. clusters of hardware.
GB/sec — Gigabyte per second. HANA — High Performance Analytic Appliance,
a database appliance technology proprietary
GbE — Gigabit Ethernet.
to SAP.
Gbps — Gigabit per second.
HBA — Host Bus Adapter — An I/O adapter that
GBps — Gigabyte per second. sits between the host computer's bus and the
GBIC — Gigabit Interface Converter. Fibre Channel loop and manages the transfer
of information between the 2 channels. In
GCMI — Global Competitive and Marketing
order to minimize the impact on host
Intelligence (Hitachi).
processor performance, the host bus adapter
GDG — Generation Data Group. performs many low-level interface functions
GDPS — Geographically Dispersed Parallel automatically or with minimal processor
Sysplex. involvement.
GID — Group Identifier within the UNIX security HCA — Host Channel Adapter.
model. HCD — Hardware Configuration Definition.
gigE — Gigabit Ethernet. HD — Hard Disk.
GLM — Gigabyte Link Module. HDA — Head Disk Assembly.
Global Cache — Cache memory is used on demand HDD ― Hard Disk Drive. A spindle of hard disk
by multiple applications. Use changes platters that make up a hard drive, which is
dynamically, as required for READ a unit of physical storage within a
performance between hosts/applications/LUs. subsystem.
GPFS — General Parallel File System. HDDPWR — Hard Disk Drive Power.
GSC — Global Support Center. HDU ― Hard Disk Unit. A number of hard drives
(HDDs) grouped together within a
GSI — Global Systems Integrator.
subsystem.
GSS — Global Solution Services.
Head — See read/write head.
GSSD — Global Solutions Strategy and
Heterogeneous — The characteristic of containing
Development.
dissimilar elements. A common use of this
GSW — Grid Switch Adapter. Also known as E word in information technology is to
Switch (Express Switch). describe a product as able to contain or be
GUI — Graphical User Interface. part of a “heterogeneous network,"
consisting of different manufacturers'
GUID — Globally Unique Identifier.
products that can interoperate.
-back to top-
Heterogeneous networks are made possible by
—H— standards-conforming hardware and
H1F — Essentially the floor-mounted disk rack software interfaces used in common by
(also called desk side) equivalent of the RK. different products, thus allowing them to
(See also: RK, RKA, and H2F). communicate with each other. The Internet
itself is an example of a heterogeneous
H2F — Essentially the floor-mounted disk rack
network.
(also called desk side) add-on equivalent
similar to the RKA. There is a limitation of HiCAM — Hitachi Computer Products America.



HIPAA — Health Insurance Portability and infrastructure, operations and applications)
Accountability Act. in a coordinated fashion to assemble a
HIS — (1) High Speed Interconnect. (2) Hospital particular solution.” — Source: Gartner
Information System (clinical and financial). Research.
Hybrid Network Cloud — A composition of 2 or
HiStar — Multiple point-to-point data paths to
cache. more clouds (private, community or public).
Each cloud remains a unique entity but they
HL7 — Health Level 7. are bound together. A hybrid network cloud
HLQ — High-level Qualifier. includes an interconnection.
HLS — Healthcare and Life Sciences. Hypervisor — Also called a virtual machine
manager, a hypervisor is a hardware
HLU — Host Logical Unit.
virtualization technique that enables
H-LUN — Host Logical Unit Number. See LUN. multiple operating systems to run
HMC — Hardware Management Console. concurrently on the same computer.
Hypervisors are often installed on server
Homogeneous — Of the same or similar kind.
hardware then run the guest operating
Host — Also called a server. Basically a central systems that act as servers.
computer that processes end-user
Hypervisor can also refer to the interface
applications or requests.
that is provided by Infrastructure as a Service
Host LU — Host Logical Unit. See also HLU. (IaaS) in cloud computing.
Host Storage Domains — Allows host pooling at Leading hypervisors include VMware
the LUN level and the priority access feature vSphere Hypervisor™ (ESXi), Microsoft®
lets administrator set service levels for Hyper-V and the Xen® hypervisor.
applications. -back to top-
HP — (1) Hewlett-Packard Company or (2) High
Performance.
HPC — High Performance Computing. —I—
HSA — Hardware System Area. I/F — Interface.
HSG — Host Security Group. I/O — Input/Output. Term used to describe any
HSM — Hierarchical Storage Management (see program, operation, or device that transfers
Data Migrator). data to or from a computer and to or from a
peripheral device.
HSN — Hierarchical Star Network.
IaaS —Infrastructure as a Service. A cloud
HSSDC — High Speed Serial Data Connector.
computing business model — delivering
HTTP — Hyper Text Transfer Protocol. computer infrastructure, typically a platform
HTTPS — Hyper Text Transfer Protocol Secure. virtualization environment, as a service,
Hub — A common connection point for devices in along with raw (block) storage and
a network. Hubs are commonly used to networking. Rather than purchasing servers,
connect segments of a LAN. A hub contains software, data center space or network
multiple ports. When a packet arrives at 1 equipment, clients buy those resources as a
port, it is copied to the other ports so that all fully outsourced service. Providers typically
segments of the LAN can see all packets. A bill such services on a utility computing
switching hub actually reads the destination basis; the amount of resources consumed
address of each packet and then forwards (and therefore the cost) will typically reflect
the packet to the correct port. Device to the level of activity.
which nodes on a multi-point bus or loop are IDE — Integrated Drive Electronics Advanced
physically connected. Technology. A standard designed to connect
Hybrid Cloud — “Hybrid cloud computing refers hard and removable disk drives.
to the combination of external public cloud IDN — Integrated Delivery Network.
computing services and internal resources
iFCP — Internet Fibre Channel Protocol.
(either a private cloud or traditional



Index Cache — Provides quick access to indexed IOC — I/O controller.
data on the media during a browse\restore IOCDS — I/O Control Data Set.
operation.
IODF — I/O Definition file.
IBR — Incremental Block-level Replication or
IOPH — I/O per hour.
Intelligent Block Replication.
IOPS – I/O per second.
ICB — Integrated Cluster Bus.
IOS — I/O Supervisor.
ICF — Integrated Coupling Facility.
IOSQ — Input/Output Subsystem Queue.
ID — Identifier.
IP — Internet Protocol. The communications
IDR — Incremental Data Replication. protocol that routes traffic across the
iFCP — Internet Fibre Channel Protocol. Allows Internet.
an organization to extend Fibre Channel IPv6 — Internet Protocol Version 6. The latest
storage networks over the Internet by using revision of the Internet Protocol (IP).
TCP/IP. TCP is responsible for managing IPL — Initial Program Load.
congestion control as well as error detection
IPSEC — IP security.
and recovery services.
IRR — Internal Rate of Return.
iFCP allows an organization to create an IP
SAN fabric that minimizes the Fibre Channel ISC — Initial shipping condition or Inter-System
fabric component and maximizes use of the Communication.
company's TCP/IP infrastructure. iSCSI — Internet SCSI. Pronounced eye skuzzy.
An IP-based standard for linking data
IFL — Integrated Facility for LINUX.
storage devices over a network and
IHE — Integrating the Healthcare Enterprise. transferring data by carrying SCSI
IID — Initiator ID. commands over IP networks.
IIS — Internet Information Server. ISE — Integrated Scripting Environment.
ILM — Information Life Cycle Management. iSER — iSCSI Extensions for RDMA.
ILO — (Hewlett-Packard) Integrated Lights-Out. ISL — Inter-Switch Link.

IML — Initial Microprogram Load. iSNS — Internet Storage Name Service.


ISOE — iSCSI Offload Engine.
IMS — Information Management System.
ISP — Internet service provider.
In-Band Virtualization — Refers to the location of
the storage network path, between the ISPF — Interactive System Productivity Facility.
application host servers in the storage ISPF/PDF — Interactive System Productivity
systems. Provides both control and data Facility/Program Development Facility.
along the same connection path. Also called ISV — Independent Software Vendor.
symmetric virtualization. ITaaS — IT as a Service. A cloud computing
INI — Initiator. business model. This general model is an
Interface —The physical and logical arrangement umbrella model that entails the SPI business
supporting the attachment of any device to a model (SaaS, PaaS and IaaS — Software,
connector or to another device. Platform and Infrastructure as a Service).
Internal Bus — Another name for an internal data ITSC — Informaton and Telecommunications
bus. Also, an expansion bus is often referred Systems Companies.
to as an internal bus. -back to top-

Internal Data Bus — A bus that operates only —J—


within the internal circuitry of the CPU,
Java — A widely accepted, open systems
communicating among the internal caches of
programming language. Hitachi’s enterprise
memory that are part of the CPU chip’s
software products are all accessed using Java
design. This bus is typically rather quick and
applications. This enables storage
is independent of the rest of the computer’s
administrators to access the Hitachi
operations.



enterprise software products from any PC or (all or portions of 1 or more disks) that are
workstation that runs a supported thin-client combined so that the subsystem sees and
internet browser application and that has treats them as a single area of data storage.
TCP/IP network access to the computer on Also called a volume. An LDEV has a
which the software product runs. specific and unique address within a
Java VM — Java Virtual Machine. subsystem. LDEVs become LUNs to an
open-systems host.
JBOD — Just a Bunch of Disks.
JCL — Job Control Language. LDKC — Logical Disk Controller or Logical Disk
Controller Manual.
JMP —Jumper. Option setting method.
LDM — Logical Disk Manager.
JMS — Java Message Service.
LDS — Linear Data Set.
JNL — Journal.
JNLG — Journal Group. LED — Light Emitting Diode.

JRE —Java Runtime Environment. LFF — Large Form Factor.


JVM — Java Virtual Machine. LIC — Licensed Internal Code.
J-VOL — Journal Volume. LIS — Laboratory Information Systems.
-back to top- LLQ — Lowest Level Qualifier.

—K— LM — Local Memory.

KSDS — Key Sequence Data Set. LMODs — Load Modules.

kVA— Kilovolt Ampere. LNKLST — Link List.

KVM — Kernel-based Virtual Machine or Load balancing — The process of distributing


Keyboard-Video Display-Mouse. processing and communications activity
evenly across a computer network so that no
kW — Kilowatt. single device is overwhelmed. Load
-back to top- balancing is especially important for
networks where it is difficult to predict the
—L— number of requests that will be issued to a
LACP — Link Aggregation Control Protocol. server. If 1 server starts to be swamped,
LAG — Link Aggregation Groups. requests are forwarded to another server
with more capacity. Load balancing can also
LAN — Local Area Network. A communications
refer to the communications channels
network that serves clients within a
themselves.
geographical area, such as a building.
LOC — “Locations” section of the Maintenance
LBA — Logical block address. A 28-bit value that
Manual.
maps to a specific cylinder-head-sector
address on the disk. Logical DKC (LDKC) — Logical Disk Controller
Manual. An internal architecture extension
LC — Lucent connector. Fibre Channel connector
to the Control Unit addressing scheme that
that is smaller than a simplex connector (SC).
allows more LDEVs to be identified within 1
LCDG — Link Processor Control Diagnostics. Hitachi enterprise storage system.
LCM — Link Control Module. Longitudinal record —Patient information from
LCP — Link Control Processor. Controls the birth to death.
optical links. LCP is located in the LCM. LPAR — Logical Partition (mode).
LCSS — Logical Channel Subsystems. LR — Local Router.
LCU — Logical Control Unit. LRECL — Logical Record Length.
LD — Logical Device. LRP — Local Router Processor.
LDAP — Lightweight Directory Access Protocol. LRU — Least Recently Used.
LDEV ― Logical Device or Logical Device
(number). A set of physical disk partitions



LSS — Logical Storage Subsystem (equivalent to Control Unit. The local CU of a remote copy
LCU). pair. Main or Master Control Unit.
LU — Logical Unit. Mapping number of an LDEV. MCU — Master Control Unit.
LUN ― Logical Unit Number. 1 or more LDEVs. MDPL — Metadata Data Protection Level.
Used only for open systems. MediaAgent — The workhorse for all data
LUSE ― Logical Unit Size Expansion. Feature used movement. MediaAgent facilitates the
to create virtual LUs that are up to 36 times transfer of data between the data source, the
larger than the standard OPEN-x LUs. client computer, and the destination storage
media.
LVDS — Low Voltage Differential Signal
Metadata — In database management systems,
LVI — Logical Volume Image. Identifies a similar data files are the files that store the database
concept (as LUN) in the mainframe information; whereas other files, such as
environment. index files and data dictionaries, store
LVM — Logical Volume Manager. administrative information, known as
-back to top- metadata.
MFC — Main Failure Code.
—M— MG — (1) Module Group. 2 (DIMM) cache
MAC — Media Access Control. A MAC address is memory modules that work together. (2)
a unique identifier attached to most forms of Migration Group. A group of volumes to be
networking equipment. migrated together.
MAID — Massive array of disks. MGC — (3-Site) Metro/Global Mirror.
MAN — Metropolitan Area Network. A MIB — Management Information Base. A database
communications network that generally of objects that can be monitored by a
covers a city or suburb. MAN is very similar network management system. Both SNMP
to a LAN except it spans across a and RMON use standardized MIB formats
geographical region such as a state. Instead that allow any SNMP and RMON tools to
of the workstations in a LAN, the monitor any device defined by a MIB.
workstations in a MAN could depict Microcode — The lowest-level instructions that
different cities in a state. For example, the directly control a microprocessor. A single
state of Texas could have: Dallas, Austin, San machine-language instruction typically
Antonio. The city could be a separate LAN translates into several microcode
and all the cities connected together via a instructions.
switch. This topology would indicate a
Fortan Pascal C
MAN.
High-level Language
MAPI — Management Application Programming
Assembly Language
Interface.
Machine Language
Mapping — Conversion between 2 data
Hardware
addressing spaces. For example, mapping
refers to the conversion between physical
Microprogram — See Microcode.
disk block addresses and the block addresses
of the virtual disks presented to operating MIF — Multiple Image Facility.
environments by control software. Mirror Cache OFF — Increases cache efficiency
Mb — Megabit. over cache data redundancy.
MB — Megabyte. M-JNL — Primary journal volumes.
MBA — Memory Bus Adaptor. MM — Maintenance Manual.
MBUS — Multi-CPU Bus. MMC — Microsoft Management Console.
MC — Multi Cabinet. Mode — The state or setting of a program or
device. The term mode implies a choice,
MCU — Main Control Unit, Master Control Unit,
which is that you can change the setting and
Main Disk Control Unit or Master Disk
put the system in a different mode.



MP — Microprocessor. NFS protocol — Network File System is a protocol
MPA — Microprocessor adapter. that allows a computer to access files over a
network as easily as if they were on its local
MPB – Microprocessor board.
disks.
MPI — (Electronic) Master Patient Identifier. Also
NIM — Network Interface Module.
known as EMPI.
MPIO — Multipath I/O. NIS — Network Information Service (originally
called the Yellow Pages or YP).
MP PK – MP Package.
NIST — National Institute of Standards and
MPU — Microprocessor Unit.
Technology. A standards organization active
MQE — Metadata Query Engine (Hitachi). in cloud computing.
MS/SG — Microsoft Service Guard. NLS — Native Language Support.
MSCS — Microsoft Cluster Server. Node ― An addressable entity connected to an
MSS — (1) Multiple Subchannel Set. (2) Managed I/O bus or network, used primarily to refer
Security Services. to computers, storage devices and storage
subsystems. The component of a node that
MTBF — Mean Time Between Failure.
connects to the bus or network is a port.
MTS — Multitiered Storage.
Node name ― A Name_Identifier associated with
Multitenancy — In cloud computing, a node.
multitenancy is a secure way to partition the
infrastructure (application, storage pool and NPV — Net Present Value.
network) so multiple customers share a NRO — Network Recovery Objective.
single resource pool. Multitenancy is one of NTP — Network Time Protocol.
the key ways cloud can achieve massive
economy of scale. NVS — Non Volatile Storage.
-back to top-
M-VOL — Main Volume.
MVS — Multiple Virtual Storage. —O—
-back to top- OASIS – Organization for the Advancement of
Structured Information Standards.
—N—
OCC — Open Cloud Consortium. A standards
NAS ― Network Attached Storage. A disk array
organization active in cloud computing.
connected to a controller that gives access to
a LAN Transport. It handles data at the file OEM — Original Equipment Manufacturer.
level. OFC — Open Fibre Control.
NAT — Network Address Translation. OGF — Open Grid Forum. A standards
NDMP — Network Data Management Protocol. A organization active in cloud computing.
protocol meant to transport data between OID — Object identifier.
NAS devices.
OLA — Operating Level Agreements.
NetBIOS — Network Basic Input/Output System.
OLTP — On-Line Transaction Processing.
Network — A computer system that allows
OLTT — Open-loop throughput throttling.
sharing of resources, such as files and
peripheral hardware devices. OMG — Object Management Group. A standards
organization active in cloud computing.
Network Cloud — A communications network.
The word "cloud" by itself may refer to any On/Off CoD — On/Off Capacity on Demand.
local area network (LAN) or wide area ONODE — Object node.
network (WAN). The terms “computing"
OpenStack – An open source project to provide
and "cloud computing" refer to services
orchestration and provisioning for cloud
offered on the public Internet or to a private
environments based on a variety of different
network that uses the same protocols as a
hypervisors.
standard network. See also cloud computing.



OPEX — Operational Expenditure. This is an multiple partitions. Then customize the
operating expense, operating expenditure, partition to match the I/O characteristics of
operational expense, or operational assigned LUs.
expenditure, which is an ongoing cost for PAT — Port Address Translation.
running a product, business, or system. Its
counterpart is a capital expenditure (CAPEX). PATA — Parallel ATA.

ORM — Online Read Margin. Path — Also referred to as a transmission channel,


the path between 2 nodes of a network that a
OS — Operating System. data communication follows. The term can
Out-of-Band Virtualization — Refers to systems refer to the physical cabling that connects the
where the controller is located outside of the nodes on a network, the signal that is
SAN data path. Separates control and data communicated over the pathway or a sub-
on different connection paths. Also called channel in a carrier frequency.
asymmetric virtualization. Path failover — See Failover.
-back to top-
PAV — Parallel Access Volumes.
—P— PAWS — Protect Against Wrapped Sequences.
P-2-P — Point to Point. Also P-P. PB — Petabyte.
PaaS — Platform as a Service. A cloud computing PBC — Port Bypass Circuit.
business model — delivering a computing PCB — Printed Circuit Board.
platform and solution stack as a service. PCHIDS — Physical Channel Path Identifiers.
PaaS offerings facilitate deployment of
PCI — Power Control Interface.
applications without the cost and complexity
of buying and managing the underlying PCI CON — Power Control Interface Connector
hardware, software and provisioning Board.
hosting capabilities. PaaS provides all of the PCI DSS — Payment Card Industry Data Security
facilities required to support the complete Standard.
life cycle of building and delivering web PCIe — Peripheral Component Interconnect
applications and services entirely from the Express.
Internet.
PD — Product Detail.
PACS – Picture Archiving and Communication PDEV— Physical Device.
System.
PDM — Policy based Data Migration or Primary
PAN — Personal Area Network. A Data Migrator.
communications network that transmit data
PDS — Partitioned Data Set.
wirelessly over a short distance. Bluetooth
and Wi-Fi Direct are examples of personal PDSE — Partitioned Data Set Extended.
area networks. Performance — Speed of access or the delivery of
PAP — Password Authentication Protocol. information.
Petabyte (PB) — A measurement of capacity — the
Parity — A technique of checking whether data
amount of data that a drive or storage
has been lost or written over when it is
system can store after formatting. 1PB =
moved from one place in storage to another
1,024TB.
or when it is transmitted between
computers. PFA — Predictive Failure Analysis.
Parity Group — Also called an array group. This is PFTaaS — Private File Tiering as a Service. A cloud
a group of hard disk drives (HDDs) that computing business model.
form the basic unit of storage in a subsystem. PGP — Pretty Good Privacy. A data encryption
All HDDs in a parity group must have the and decryption computer program used for
same physical capacity. increasing the security of email
Partitioned cache memory — Separate workloads communications.
in a “storage consolidated” system by PGR — Persistent Group Reserve.
dividing cache into individually managed



PI — Product Interval.
PIR — Performance Information Report.
PiT — Point-in-Time.
PK — Package (see PCB).
PL — Platter. The circular disk on which the magnetic data is stored. Also called motherboard or backplane.
PM — Package Memory.
POC — Proof of concept.
Port — In TCP/IP and UDP networks, an endpoint to a logical connection. The port number identifies what type of port it is. For example, port 80 is used for HTTP traffic.
POSIX — Portable Operating System Interface for UNIX. A set of standards that defines an application programming interface (API) for software designed to run under heterogeneous operating systems.
PP — Program product.
P-P — Point-to-point; also P2P.
PPRC — Peer-to-Peer Remote Copy.
Private Cloud — A type of cloud computing defined by shared capabilities within a single company; modest economies of scale and less automation. Infrastructure and data reside inside the company’s data center behind a firewall. Comprised of licensed software tools rather than on-going services. Example: An organization implements its own virtual, scalable cloud and business units are charged on a per use basis.
Private Network Cloud — A type of cloud network with 3 characteristics: (1) Operated solely for a single organization, (2) Managed internally or by a third party, (3) Hosted internally or externally.
PR/SM — Processor Resource/System Manager.
Protocol — A convention or standard that enables the communication between 2 computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics and synchronization of communication. Protocols may be implemented by hardware, software or a combination of the 2. At the lowest level, a protocol defines the behavior of a hardware connection.
Provisioning — The process of allocating storage resources and assigning storage capacity for an application, usually in the form of server disk drive space, in order to optimize the performance of a storage area network (SAN). Traditionally, this has been done by the SAN administrator, and it can be a tedious process. In recent years, automated storage provisioning (also called auto-provisioning) programs have become available. These programs can reduce the time required for the storage provisioning process, and can free the administrator from the often distasteful task of performing this chore manually.
PS — Power Supply.
PSA — Partition Storage Administrator.
PSSC — Perl Silicon Server Control.
PSU — Power Supply Unit.
PTAM — Pickup Truck Access Method.
PTF — Program Temporary Fixes.
PTR — Pointer.
PU — Processing Unit.
Public Cloud — Resources, such as applications and storage, available to the general public over the Internet.
P-VOL — Primary Volume.
-back to top-
—Q—
QD — Quorum Device.
QDepth — The number of I/O operations that can run in parallel on a SAN device; also WWN QDepth.
QoS — Quality of Service. In the field of computer networking, the traffic engineering term quality of service (QoS) refers to resource reservation control mechanisms rather than the achieved service quality. Quality of service is the ability to provide different priority to different applications, users, or data flows, or to guarantee a certain level of performance to a data flow.
QSAM — Queued Sequential Access Method.
-back to top-
—R—
RACF — Resource Access Control Facility.
RAID — Redundant Array of Independent Disks, or Redundant Array of Inexpensive Disks. A group of disks that look like a single volume to the server. RAID improves performance by pulling a single stripe of data from multiple disks, and improves fault-tolerance either through mirroring or parity checking, and it is a component of a customer’s SLA. (A brief striping sketch follows the ROM entry below.)



RAID-0 — Striped array with no parity.
RAID-1 — Mirrored array and duplexing.
RAID-3 — Striped array with typically non-rotating parity, optimized for long, single-threaded transfers.
RAID-4 — Striped array with typically non-rotating parity, optimized for short, multi-threaded transfers.
RAID-5 — Striped array with typically rotating parity, optimized for short, multithreaded transfers.
RAID-6 — Similar to RAID-5, but with dual rotating parity physical disks, tolerating 2 physical disk failures.
RAIN — Redundant (or Reliable) Array of Independent Nodes (architecture).
RAM — Random Access Memory.
RAM DISK — A LUN held entirely in the cache area.
RAS — Reliability, Availability, and Serviceability or Row Address Strobe.
RBAC — Role-Based Access Control.
RC — (1) Reference Code or (2) Remote Control.
RCHA — RAID Channel Adapter.
RCP — Remote Control Processor.
RCU — Remote Control Unit or Remote Disk Control Unit.
RCUT — RCU Target.
RD/WR — Read/Write.
RDM — Raw Disk Mapped.
RDMA — Remote Direct Memory Access.
RDP — Remote Desktop Protocol.
RDW — Record Descriptor Word.
Read/Write Head — Reads and writes data to the platters; typically there is 1 head per platter side, and each head is attached to a single actuator shaft.
RECFM — Record Format.
Redundant — Describes the computer or network system components, such as fans, hard disk drives, servers, operating systems, switches, and telecommunication links, that are installed to back up primary resources in case they fail. A well-known example of a redundant system is the redundant array of independent disks (RAID). Redundancy contributes to the fault tolerance of a system.
Redundancy — Backing up a component to help ensure high availability.
Reliability — (1) Level of assurance that data will not be lost or degraded over time. (2) An attribute of any computer component (software, hardware or a network) that consistently performs according to its specifications.
REST — Representational State Transfer.
REXX — Restructured extended executor.
RID — Relative Identifier that uniquely identifies a user or group within a Microsoft Windows domain.
RIS — Radiology Information System.
RISC — Reduced Instruction Set Computer.
RIU — Radiology Imaging Unit.
R-JNL — Secondary journal volumes.
RK — Rack additional.
RKAJAT — Rack Additional SATA disk tray.
RKAK — Expansion unit.
RLGFAN — Rear Logic Box Fan Assembly.
RLOGIC BOX — Rear Logic Box.
RMF — Resource Measurement Facility.
RMI — Remote Method Invocation. A way that a programmer, using the Java programming language and development environment, can write object-oriented programming in which objects on different computers can interact in a distributed network. RMI is the Java version of what is generally known as an RPC (remote procedure call), but with the ability to pass 1 or more objects along with the request.
RndRD — Random read.
ROA — Return on Asset.
RoHS — Restriction of Hazardous Substances (in Electrical and Electronic Equipment).
ROI — Return on Investment.
ROM — Read Only Memory.
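The RAID-0 through RAID-6 entries above are all variations on striping: distributing blocks across the drives of a parity group, with or without parity. As a minimal, illustrative sketch (not from the course materials), the mapping from a logical block number to a member drive is simple arithmetic:

    # Minimal striping sketch: map a logical block number onto (drive, stripe row)
    # for a RAID-0 style layout across n_drives member disks.
    def locate_block(logical_block: int, n_drives: int) -> tuple[int, int]:
        """Return (drive index, stripe row) for a logical block in a striped array."""
        drive = logical_block % n_drives    # blocks rotate across the drives
        row = logical_block // n_drives     # each full pass starts a new stripe row
        return drive, row

    # Example: 8 logical blocks across a 4-drive stripe land on drives 0,1,2,3,0,1,2,3.
    print([locate_block(b, 4) for b in range(8)])

Parity levels such as RAID-5 additionally reserve one block per stripe row for parity and rotate its position from row to row; the placement arithmetic stays similar.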



Round robin mode — A load balancing technique which distributes data packets equally among the available paths. Round robin DNS is usually used for balancing the load of geographically distributed Web servers. It works on a rotating basis in that one server IP address is handed out, then moves to the back of the list; the next server IP address is handed out, and then it moves to the end of the list; and so on, depending on the number of servers being used. This works in a looping fashion. (A brief sketch follows the SD entry below.)
Router — A computer networking device that forwards data packets toward their destinations, through a process known as routing.
RPC — Remote procedure call.
RPO — Recovery Point Objective. The point in time that recovered data should match.
RPSFAN — Rear Power Supply Fan Assembly.
RRDS — Relative Record Data Set.
RS CON — RS232C/RS422 Interface Connector.
RSD — RAID Storage Division (of Hitachi).
R-SIM — Remote Service Information Message.
RSM — Real Storage Manager.
RTM — Recovery Termination Manager.
RTO — Recovery Time Objective. The length of time that can be tolerated between a disaster and recovery of data.
R-VOL — Remote Volume.
R/W — Read/Write.
-back to top-
—S—
SA — Storage Administrator.
SA z/OS — System Automation for z/OS.
SAA — Share Access Authentication. The process of restricting a user's rights to a file system object by combining the security descriptors from both the file system object itself and the share to which the user is connected.
SaaS — Software as a Service. A cloud computing business model. SaaS is a software delivery model in which software and its associated data are hosted centrally in a cloud and are typically accessed by users using a thin client, such as a web browser via the Internet. SaaS has become a common delivery model for most business applications, including accounting (CRM and ERP), invoicing (HRM), content management (CM) and service desk management, just to name the most common software that runs in the cloud. This is the fastest growing service in the cloud market today. SaaS performs best for relatively simple tasks in IT-constrained organizations.
SACK — Sequential Acknowledge.
SACL — System ACL. The part of a security descriptor that stores system auditing information.
SAIN — SAN-attached Array of Independent Nodes (architecture).
SAN — Storage Area Network. A network linking computing devices to disk or tape arrays and other devices over Fibre Channel. It handles data at the block level.
SAP — (1) System Assist Processor (for I/O processing), or (2) a German software company.
SAP HANA — High Performance Analytic Appliance, a database appliance technology proprietary to SAP.
SARD — System Assurance Registration Document.
SAS — Serial Attached SCSI.
SATA — Serial ATA. Serial Advanced Technology Attachment is a standard for connecting hard drives into computer systems. SATA is based on serial signaling technology, unlike IDE (Integrated Drive Electronics) hard drives that use parallel signaling.
SBM — Solutions Business Manager.
SBOD — Switched Bunch of Disks.
SBSC — Smart Business Storage Cloud.
SBX — Small Box (Small Form Factor).
SC — (1) Simplex connector. Fibre Channel connector that is larger than a Lucent connector (LC). (2) Single Cabinet.
SCM — Supply Chain Management.
SCP — Secure Copy.
SCSI — Small Computer Systems Interface. A parallel bus architecture and a protocol for transmitting large data blocks up to a distance of 15 to 25 meters.
SD — Software Division (of Hitachi).
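The Round robin mode entry above describes handing out each path or server address in turn and then looping back to the first. A minimal, illustrative sketch of that rotation (the path names are invented):

    from itertools import cycle

    # Minimal round-robin sketch: requests are assigned to the next path in the
    # list, wrapping back to the first path after the last one has been used.
    paths = ["path-A", "path-B", "path-C", "path-D"]   # hypothetical path names
    rotation = cycle(paths)

    # Ten requests are spread evenly: A, B, C, D, A, B, C, D, A, B.
    assignments = [next(rotation) for _ in range(10)]
    print(assignments)

Multipathing software applies the same idea to I/O requests across SAN paths, and round robin DNS applies it to the order in which server IP addresses are returned.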



SDH — Synchronous Digital Hierarchy.
SDM — System Data Mover.
SDO — Standards Development Organizations (a general category).
SDSF — Spool Display and Search Facility.
Sector — A sub-division of a track of a magnetic disk that stores a fixed amount of data.
SEL — System Event Log.
Selectable Segment Size — Can be set per partition.
Selectable Stripe Size — Increases performance by customizing the disk access size.
SENC — The SATA (Serial ATA) version of the ENC. ENCs and SENCs are complete microprocessor systems on their own and they occasionally require a firmware upgrade.
SeqRD — Sequential read.
Serial Transmission — The transmission of data bits in sequential order over a single line.
Server — A central computer that processes end-user applications or requests, also called a host.
Server Virtualization — The masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. The implementation of multiple isolated virtual environments in one physical server.
Service-level Agreement — SLA. A contract between a network service provider and a customer that specifies, usually in measurable terms, what services the network service provider will furnish. Many Internet service providers (ISP) provide their customers with a SLA. More recently, IT departments in major enterprises have adopted the idea of writing a service level agreement so that services for their customers (users in other departments within the enterprise) can be measured, justified, and perhaps compared with those of outsourcing network providers.
Some metrics that SLAs may specify include:
• The percentage of the time services will be available
• The number of users that can be served simultaneously
• Specific performance benchmarks to which actual performance will be periodically compared
• The schedule for notification in advance of network changes that may affect users
• Help desk response time for various classes of problems
• Dial-in access availability
• Usage statistics that will be provided
Service-Level Objective — SLO. Individual performance metrics built into an SLA. Each SLO corresponds to a single performance characteristic relevant to the delivery of an overall service. Some examples of SLOs include: system availability, help desk incident resolution time, and application response time.
SES — SCSI Enclosure Services.
SFF — Small Form Factor.
SFI — Storage Facility Image.
SFM — Sysplex Failure Management.
SFP — Small Form-Factor Pluggable module Host connector. A specification for a new generation of optical modular transceivers. The devices are designed for use with small form factor (SFF) connectors, offer high speed and physical compactness and are hot-swappable.
SHSN — Shared memory Hierarchical Star Network.
SID — Security Identifier. A user or group identifier within the Microsoft Windows security model.
SIGP — Signal Processor.
SIM — (1) Service Information Message. A message reporting an error that contains fix guidance information. (2) Storage Interface Module. (3) Subscriber Identity Module.
SIM RC — Service (or system) Information Message Reference Code.
SIMM — Single In-line Memory Module.
SLA — Service Level Agreement.
SLO — Service Level Objective.
SLRP — Storage Logical Partition.
SM — Shared Memory or Shared Memory Module. Stores the shared information about the subsystem and the cache control information (director names). This type of information is used for the exclusive control of the subsystem. Like CACHE, shared memory is controlled as 2 areas of memory and fully non-volatile (sustained for approximately 7 days).



SM PATH — Shared Memory Access Path. The Access Path from the processors of CHA, DKA PCB to Shared Memory.
SMB/CIFS — Server Message Block Protocol/Common Internet File System.
SMC — Shared Memory Control.
SME — Small and Medium Enterprise.
SMF — System Management Facility.
SMI-S — Storage Management Initiative Specification.
SMP — Symmetric Multiprocessing.
SMP/E — System Modification Program/Extended. An IBM-licensed program used to install software and software changes on z/OS systems.
SMS — System Managed Storage.
SMTP — Simple Mail Transfer Protocol.
SMU — System Management Unit.
Snapshot Image — A logical duplicated volume (V-VOL) of the primary volume. It is an internal volume intended for restoration.
SNIA — Storage Networking Industry Association. An association of producers and consumers of storage networking products, whose goal is to further storage networking technology and applications. Active in cloud computing.
SNMP — Simple Network Management Protocol. A TCP/IP protocol that was designed for management of networks over TCP/IP, using agents and stations.
SOA — Service Oriented Architecture.
SOAP — Simple Object Access Protocol. A way for a program running in one kind of operating system (such as Windows 2000) to communicate with a program in the same or another kind of an operating system (such as Linux) by using the World Wide Web's Hypertext Transfer Protocol (HTTP) and its Extensible Markup Language (XML) as the mechanisms for information exchange.
Socket — In UNIX and some other operating systems, a socket is a software object that connects an application to a network protocol. In UNIX, for example, a program can send and receive TCP/IP messages by opening a socket and reading and writing data to and from the socket. This simplifies program development because the programmer need only worry about manipulating the socket and can rely on the operating system to actually transport messages across the network correctly. Note that a socket in this sense is completely soft; it is a software object, not a physical component. (A brief sketch follows the SSUE entry below.)
SOM — System Option Mode.
SONET — Synchronous Optical Network.
SOSS — Service Oriented Storage Solutions.
SPaaS — SharePoint as a Service. A cloud computing business model.
SPAN — A section between 2 intermediate supports. See Storage pool.
Spare — An object reserved for the purpose of substitution for a like object in case of that object's failure.
SPC — SCSI Protocol Controller.
SpecSFS — Standard Performance Evaluation Corporation Shared File system.
SPECsfs97 — Standard Performance Evaluation Corporation (SPEC) System File Server (sfs) developed in 1997 (97).
SPI model — Software, Platform and Infrastructure as a service. A common term to describe the cloud computing “as a service” business model.
SRA — Storage Replicator Adapter.
SRDF/A — (EMC) Symmetrix Remote Data Facility Asynchronous.
SRDF/S — (EMC) Symmetrix Remote Data Facility Synchronous.
SRM — Site Recovery Manager.
SSB — Sense Byte.
SSC — SiliconServer Control.
SSCH — Start Subchannel.
SSD — Solid-State Drive or Solid-State Disk.
SSH — Secure Shell.
SSID — Storage Subsystem ID or Subsystem Identifier.
SSL — Secure Sockets Layer.
SSPC — System Storage Productivity Center.
SSUE — Split Suspended Error.
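The Socket entry above (and the UDP entry in the —U— section) describe a program exchanging messages by reading and writing a socket object while the operating system handles the actual transport. A minimal, illustrative sketch using UDP datagrams on the local host (the port number and message text are invented):

    import socket

    # Minimal UDP socket sketch: one socket sends a datagram, another receives it.
    # The operating system transports the message; the program only reads and
    # writes through the socket objects.
    receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    receiver.bind(("127.0.0.1", 50007))        # hypothetical local port

    sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sender.sendto(b"hello, storage network", ("127.0.0.1", 50007))

    data, addr = receiver.recvfrom(1024)       # blocks until the datagram arrives
    print(data, "from", addr)

    sender.close()
    receiver.close()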



SSUS — Split Suspend.
SSVP — Sub Service Processor; interfaces the SVP to the DKC.
SSW — SAS Switch.
Sticky Bit — Extended UNIX mode bit that prevents objects from being deleted from a directory by anyone other than the object's owner, the directory's owner or the root user.
Storage pooling — The ability to consolidate and manage storage resources across storage system enclosures where the consolidation of many appears as a single view.
STP — Server Time Protocol.
STR — Storage and Retrieval Systems.
Striping — A RAID technique for writing a file to multiple disks on a block-by-block basis, with or without parity.
Subsystem — Hardware or software that performs a specific function within a larger system.
SVC — Supervisor Call Interruption.
SVC Interrupts — Supervisor calls.
S-VOL — (1) (ShadowImage) Source Volume for In-System Replication, or (2) (Universal Replicator) Secondary Volume.
SVP — Service Processor. A laptop computer mounted on the control frame (DKC) and used for monitoring, maintenance and administration of the subsystem.
Switch — A fabric device providing full bandwidth per port and high-speed routing of data via link-level addressing.
SWPX — Switching power supply.
SXP — SAS Expander.
Symmetric Virtualization — See In-Band Virtualization.
Synchronous — Operations that have a fixed time relationship to each other. Most commonly used to denote I/O operations that occur in time sequence, such that a successor operation does not occur until its predecessor is complete.
-back to top-
—T—
Target — The system component that receives a SCSI I/O command; an open device that operates at the request of the initiator.
TB — Terabyte. 1TB = 1,024GB.
TCDO — Total Cost of Data Ownership.
TCO — Total Cost of Ownership.
TCG — Trusted Computing Group.
TCP/IP — Transmission Control Protocol over Internet Protocol.
TDCONV — Trace Dump Converter. A software program that is used to convert traces taken on the system into readable text. This information is loaded into a special spreadsheet that allows for further investigation of the data and more in-depth failure analysis.
TDMF — Transparent Data Migration Facility.
Telco or TELCO — Telecommunications Company.
TEP — Tivoli Enterprise Portal.
Terabyte (TB) — A measurement of capacity, data or data storage. 1TB = 1,024GB.
TFS — Temporary File System.
TGTLIBs — Target Libraries.
THF — Front Thermostat.
Thin Provisioning — Thin provisioning allows storage space to be easily allocated to servers on a just-enough and just-in-time basis.
THR — Rear Thermostat.
Throughput — The amount of data transferred from 1 place to another or processed in a specified amount of time. Data transfer rates for disk drives and networks are measured in terms of throughput. Typically, throughputs are measured in kb/sec, Mb/sec and Gb/sec.
TID — Target ID.
Tiered Storage — A storage strategy that matches data classification to storage metrics. Tiered storage is the assignment of different categories of data to different types of storage media in order to reduce total storage cost. Categories may be based on levels of protection needed, performance requirements, frequency of use, and other considerations. Since assigning data to particular media may be an ongoing and complex activity, some vendors provide software for automatically managing the process based on a company-defined policy.
Tiered Storage Promotion — Moving data between tiers of storage as their availability requirements change.
TLS — Tape Library System.



TLS — Transport Layer Security.
TMP — Temporary or Test Management Program.
TOD (or ToD) — Time Of Day.
TOE — TCP Offload Engine.
Topology — The shape of a network or how it is laid out. Topologies are either physical or logical.
TPC-R — Tivoli Productivity Center for Replication.
TPF — Transaction Processing Facility.
TPOF — Tolerable Points of Failure.
Track — Circular segment of a hard disk or other storage media.
Transfer Rate — See Data Transfer Rate.
Trap — A program interrupt, usually an interrupt caused by some exceptional situation in the user program. In most cases, the Operating System performs some action and then returns control to the program.
TSC — Tested Storage Configuration.
TSO — Time Sharing Option.
TSO/E — Time Sharing Option/Extended.
T-VOL — (ShadowImage) Target Volume for In-System Replication.
-back to top-
—U—
UA — Unified Agent.
UBX — Large Box (Large Form Factor).
UCB — Unit Control Block.
UDP — User Datagram Protocol is 1 of the core protocols of the Internet protocol suite. Using UDP, programs on networked computers can send short messages known as datagrams to one another.
UFA — UNIX File Attributes.
UID — User Identifier within the UNIX security model.
UPS — Uninterruptible Power Supply — A power supply that includes a battery to maintain power in the event of a power outage.
UR — Universal Replicator.
UUID — Universally Unique Identifier.
-back to top-
—V—
vContinuum — Using the vContinuum wizard, users can push agents to primary and secondary servers, set up protection and perform failovers and failbacks.
VCS — Veritas Cluster System.
VDEV — Virtual Device.
VDI — Virtual Desktop Infrastructure.
VHD — Virtual Hard Disk.
VHDL — VHSIC (Very-High-Speed Integrated Circuit) Hardware Description Language.
VHSIC — Very-High-Speed Integrated Circuit.
VI — Virtual Interface. A research prototype that is undergoing active development, and the details of the implementation may change considerably. It is an application interface that gives user-level processes direct but protected access to network interface cards. This allows applications to bypass IP processing overheads (for example, copying data, computing checksums) and system call overheads while still preventing 1 process from accidentally or maliciously tampering with or reading data being used by another.
Virtualization — Referring to storage virtualization, virtualization is the amalgamation of multiple network storage devices into what appears to be a single storage unit. Storage virtualization is often used in a SAN, and makes tasks such as archiving, backup and recovery easier and faster. Storage virtualization is usually implemented via software applications. There are many additional types of virtualization.
Virtual Private Cloud (VPC) — Private cloud existing within a shared or public cloud (for example, the Intercloud). Also known as a virtual private network cloud.
VLL — Virtual Logical Volume Image/Logical Unit Number.
VLUN — Virtual LUN. Customized volume. Size chosen by user.
VLVI — Virtual Logical Volume Image. Marketing name for CVS (custom volume size).
VM — Virtual Machine.
VMDK — Virtual Machine Disk file format.
VNA — Vendor Neutral Archive.
VOJP — (Cache) Volatile Jumper.
VOLID — Volume ID.



VOLSER — Volume Serial Numbers.
Volume — A fixed amount of storage on a disk or tape. The term volume is often used as a synonym for the storage medium itself, but it is possible for a single disk to contain more than 1 volume or for a volume to span more than 1 disk.
VPC — Virtual Private Cloud.
VSAM — Virtual Storage Access Method.
VSD — Virtual Storage Director.
VTL — Virtual Tape Library.
VSP — Virtual Storage Platform.
VSS — (Microsoft) Volume Shadow Copy Service.
VTOC — Volume Table of Contents.
VTOCIX — Volume Table of Contents Index.
VVDS — Virtual Volume Data Set.
V-VOL — Virtual Volume.
-back to top-
—W—
WAN — Wide Area Network. A computing internetwork that covers a broad area or region. Contrast with PAN, LAN and MAN.
WDIR — Directory Name Object.
WDIR — Working Directory.
WDS — Working Data Set.
WebDAV — Web-Based Distributed Authoring and Versioning (HTTP extensions).
WFILE — File Object or Working File.
WFS — Working File Set.
WINS — Windows Internet Naming Service.
WL — Wide Link.
WLM — Work Load Manager.
WORM — Write Once, Read Many.
WSDL — Web Services Description Language.
WSRM — Write Seldom, Read Many.
WTREE — Directory Tree Object or Working Tree.
WWN — World Wide Name. A unique identifier for an open-system host. It consists of a 64-bit physical address (the IEEE 48-bit format with a 12-bit extension and a 4-bit prefix).
WWNN — World Wide Node Name. A globally unique 64-bit identifier assigned to each Fibre Channel node process.
WWPN — World Wide Port Name. A globally unique 64-bit identifier assigned to each Fibre Channel port. A Fibre Channel port’s WWPN is permitted to use any of several naming authorities. Fibre Channel specifies a Network Address Authority (NAA) to distinguish between the various name registration authorities that may be used to identify the WWPN.
-back to top-
—X—
XAUI — "X"=10, AUI = Attachment Unit Interface.
XCF — Cross System Communications Facility.
XDS — Cross Enterprise Document Sharing.
XDSi — Cross Enterprise Document Sharing for Imaging.
XFI — Standard interface for connecting a 10Gb Ethernet MAC device to XFP interface.
XFP — "X"=10Gb Small Form Factor Pluggable.
XML — eXtensible Markup Language.
XRC — Extended Remote Copy.
-back to top-
—Y—
YB — Yottabyte.
Yottabyte — The highest-end measurement of data at the present time. 1YB = 1,024ZB, or 1 quadrillion GB. A recent estimate (2011) is that all the computer hard drives in the world do not contain 1YB of data.
-back to top-
—Z—
z/OS — z Operating System (IBM® S/390® or z/OS® Environments).
z/OS NFS — (System) z/OS Network File System.
z/OSMF — (System) z/OS Management Facility.
zAAP — (System) z Application Assist Processor (for Java and XML workloads).
with a 12-bit extension and a 4-bit prefix).



ZCF — Zero Copy Failover. Also known as Data Access Path (DAP).
Zettabyte (ZB) — A high-end measurement of data. 1ZB = 1,024EB.
zFS — (System) zSeries File System.
zHPF — (System) z High Performance FICON.
zIIP — (System) z Integrated Information Processor (specialty processor for database).
Zone — A collection of Fibre Channel Ports that are permitted to communicate with each other via the fabric.
Zoning — A method of subdividing a storage area network into disjoint zones, or subsets of nodes on the network. Storage area network nodes outside a zone are invisible to nodes within the zone. Moreover, with switched SANs, traffic within each zone may be physically isolated from traffic outside the zone.
-back to top-
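The Zone and Zoning entries above define visibility in terms of zone membership: two ports can communicate only if some zone in the active zone set contains both. A minimal, illustrative sketch of that membership check (the zone names and WWPNs are invented):

    # Minimal zoning sketch: two ports can communicate only if at least one zone
    # in the active zone set contains both of their WWPNs.
    zones = {
        "zone_prod_db": {"50:06:0e:80:10:00:00:01", "10:00:00:05:1e:ab:cd:01"},
        "zone_backup":  {"50:06:0e:80:10:00:00:02", "10:00:00:05:1e:ab:cd:02"},
    }

    def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
        """True if at least one zone contains both ports."""
        return any(wwpn_a in members and wwpn_b in members for members in zones.values())

    print(can_communicate("50:06:0e:80:10:00:00:01", "10:00:00:05:1e:ab:cd:01"))  # True
    print(can_communicate("50:06:0e:80:10:00:00:01", "10:00:00:05:1e:ab:cd:02"))  # False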



Evaluating This Course
Please use the online evaluation system to help improve our
courses.

Learning Center Sign-in location:

https://learningcenter.hds.com/Saba/Web/Main
