
DS8000 Replication

Performance Considerations
Lisa Gundy
DFSMS Copy Services Architect
IBM Systems Division

Agenda

• Replication Review
• Multiple Incremental FlashCopy
• Multi-Target PPRC Performance
• PPRC Synchronization
• Global Copy Collision Enhancement
• zHyperWrite
• Workload Based z/OS Global Mirror (XRC) Write Pacing
• Easy Tier Heat Map Transfer

05-Mar-15
DS8000 Replication Review
• FlashCopy – point-in-time copy, within the same storage system
• Metro Mirror – synchronous mirroring, primary to metro-distance site (Site A → Site B)
• Global Mirror / z/OS Global Mirror – asynchronous mirroring, primary to out-of-region site (Site A → Site B)
• Metro Global Mirror / Metro z/OS Global Mirror – three-site and four-site synchronous & asynchronous mirroring (Site A → metro Site B → out-of-region Site C/D)
© Copyright IBM Corporation 2014


Multiple Incremental FlashCopy

Multiple Incremental FlashCopy

• Previously, only a single incremental FlashCopy was allowed for any individual volume
• This provides the capability for up to 12 incremental FlashCopies for any volume
• A significant number of clients take two (or more) FlashCopies per day for database backup, both of which can now be incremental
• The Global Mirror journal FlashCopy also counts as an incremental FlashCopy, so the testing copy can now also be incremental
• The functionality is also available as an RPQ from R7.1.5

(Diagram: source volume S with incremental FlashCopy targets T1, T2, … T12)
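The mechanism behind incremental FlashCopy is change recording: each relationship remembers which tracks have changed since its last flash, so a refresh copies only those tracks. A minimal sketch of that idea, with each of up to 12 targets keeping its own change bitmap (the `Volume`, `IncrementalFlash`, and `Source` names are invented for illustration, not DS8000 interfaces):

```python
# Toy model of incremental FlashCopy change recording (illustration only).

class Volume:
    def __init__(self, tracks):
        self.tracks = list(tracks)

class IncrementalFlash:
    """One incremental FlashCopy relationship: source -> private target."""
    def __init__(self, source):
        self.source = source
        self.target = Volume([None] * len(source.tracks))
        # Every track is "changed" before the first flash (full copy).
        self.changed = set(range(len(source.tracks)))

    def record_write(self, idx):
        # The source records which tracks changed since this target's last flash.
        self.changed.add(idx)

    def flash(self):
        # Copy only tracks changed since the previous flash, then clear the bitmap.
        copied = len(self.changed)
        for i in self.changed:
            self.target.tracks[i] = self.source.tracks[i]
        self.changed.clear()
        return copied

class Source(Volume):
    """A source volume may now have up to 12 incremental relationships, not 1."""
    def __init__(self, tracks, n_targets):
        super().__init__(tracks)
        self.rels = [IncrementalFlash(self) for _ in range(n_targets)]

    def write(self, idx, data):
        self.tracks[idx] = data
        for rel in self.rels:  # each relationship tracks changes independently
            rel.record_write(idx)
```

The key point the sketch shows: two daily backup flashes against the same source can both be incremental, because each target's change set is maintained separately.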
Multi-Target Metro Mirror Performance


Multi-Target Metro Mirror
• Allows a single volume to be the source for more than one PPRC relationship
• Provides incremental resynchronization functionality between target devices
• Use cases include:
  – Synchronous replication within a datacentre combined with another metro-distance synchronous relationship
  – Adding another synchronous replication for migration without interrupting existing replication
  – Multi-target Metro Global Mirror as well as cascading, for greater flexibility and simplified operational scenarios
  – Combining with cascading relationships for 4-site topologies and migration scenarios

(Diagram: H1 is the source of two Metro Mirror relationships, one to H2 and one to H3)


Multi-Target Metro Mirror Performance

4KB Writes

(Chart: response time in ms, 0 to 1.8, versus write rate in IOps, 20,000 to 180,000, for three configurations: No Mirroring, Single Metro Mirror, and Multi-Target Metro Mirror)


Multi-Target Metro Mirror Performance

27KB Writes

(Chart: response time in ms, 0 to 1.8, versus write rate in IOps, 20,000 to 140,000, for three configurations: No Mirroring, Single Metro Mirror, and Multi-Target Metro Mirror)


PPRC Synchronization



PPRC Synchronization
• The asynchronous copying of data from a PPRC primary to a secondary
• Copies data that is out of sync between the primary and the secondary:
  – Initial copy when a pair is established or resumed
  – Global Copy / Global Mirror asynchronously transferring updated data

(Diagram: H1 → H2)


Pre-7.4 Design
• Volume based
  – When a volume spans ranks, only the part on one rank is copied at a time
• Did not scale with volume size
  – Resources allocated per volume, regardless of size
• No priority mechanism
• Unable to handle multiple relationships on a volume for Multi-Target PPRC


Objectives
• Support Multi-Target PPRC
• Finish the copy as quickly as possible
  – Fully utilize the PPRC links
• Minimize the impact on other work
  – Do not overdrive the ranks on the primary
  – Minimize impact on host I/O
• Do the most important work first
  – Priority scheme

(Diagram: H1 replicating to both H2 and H3)
New Design
• Balances workload across:
– PPRC Ports
– Extent Pools
– Device Adapters
– Ranks

• Assigns priorities
– For example, forming GM consistency groups >
Resynchronization

• Unit of work is an extent
  – Scales with volume size
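The two design points above, priorities and extent-sized units of work spread across resources, can be sketched together: queue extents per rank in priority order and draw from the ranks round-robin, so no single rank is overdriven and high-priority work (for example, forming GM consistency groups) is copied before resynchronization. The `SyncScheduler` class and the priority constants are invented for illustration, not the DS8000 microcode design:

```python
# Illustrative sketch of extent-based PPRC synchronization scheduling.
import heapq
from itertools import cycle

GM_CG_FORMATION, RESYNC = 0, 1   # lower number = higher priority

class SyncScheduler:
    def __init__(self, ranks):
        self.queues = {r: [] for r in ranks}  # one priority queue per rank
        self.ranks = cycle(ranks)             # round-robin across ranks

    def add_extent(self, rank, extent_id, priority):
        heapq.heappush(self.queues[rank], (priority, extent_id))

    def next_extent(self):
        # Visit ranks round-robin; within a rank, highest priority first.
        for _ in range(len(self.queues)):
            q = self.queues[next(self.ranks)]
            if q:
                return heapq.heappop(q)[1]
        return None  # nothing left to copy
```

Because the unit of work is an extent rather than a whole volume, a large volume simply contributes more entries to the queues, which is what makes the design scale with volume size.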


Global Copy Collision Avoidance



Global Copy Collision
• Collision definition:
  – A track is locked by Global Copy in order to transfer it to the secondary
  – A host write occurs for the same track
• Result:
  – The host write must wait for the Global Copy transfer to complete
  – Impact to the application
• A track in the process of being sent is locked to prevent writes from occurring
• Not usually a problem, except for situations with:
  – Unstable networks
  – High-latency / long-distance networks
  – Workloads with a high rate of data re-reference (e.g. logging)
  – Very latency-sensitive applications

(Diagram: H1 → Global Copy → H2)


Global Copy Collision Avoidance
• Global Copy releases the track lock after transfer of the data to the local host adapter
• Allows a host write to access the track immediately, without waiting for the Global Copy transfer to complete
• Global Copy detects when the track has been modified by another host write
• Available with R7.4, and as an RPQ on R7.2 and R6.3

(Diagram: H1 → Global Copy → H2)
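A toy timeline model makes the benefit concrete for the worst case, a host write that collides just as a transfer begins. The numbers and function name are invented; only the shape of the comparison reflects the enhancement:

```python
# Toy timeline model contrasting old and new lock behaviour for a
# colliding host write (all values are illustrative, not measured).
HA_HANDOFF_MS = 0.1      # time to hand the track to the local host adapter
LINK_TRANSFER_MS = 5.0   # time to push it across a long-distance link

def host_write_wait(early_release: bool) -> float:
    """Worst-case wait for a host write that collides at the start of a transfer."""
    if early_release:
        # New behaviour: the lock is dropped once the local host adapter
        # has the data; the link transfer continues in the background.
        return HA_HANDOFF_MS
    # Old behaviour: the write waits for the whole secondary transfer.
    return HA_HANDOFF_MS + LINK_TRANSFER_MS
```

On a long-distance link the wait shrinks from the full round-trip transfer time to just the local handoff, which is why the enhancement matters most for high-latency networks and latency-sensitive applications.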


IBM zHyperWrite



zHyperWrite
• Improved DB2 log write performance with DS8870 Metro Mirror
  – Reduces latency overhead compared to normal storage-based synchronous mirroring
• Reduced write latency and improved log throughput

(Diagram: DB2 with data and log UCBs; log volume pair P → S under Metro Mirror)


DB2 Log Write with Metro Mirror
1. DB2 log write to the Metro Mirror primary
2. Write mirrored to the secondary
3. Write acknowledged to the primary
4. Write acknowledged to DB2

(Diagram: DB2 → P, then P → S over Metro Mirror, with acknowledgements flowing back)



Write with zHyperWrite
1. DB2 log write to the Metro Mirror primary and secondary in parallel
2. Writes acknowledged to DB2
3. Metro Mirror does not mirror the data

(Diagram: DB2 writes directly to both P and S; the Metro Mirror pair P → S does not carry the log write)
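The latency effect of the two flows above can be sketched with simple arithmetic: the serial flow pays for the local write plus the mirror hop, while zHyperWrite's parallel flow pays only for the slower of the two legs. The millisecond values are invented for illustration:

```python
# Latency sketch of why zHyperWrite helps DB2 log writes: the mirror hop
# is removed from the critical path (all values are illustrative).
WRITE_P_MS = 0.2   # host write to the local primary
WRITE_S_MS = 0.7   # host write to the remote secondary (longer channel path)

def metro_mirror_log_write() -> float:
    # Serial: the write lands on the primary, which then mirrors it and
    # waits for the secondary's acknowledgement before acking the host.
    return WRITE_P_MS + WRITE_S_MS

def zhyperwrite_log_write() -> float:
    # Parallel: DB2 issues both writes itself; latency is the slower leg.
    return max(WRITE_P_MS, WRITE_S_MS)
```

Since the slower leg is the remote write in either case, the parallel flow saves roughly the local write's share of the serial total, and the secondary copy still exists when DB2 gets its acknowledgement.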



IBM zHyperWrite
• Supports HyperSwap with TPC-R or GDPS
• Enabled through:
  – SYS1.PARMLIB(IECIOSxx)
  – The SETIOS command
  – DS8870 R7.4 plus IOS and DFSMS PTFs

(Diagram: DB2 with data and log UCBs; Metro Mirror pair P → S)


z/OS Global Mirror (XRC)
Workload Based Write Pacing


z/GM (XRC) Workload Based Write Pacing
• Need for Write Pacing
• Current Write Pacing
• Limitations of Current Write Pacing
• Requirements
• Use of Workload Manager (WLM)
• Example
• Implementation Requirements



z/GM System

(Diagram: application writes to the primary P are buffered in a sidefile; modified data is read by the SDM, which creates consistency groups; the data is then journaled and written to the secondary S)


Need for Write Pacing
• Write data is buffered in the DS8000 sidefiles
  – Maximum sidefile size is finite
• Burst write rates can exceed the capacity to offload data
  – Sidefiles grow
  – RPO increases
  – Possible suspension if this persists
• Write pacing monitors sidefile size and injects delays to flatten out peaks in the write rate
Previous XRC Write Pacing
• Volume based
  – Sidefile count monitored for each volume
• Thresholds and maximum delay are specified for each volume
  – Different volumes may have different values
• If the sidefile count for a volume grows:
  – Delays are injected for writes to that volume
  – The delay starts very small
  – The delay is increased if the sidefile count increases, up to the maximum allowed
  – The delay is reduced if the sidefile count decreases


Write Pacing Step Function

(Chart: injected delay versus sidefile count, with a pacing threshold of 1000; the delay climbs a staircase through 0.04 ms, 0.2 ms, 1 ms, 5 ms, 25 ms and 100 ms as the sidefile count approaches the threshold, and a volume's maximum pacing level (Max Level 1, 2, 5 and 10 are shown) caps how high up the staircase the delay may go)
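The step function in the chart can be written down as a short routine. This is a sketch calibrated to the published data points (the chart's delay steps, and the later example where threshold 1000, sidefile count 500 and pacing level 8 yield 0.2 ms); the exact microcode behaviour may differ:

```python
# Sketch of the XRC write-pacing step function (calibrated to the chart;
# the actual implementation may use different step boundaries).
DELAY_MS = {2: 0.04, 4: 0.2, 6: 1.0, 8: 5.0, 10: 25.0, 12: 100.0}

def pacing_delay_ms(sidefile_count, threshold, max_level):
    # The effective level scales with how full the sidefile is,
    # capped by the configured maximum pacing level.
    effective = int(max_level * min(sidefile_count / threshold, 1.0))
    # Snap down to the nearest even step; below level 2 no delay is injected.
    effective -= effective % 2
    return DELAY_MS.get(effective, 0.0)
```

Note how the cap works: a volume limited to Max Level 2 never sees more than 0.04 ms of delay regardless of sidefile growth, while an uncapped volume can climb all the way to 100 ms at the threshold.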
Limitations of Previous Write Pacing
• Different applications have different response time requirements
• These requirements were previously met by:
  – Assigning different pacing thresholds and limits to different volumes
  – Placing data on volumes with the appropriate pacing levels
• Requires significant planning for data placement
• If requirements change, data must be moved to a different volume


Write Pacing Requirements
• Meet application response time and performance objectives
• Maintain disaster recovery capability within the desired Recovery Point Objective (RPO)
• Minimize the amount of manual planning and intervention
• Automatically adapt to changing application needs


Workload Manager
• z/OS Workload Manager (WLM) provides the ability to set performance goals
• Applications with similar goals are grouped into Service Classes
• WLM assigns resources to maximize goal achievement
• One part of this resource management is that each I/O has an importance value
  – Six importance values:
    • 1 = Highest
    • 5 = Lowest
    • 6 = Discretionary (or default, when not part of a service class)


Workload Based z/GM Write Pacing
• Takes into account the I/O's importance value from WLM when determining the amount of pacing
• Each importance level is mapped to a maximum pacing level
• Pacing levels are set so that higher-importance I/O is paced less than lower-importance I/O


Example with Workload Based Pacing
• Given:
  – Threshold level = 1000
  – Sidefile count = 500
  – Volume pacing level = 8

  Importance Level   Workload Pacing Level   Workload Pacing Delay   Volume Pacing Delay
  1 (high)           4                       0.04 ms                 0.2 ms
  3 (med)            8                       0.2 ms                  0.2 ms
  5 (low)            12                      1.0 ms                  0.2 ms

• Delay varies based on the I/O's importance


Implementation Requirements
• Configure WLM
• Define Workload Classes
• Enable I/O Priority Management
• Determine maximum delay for each workload class
• Specify these values in the XRC PARMLIB



Easy Tier Heat Map Transfer



Easy Tier Heat Map – With PPRC

• A heat map is maintained at both the primary and the secondary
• But… I/O at the secondary is different from that at the primary

(Diagram: H1 → replication → H2)



Easy Tier Heat Map Transfer
• Transfers Easy Tier heat map information for a volume
• Out-of-band software implementation
  – HMT software runs on a server, communicating with each HMC
• TPC-R and GDPS support, as well as a standalone utility

(Diagram: HMT software on a server talks to both HMCs; H1 → replication → H2)
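Conceptually, the transfer replaces the secondary's learned heat for a volume with the primary's, so tiering decisions at the secondary reflect production access patterns rather than the sequential replication writes it actually sees. A toy model with invented data structures (nested dicts of per-extent heat scores):

```python
# Toy model of out-of-band heat map transfer (structures are invented).
def transfer_heat_map(primary, secondary, volume):
    # Apply the primary's learned heat for this volume at the secondary;
    # the secondary's own observations (skewed by replication I/O) are
    # deliberately overwritten.
    secondary.setdefault(volume, {}).update(primary.get(volume, {}))
```

After a transfer, an extent that is hot at the primary is treated as hot at the secondary even if replication traffic never made it look that way, which is what positions the secondary to perform well after a swap or failover.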


Heat Map Transfer Measurement

(Chart: comparing a secondary with Easy Tier heat map transfer, a secondary with Easy Tier but no transfer, a secondary without Easy Tier, and the primary with Easy Tier)


Easy Tier Heat Map Transfer
• GDPS/PPRC support available in an SPE with GDPS 3.10; GDPS/GM support available with GDPS 3.11
• GDPS/XRC support is planned to be released next
• 3-site and 4-site support planned by combining the different functions

(Diagram: the HMT daemon runs under USS on the GDPS K-sys alongside GDPS and NetView, communicating with each HMC; H1 → replication → H2)


Session Summary
• Replication Overview
• Multiple Incremental FlashCopy
• Multi-Target PPRC Performance
• PPRC Synchronization
• Global Copy Collision Enhancement
• zHyperWrite
• Workload Based z/OS Global Mirror Write Pacing
• Easy Tier Heat Map Transfer
