
IBM Spectrum Protect


Node Replication

22.10.2015 TSM Symposium Nachlese © 2015 IBM Corporation

Disclaimer

IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM's sole discretion.

Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.

The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code, or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remain at our sole discretion.


Agenda

• Overview
• Preparing for Replication
• Performing a Replication
• Best Practices
• 7.1.3 Enhancements
• Future Enhancements


Agenda

Overview
• Preparing for Replication
• Performing a Replication
• Best Practices
• 7.1.3 Enhancements
• Future Enhancements


What is node replication?

1. Initial replication – all objects are copied to the target server
   • Backup, archive, and space management objects
2. Objects deleted on the source are deleted from the target server
3. Modified objects are updated on the target server
4. Newly stored objects are copied during the next replication


Node replication for disaster recovery


Node replication for branch office


Node replication advantages


Characteristics


TSM 7.1 Automatic client redirection


TSM 7.1.1 Recovery of damaged files from target


Recovery of damaged files


TSM 7.1.1 Dissimilar policies


Dissimilar policies


Replication and Deduplication


Reconcile processing
● Prior to TSM 7.1.1, replication always performed a reconcile
– Compares the complete list of files between the source and target server
– Used to synchronize the source and target servers
● Reconcile in TSM 7.1.1 examines the entire list of files in a file space (much like pre-7.1.1)
– Used during the initial replication between 7.1.1 servers
● Once reconcile completes, change-tracking processing takes over during the next replication
– Restartable – remembers where it left off if cancelled or after a catastrophic server event
– Automatically runs following a database restore on the source or target server
– Can be run manually using REPLICATE NODE FORCERECONCILE=NO|YES (see the sketch below)
• Synchronizes source/target files – used like an audit
● Change tracking in TSM 7.1.1 eliminates the need to query the target server for its list of files
– New and changed files are assigned a change identifier when they are stored and when their metadata is updated
– Replication only processes files with a change identifier – incremental replication
– Replication picks up where the last replication left off
– Improves performance for file spaces with many files

● Showed a 2-3x improvement: ~200 GB/hr → ~500 GB/hr
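
A minimal sketch of the two modes (the node name NODE1 is a hypothetical placeholder); the default run uses change tracking, while FORCERECONCILE=YES can be issued occasionally as an audit-style pass:

  /* Incremental replication driven by change tracking (7.1.1 and later) */
  replicate node NODE1

  /* Force a full reconcile of source and target file lists, audit-style */
  replicate node NODE1 forcereconcile=yes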


File deletion processing

• Processing of files deleted on the source server (prior to 7.1.1)
  • Files that have been deleted on the source server are deleted on the target server during replication
  • Locking issues can cause delays, especially for deduplicated files
• Processing of files deleted on the source server (7.1.1)
  • During replication, the source server sends a list of files that have been deleted on the source (does not include expired files if dissimilar policies are enabled)
  • During replication, the target server updates its database to mark the deleted files
  • Expiration processing deletes the marked files on the target, outside the replication window
• Processing of files deleted on the target server (7.1.1)
  • During replication, the target server sends a list of files that have been explicitly deleted on the target
  • During replication, the source server resends the explicitly deleted files to the target


Agenda

• Overview
Preparing for Replication
• Performing a Replication
• Best Practices
• 7.1.3 Enhancements
• Future Enhancements


Hardware requirements

• CPU/RAM minimum recommendations
  • With deduplication: 8 CPU cores, 64 GB RAM
    • Best practice: 8 CPU cores, 128 GB RAM
  • Without deduplication: 4 CPU cores, 32 GB RAM
    • Best practice: 4 CPU cores, 64 GB RAM
• Best practices assume complete server replication
  • Requirements are lower if replicating less


Log and DB requirements

• Active Log
  • At least 64 GB of active log
  • Reconcile changes in TSM 7.1.1 greatly reduced the log requirements
• Database
  • A 300 GB DB on a source server will require an additional 300 GB of DB space on the target server
    • In addition to the current size of the target DB
  • Plan DB size and growth appropriately (a worked example follows)
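
A worked sizing example with hypothetical numbers, consistent with the guidance above: if the target server's own database is currently 200 GB and it will receive replicas from a source server whose database is 300 GB, plan for roughly 200 GB + 300 GB = 500 GB of database space on the target, plus headroom for growth on both servers.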


Tasks

• Create/verify the server definitions
• Set the replication target for the server
• Determine which nodes, file spaces, and data types are to be replicated
• Assign appropriate rules, or use the defaults
• Enable replication for the nodes
• Determine whether dissimilar policies will be used
• Replicate (a command-level sketch follows this list)
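
A minimal setup sketch on the administrative command line; the server name TGTSRV, its addresses, the password, and the node name NODE1 are hypothetical placeholders:

  /* On the source server: define the target server and make it the replication target */
  define server TGTSRV hladdress=tgt.example.com lladdress=1500 serverpassword=secret
  set replserver TGTSRV

  /* Enable replication for a node (uses the default replication rules) */
  update node NODE1 replstate=enabled

  /* Replicate the node */
  replicate node NODE1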


Populate target server

Two basic methods to populate the target server (a sketch of Method 2 follows this list):

• Method 1 – Replicate from scratch
  • Best if source and target are in close proximity
  • All eligible data is sent
  • Could take a long time
• Method 2 – Synchronize and replicate
  • Best for large distances or if bandwidth is limited
  • Use media-based Export/Import to populate the target
  • Replication with the SYNC modes links the source and target objects
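
A sketch of Method 2, assuming hypothetical names throughout (node NODE1, device class EXPDEV, volume EXP001); the data travels on media, and the first replication with the SYNC modes links the existing copies instead of resending them:

  /* Source server: export the node's data to removable media */
  export node NODE1 filedata=all devclass=EXPDEV

  /* Target server: import the media created above */
  import node NODE1 filedata=all devclass=EXPDEV volumenames=EXP001

  /* Source server: set SYNCSEND mode and enable replication */
  update node NODE1 replmode=syncsend replstate=enabled

  /* Target server: set SYNCRECEIVE mode */
  update node NODE1 replmode=syncreceive replstate=enabled

  /* Source server: replicate; SYNC links the existing objects rather than resending them */
  replicate node NODE1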


Replication terms

Mode
• The replication mode indicates the role for a node (source/target)
• Normal modes
• SEND – the node is the source of replication
• RECEIVE – the node is the target of replication
• Cannot be set directly
• SYNC modes
• SYNCSEND – the node is a synced source
• SYNCRECEIVE – the node is a synced target
State
• The replication state indicates whether replication is enabled
• Used to temporarily stop replicating (see the sketch below)
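
A small sketch for checking and temporarily pausing replication for a node (NODE1 is a hypothetical name); QUERY NODE FORMAT=DETAILED shows the node's replication state and mode:

  /* Check the current replication state and mode */
  query node NODE1 format=detailed

  /* Temporarily stop replicating the node, then re-enable it later */
  update node NODE1 replstate=disabled
  update node NODE1 replstate=enabled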


Policies

• Replication does not replicate the policy
  • Use EXPORT/IMPORT or Enterprise Configuration to copy policy definitions
• If using like policies, ensure the policies on each server are the same; this is important in case replication is disabled
  • If a policy construct is missing on the target server, the default construct is used
• If using dissimilar policies, you must (see the sketch after this list):
  • Validate the policies with the command VALIDATE REPLPOLICY
  • Enable the function with the command SET DISSIMILARPOLICIES
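
A minimal sketch, assuming the replication target server is named TGTSRV; the policies are compared first, then dissimilar-policy processing is switched on for that target:

  /* Preview the differences between source and target policies for the target server */
  validate replpolicy TGTSRV

  /* Let replicated data be managed by the target server's own policies */
  set dissimilarpolicies TGTSRV on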


Restrictions on target server

• A replicated node is read-only on the target
  • Cannot store new data from a client or application
  • Cannot rename the node
• Data can be deleted from the target with:
  • DELETE VOLUME ... DISCARDDATA=YES
  • AUDIT VOLUME ... FIX=YES
  • DELETE FILESPACE
• Data deleted from the target will be resent during the next replication


Removing replication

• REMOVE REPLNODE <nodename>
  • Deletes all replication information from the database
  • Can be run on the source, the target, or both
  • Sets REPLSTATE and REPLMODE to NONE
  • Does not delete any data


Planning

• Plan for the daily change rate
  • Are your RAM, CPU, and disks sufficient?
• How much data needs to be replicated initially to reach a steady state?
  • Do you have the time and bandwidth to replicate it from scratch?
  • Would it be better to use Export/Import?


Agenda

• Overview
• Preparing for Replication
Performing a Replication
• Best Practices
• 7.1.3 Enhancements
• Future Enhancements


Performing a replication

• The REPLICATE NODE command accepts:
  • Multiple nodes and/or node groups
  • Specific file spaces belonging to a node
  • The data type to replicate
  • The priorities to include in the replication
• REPLICATE NODE starts a single process
  • The process ends when ALL nodes and file spaces are complete
• Can be scheduled as part of daily maintenance (see the sketch after this list)
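
A sketch of a replication run and an administrative schedule for daily maintenance; the node names, node group PRODGROUP, schedule name, and times are hypothetical:

  /* Replicate two nodes, backup data only, with 10 sessions; high-priority files go first */
  replicate node NODE1,NODE2 datatype=backup priority=all maxsessions=10

  /* Nightly replication of a node group as part of daily maintenance */
  define schedule REPLICATE_NIGHTLY type=administrative cmd="replicate node PRODGROUP maxsessions=10" active=yes starttime=22:00 period=1 perunits=days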


Replication processing

• Each node and file space specified is examined (a monitoring sketch follows)
  • Source and target exchange information
• For each node being processed:
  • The target node is registered, if necessary
  • Target file spaces are created, if necessary
  • Replication state and mode are verified
    • Verify the node and/or file space is enabled for replication
    • Verify the source server is in SEND mode and the target is in RECEIVE mode
  • The target node is synchronized to the source
    • Attributes, including passwords, are replicated
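
A small monitoring sketch (node name hypothetical); QUERY PROCESS shows the running replication process, and QUERY REPLICATION summarizes replication results for a node:

  /* Watch the running replication process */
  query process

  /* Review replication results for a node after the run */
  query replication NODE1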


Agenda

• Overview
• Preparing for Replication
• Performing a Replication
Best Practices
• 7.1.3 Enhancements
• Future Enhancements


Best practices (maintenance plan)

• If not using container pools (7.1.3), order the daily maintenance as follows (a schedule sketch follows this list):
  • Allow sufficient time for IDENTIFY DUPLICATES to process all data before replicating
    • Allows replication to benefit from deduplication
  • Replicate the nodes
    • If migrating data to tape, wait for replication to finish before migrating
  • Expire the inventory
  • If migrating from disk to tape with autocopy, migrate the storage pools
  • Back up the storage pools
  • Reclaim the storage pools
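
A hedged sketch of that ordering as administrative schedules; the schedule names, start times, node group PRODGROUP, and pool names DEDUPPOOL/COPYPOOL are hypothetical, and real deployments often use maintenance scripts with wait conditions instead of fixed start times:

  define schedule DAILY_IDENTIFY   type=administrative cmd="identify duplicates DEDUPPOOL duration=240" active=yes starttime=18:00
  define schedule DAILY_REPLICATE  type=administrative cmd="replicate node PRODGROUP" active=yes starttime=22:00
  define schedule DAILY_EXPIRE     type=administrative cmd="expire inventory" active=yes starttime=04:00
  define schedule DAILY_MIGRATE    type=administrative cmd="migrate stgpool DEDUPPOOL lowmig=0" active=yes starttime=06:00
  define schedule DAILY_BACKUPSTG  type=administrative cmd="backup stgpool DEDUPPOOL COPYPOOL" active=yes starttime=08:00
  define schedule DAILY_RECLAIM    type=administrative cmd="reclaim stgpool DEDUPPOOL threshold=60" active=yes starttime=10:00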


Best practices

• Be sure to test replication throughput
  • Adjust MAXSESSIONS
  • Network, CPU, and RAM will impact throughput
• Make sure sufficient mount points are available for replication
  • For a FILE device class, set the mount limit to at least the product of NUMOPENVOLSALLOWED and MAXSESSIONS (see the example after this list)
• Don't run all nodes in a single replication
  • Replicate nodes with a large number of objects by themselves
    • With a smaller value for MAXSESSIONS (1-3)
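
A worked example with hypothetical values: with the NUMOPENVOLSALLOWED server option at 20 and MAXSESSIONS at 10, the FILE device class needs a mount limit of at least 20 × 10 = 200. FILEDEV below is a hypothetical device class name:

  update devclass FILEDEV mountlimit=200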


Best practices

• More sessions doesn't necessarily mean better performance
  • It usually does, but lock contention on the target server can slow things down
• Replication batches have 2 phases:
  • Phase 1: Sending the data for the new objects to the target server
  • Phase 2: Updating the database with the new objects
• File space locks on the target server occur in phase 2
  • When the time spent in phase 1 is large relative to the time spent in phase 2, use more sessions
  • When the time spent in phase 1 is relatively small, use fewer sessions
• Increased distance adds time to phase 1 without affecting phase 2
  • Generally speaking, this will benefit from having more sessions


Best practices


Agenda

• Overview
• Preparing for Replication
• Performing a Replication
• Best Practices
7.1.3 Enhancements
• Future Enhancements


7.1.3 Container pool overview

Container storage pools
– Storage is handled automatically, so no direct management of the storage is required
  • Philosophy: write once and don't fuss with it
  • NO reclamation, migration, copies, backups, shredding, LAN-free, ...
  • NO device classes or volumes like legacy random- or sequential-access storage pools
– Dynamic creation and deletion

Next generation deduplication (NextGen dedupe) uses container storage pools
– Allows deduplication of all data from both the client and the server
  • With the exception of files that have been encrypted on the client side
– Deduplication occurs in-line
  • As opposed to legacy TSM server dedupe, which runs as a separate process following ingest

Goals of container pools and NextGen dedupe:
– Easier management
– Faster performance
– More scalability
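
A minimal sketch of creating a directory-container pool and directing backups to it; the pool name DIRPOOL, the directory path, and the STANDARD policy names are hypothetical:

  /* Create a directory-container storage pool and give it a directory */
  define stgpool DIRPOOL stgtype=directory
  define stgpooldirectory DIRPOOL /tsm/dirpool1

  /* Point the backup copy group at the pool and activate the policy set */
  update copygroup STANDARD STANDARD STANDARD type=backup destination=DIRPOOL
  activate policyset STANDARD STANDARD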


Container pools attributes


Directory pool compared to File pool


Protecting data in container pools


Node replication for directory based container pools


Storage pool protection (new function)


Protect STGPOOL for directory based container pools
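
A minimal sketch, assuming a source directory-container pool named DIRPOOL (hypothetical) and a replication target server already configured with SET REPLSERVER; PROTECT STGPOOL copies the pool's deduplicated extents to the corresponding pool on the target:

  /* Protect the directory-container pool on the replication target server */
  protect stgpool DIRPOOL maxsessions=5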


Repair storage pool
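
A minimal sketch with a hypothetical pool name; after damage is detected in the local directory-container pool, REPAIR STGPOOL retrieves the damaged extents from the copy created earlier by PROTECT STGPOOL on the target server:

  /* Repair damaged extents in the local pool from the protected copy on the target */
  repair stgpool DIRPOOL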


Comparison of replication and protect stgpool


Agenda

• Overview
• Preparing for Replication
• Performing a Replication
• Best Practices
• 7.1.3 Enhancements
Future Enhancements


2016: Unified replication


2016: Metadata-only node replication


Future: Node replication with automatic failover/failback


Future: Always-on node replication


Future: Node replication to multiple target servers


Future: Node replication with transparent client access


Questions?


Thank You
