SnapMirror
SnapMirror enables you to replicate data from a volume or qtree to another volume or qtree.
It requires a separate license.
Modes of SnapMirror
Asynchronous mode :- replicates Snapshot copies at specified intervals
Synchronous mode :- replicates data as soon as it is written to the source
Semi-synchronous mode :- the destination volume lags behind the source volume by about 10 seconds
SnapMirror can be used with both FlexVol volumes and traditional volumes.
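The mode is selected per relationship in /etc/snapmirror.conf on the destination. A minimal sketch, assuming hypothetical filer names filerA/filerB and volumes vol1-vol3:

```
# /etc/snapmirror.conf on the destination filer
# fields: source  destination  arguments  schedule (minute hour day-of-month day-of-week)
filerA:vol1  filerB:vol1  -  0 * * *       # asynchronous: update hourly, on the hour
filerA:vol2  filerB:vol2  -  sync          # synchronous
filerA:vol3  filerB:vol3  -  semi-sync     # semi-synchronous (~10 s lag)
```

For sync and semi-sync the schedule field is replaced by the mode keyword.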
How it works
Files and options involved in SnapMirror (/etc/snapmirror.conf, snapmirror.access, /etc/snapmirror.allow)
SnapMirror takes a Snapshot copy of the source volume and copies it to the destination in read-only mode.
It then applies changes to the destination incrementally. As a result you get an online, read-only
volume or qtree.
Applications of SnapMirror
Disaster recovery = you can make the destination writable so clients can access the data.
Disaster recovery testing = use FlexClone technology on the destination without affecting the replication process.
Data restoration = you can reverse the destination and source volume or qtree and resync the data.
Application testing = you can copy the data using SnapMirror so that the source data is not disturbed.
Load balancing = you can copy the data using SnapMirror and distribute the load.
Offloading tape backups = you can take backups from the destination volume, offloading the source.
Remote access = provides data access for remote users.
Synchronous SnapMirror
In this mode the data is written to the destination as soon as it is written to the source filer.
The data can be synced either between separate systems or between the systems of an active/active cluster,
via IP or FCP.
A separate license, in addition to the SnapMirror license, is required for synchronous transfer of data.
Synchronous mode can be used only with volumes, not with qtrees, and both volumes must be of the same
type (the two volume types being traditional volumes and FlexVol volumes).
sync
semi-sync (used as a balance between the sync and asynchronous modes of transfer)
You cannot set both sync and semi-sync relationships between the filers of an active/active configuration, and the
relationship has to be defined in the /etc/snapmirror.conf file.
Sync = the source acknowledges the client only after the write operation is done at both the source and
the destination; the recovery point objective is close to 0 seconds.
Semi-sync = the source acknowledges the client write operation immediately after it is written at the
source. The data is written to the destination with a time lag of about 10 seconds. This implies a recovery point
objective of 10 seconds, which means only about 10 seconds of data would be lost at recovery time. Its
advantage is better performance compared to the sync option.
If neither of the two options is specified, the relationship defaults to asynchronous.
Syntax :- systemA:volA systemB:volB - semi-sync
How sync works :- Before data is written to disk, it is written to NVRAM, and at the
consistency point the data is transferred to disk. In the synchronous case, when data is written to the source's
NVRAM, the source sends the data to the destination's NVRAM; at the consistency point the source asks the
destination to write the data to disk and writes its own data to disk as well. The source then waits for an
acknowledgement from the destination before it starts the next write operation.
How it handles network issues :- If there is a network issue, SnapMirror drops into
asynchronous mode. SnapMirror follows these steps in the case of network issues:
In asynchronous mode the source tries to contact the destination at intervals of one minute.
After the connection is re-established, the source replicates to the destination asynchronously.
SnapMirror then gradually transitions the replication mode from asynchronous back to synchronous.
The transition is possible only if the latest common Snapshot copy is available; otherwise you need to
break the relationship (snapmirror break) and then resync it (snapmirror resync).
This can also be avoided with the option "replication.volume.use_auto_resync on". Its default value
is off.
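A sketch of the manual recovery path when no common Snapshot copy survives an outage (filer and volume names are placeholders):

```
dstfiler> snapmirror status vol1                              # check the relationship state
dstfiler> snapmirror break dstfiler:vol1                      # make the destination writable
dstfiler> snapmirror resync -S srcfiler:vol1 dstfiler:vol1    # re-sync without a new baseline
srcfiler> options replication.volume.use_auto_resync on       # or let SnapMirror resync automatically
```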
Consider the implications before growing an aggregate that contains a synchronous SnapMirror destination volume.
Tuning SnapMirror :- options snapmirror.enable on (this persists even after a reboot)
Pre-requisites
The SnapMirror license must be enabled on the filers where SnapMirror will be used.
SnapMirror volume replication requires the destination volume to be created beforehand and placed in
restricted mode.
The destination filer for SnapMirror volume replication must run the same Data ONTAP version as the
source, or a later one. If used for DR, both filers should run the same version.
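The prerequisites above might be satisfied with a sequence like the following (license code, filer, aggregate, and volume names are placeholders):

```
filerB> license add XXXXXXX                       # SnapMirror license on each filer
filerB> vol create vol2 aggr1 100g                # destination volume, sized >= source
filerB> vol restrict vol2                         # destination must be restricted before initialize
filerA> options snapmirror.access host=filerB     # allow filerB to pull from filerA
```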
Restrictions
In QSM the destination volume needs at least 5% more space than the source qtree consumes.
In VSM the destination volume cannot be the root volume, but the source volume can be the root volume.
In QSM the destination qtree can be in the root volume, but cannot be the /etc qtree itself.
Do not delete the Snapshot copies that SnapMirror creates on the source before they are copied to the
destination. The newest common Snapshot copy is called the NCS; incremental copies depend
on the NCS.
Do not use "snapmirror release" or "snapmirror break" on the destination unless you no longer require the
incremental changes from the source.
Recommendations for SnapMirror
SnapMirror and Snapshot schedules should not fire at the same time; otherwise the Snapshot
operation would fail, with the log file noting that a Snapshot copy is already in progress.
If the source and destination are FlexVol volumes, there is no constraint on the RAID configuration.
In the case of deduplication, the deduplication metadata is kept at the aggregate level, outside of the
volume, so the metadata is not replicated along with the volume. On the destination, deduplication therefore needs
to be restarted using "sis start -s" (without the -s option, only newly written data is scanned for deduplication).
The directories need to be in Unicode format. This can be done with the option "vol options
vol_name convert_ucode on".
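A sketch of applying these recommendations (filer and volume names are placeholders; sis requires a writable volume, so on a VSM destination this would follow a break):

```
dstfiler> snapmirror break dstfiler:vol1        # destination must be writable to run sis
dstfiler> sis on /vol/vol1                      # enable deduplication on the volume
dstfiler> sis start -s /vol/vol1                # -s: scan existing data, rebuilding the metadata
srcfiler> vol options vol1 convert_ucode on     # ensure directories are in Unicode format
```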
Deployment of SnapMirror
It consists of source volumes or qtrees, and destination volumes or qtrees.
Source volumes or qtrees :- these are the data objects that need to be synchronized or replicated. Normally
users access these data objects and have read-write access to the data.
Destination volumes or qtrees :- these are the data objects to which the data is synchronized. Destination
volumes or qtrees are read-only. (The destination volume itself needs to be writable if we use QSM.)
Both traditional and FlexVol volumes can be used.
VSM
1. Supports only the same kind of volume on both sides: either traditional or FlexVol.
QSM
2. Supports different kinds of volumes, i.e. traditional and FlexVol.
SnapMirror deployment variations
Cascading destinations :- a writable source is replicated to multiple read-only destination
volumes. This is supported only for VSM, not for QSM.
Migration of a traditional volume to a FlexVol volume :- only QSM can be used in this case, because VSM
does not support replication between different volume types.
SnapMirror commands
snapmirror on (used to enable SnapMirror; between a snapmirror on and a snapmirror off command you
need to wait 60 seconds for a proper transfer of control)
To enable it you can also use: options snapmirror.enable on
vol create and vol restrict (to create the destination volume for SnapMirror)
snapmirror initialize (used to start the initial transfer of a Snapshot copy (the baseline copy) from the
source to the destination)
snapmirror status (view the status of SnapMirror)
snapmirror update (manually update the SnapMirror destination)
snapmirror quiesce (stabilize the contents of the destination volume before a Snapshot copy is taken, allowing
the active SnapMirror transfer to finish and temporarily preventing new transfers)
snapmirror resume (resume normal transfers after a quiesce)
snapmirror abort (stop an active SnapMirror transfer)
snapmirror break (used to break the relationship between the source and the destination and
convert the destination into a writable volume or qtree)
snapmirror resync (re-establish the relationship between the source and destination volumes; this command
is generally issued after a "snapmirror break" so that we avoid the initial transfer, which is the baseline
copy)
snapmirror release (the Snapshot copies held for the relationship are deleted)
snapmirror off (turn off SnapMirror functionality); alternatively, options snapmirror.enable off
snapmirror store and snapmirror use (copy the volume to a local tape, and continue the same on
subsequent tapes)
snapmirror retrieve and snapmirror use (initialize or restore the volume from a local tape)
snapmirror destinations (used in the case of cascading SnapMirror destinations)
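A sketch of how these commands fit together in a typical VSM lifecycle (filer and volume names are placeholders):

```
# initial setup (run once)
filerB> vol restrict vol2
filerB> snapmirror initialize -S filerA:vol1 filerB:vol2

# DR failover: make the destination writable
filerB> snapmirror quiesce vol2
filerB> snapmirror break vol2

# failback: re-establish the relationship without a new baseline
filerB> snapmirror resync -S filerA:vol1 filerB:vol2
```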
SnapMirror options
SnapMirror files :-
/etc/snapmirror.conf (specifies the relationship between the source and the destination,
along with the following:
1. The SnapMirror update schedule for the relationship
2. The type of relationship (single path, multipath, or failover)
3. Other options)
/etc/snapmirror.allow (specifies the SnapMirror destinations that are allowed to copy from this system)
/etc/log/snapmirror (the latest SnapMirror log; older ones are snapmirror.0, snapmirror.1, ...)
/etc/hosts (used for resolving host names)
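A sketch of the two SnapMirror files, assuming hypothetical filers filerA (source) and filerB (destination):

```
# /etc/snapmirror.conf on the destination (filerB)
# fields: source  destination  arguments  schedule (min hr day-of-month day-of-week)
filerA:vol1          filerB:vol1          kbs=2000  15 * * *   # hourly at :15, throttled to 2000 KB/s
filerA:/vol/vol1/q1  filerB:/vol/vol2/q1  -         0 23 * *   # qtree relationship, daily at 23:00

# /etc/snapmirror.allow on the source (filerA)
filerB
```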
STEPS:-
Data replication :-
Destination data can be replicated to another location, which means the destination acts as the source for
other filers. This is called cascading.
There are two-hop cascading scenarios.
There are also three-hop cascading scenarios; here a synchronous relationship is supported only for the
first hop, and only in the 7.1.2 and 7.2.1 releases.
In cascading you use the "snapmirror destinations" command to check the number of destinations for a
particular volume.
"snapmirror destinations" also displays entries related to the "vol clone" and "dump" commands.
#snapmirror destinations -s [volume_name] (-s lists the Snapshot copies that are available for
the volume)
Steps followed
1. snapmirror store and snapmirror use (on the source system)
snapmirror retrieve -g vol_name (this returns a value such as 7000*10, where 10
is the number of disks and 7000 is the number of blocks). Run the command
on both the source and the destination volume; it is better if the values match,
and if they do not, note them down on both systems.
snapmirror store -g [disk_geometry] vol_name dest_tapedrives
Eg: snapmirror store -g 7000*10,1400*10 vol3 nrst0a,rst1a
snapmirror use dest_tapedrives(tape drives) tape_drive (the drive holding
the new tape)
2. Transport the tapes physically.
3. snapmirror retrieve and snapmirror use
snapmirror retrieve dest_volume tape_drive
snapmirror use volume tape_list (the tape device from which you are restoring)
4. snapmirror update, or schedule the updates in /etc/snapmirror.conf
snapmirror update [-k n] -S source_system:source_volume
destination_system:destination_volume (-k n limits the transfer speed, in kilobytes per second)
5. snapmirror release (for releasing the relationship between the tape and the source)
snapmirror status
snapmirror release vol1 snapmirror_tape
Initializing the SnapMirror destination
"snapmirror initialize" is used for the initial transfer of data between the source and the destination. This
is called initializing the destination.
The destination qtree's quota should be within the range of the data available on the source.
In QSM, the language settings of the volumes should be the same. Avoid renaming the volume or qtree after
the relationship is created.
Initialization to and from tapes :- use "snapmirror store" and "snapmirror retrieve", but only for volumes,
not for qtrees.
The destination volume needs to be in restricted mode, using "vol restrict vol_name".
snapmirror initialize [options] [dest_system:] {dest_volume|qtree_path}
Options
-k n = maximum transfer speed, in kilobytes per second.
-S source_system:source_volume | source_qtree_path (the source should match the
one in /etc/snapmirror.conf)
-c snapshot_name creates the Snapshot copy on the destination system
-s snapshot_name specifies the Snapshot name on the source system, so the transfer
happens with the help of the already created Snapshot copy and no new Snapshot copies
are created while transferring.
systemB> snapmirror initialize -S systemA:vol0 systemB:vol2
systemB> snapmirror initialize -S systemA:/vol/vol1/qtree4 systemB:/vol/vol2/Destn_qtree
You can reserve space on the aggregate for the destination volume. This can be viewed with "vol
options vol_name"; the guarantee setting is one of {none | file | volume}, and the default is guarantee=volume.
Non-qtree data is data that does not reside within any qtree. In the following we sync
the non-qtree data of a volume into a qtree:
snapmirror initialize -S source_system:/vol/vol3/- dest_system:/vol/vol2/non_qtree_data (here the (-)
represents the non-qtree data; another important point is that the destination qtree must not exist
beforehand)
The reverse (replicating a qtree into the non-qtree area) is not possible.
After running snapmirror quiesce and snapmirror break on the destination, you can resume again
by issuing snapmirror resync.
snapmirror initialize copies the entire volume to the destination by creating a Snapshot copy of the entire
volume. The destination volume must already be created and in restricted mode.
After the transfer is completed it brings the volume online in read-only mode. During the
initial transfer of data the volume is marked invalid in the output of the "vol status" command, and it
becomes valid and online after the initial transfer.
In QSM, we do not create the destination qtree; the initialize command itself creates it. The volume where
you want it should be online. The Snapshot copy on the destination shows as busy via the "snap list" command
until the initial transfer completes.
After initialization, in VSM both files and Snapshot copies are available on the source and the
destination. In QSM only the files are available on both the source and the destination.
To check initialization: in VSM use "snapmirror status", in QSM use "qtree".
When we issue the initialize command, SnapMirror sets the volume option fs_size_fixed on. This makes
the file system size equal on the two systems.
You can create a centralized snapmirror.allow file for the site and copy it to all the systems. Each
system would ignore the entries pertaining to the other systems.
A maximum of 1024 entries can be placed in /etc/snapmirror.conf. In the case of active/active
configurations the limit has to be shared between the two systems.
You can edit the snapmirror.conf file only when there is no active transfer for the source and
destination relationship. Changes take up to 2 minutes to take effect if SnapMirror is enabled; otherwise they
take effect once SnapMirror is enabled.
The destination qtree must not already exist, and the full path of the qtree needs to be provided.
You can specify up to 254 qtrees in a single volume.
In the case of non-qtree data the syntax is as shown earlier (with the trailing "-" in the source path).
You might have more than one physical path between the SnapMirror source and destination. There are two
ways to configure them:
Static routes, for different routes with different IPs.
Using different subnets for different routes.
There are different ways of connecting the relationship: Ethernet, Fibre Channel, or both.
There are two modes available:
Multiplexing mode
Failover mode
Steps in implementing multipath
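A multipath relationship is defined in /etc/snapmirror.conf through a named connection. A sketch, assuming hypothetical hosts srcA/dstB with two interfaces each (e0a, e0b):

```
# /etc/snapmirror.conf on the destination: a named connection with two paths
repl-link = multi(srcA-e0a,dstB-e0a)(srcA-e0b,dstB-e0b)      # multiplexing: both paths used
# repl-link = failover(srcA-e0a,dstB-e0a)(srcA-e0b,dstB-e0b) # or failover: second path on standby
repl-link:vol1  dstB:vol1  -  0 * * *
```

The connection name then replaces the source system name in the relationship entry.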
Compression for a SnapMirror relationship :-
Volume SnapMirror transfers all the Snapshot copies as part of the SnapMirror replication.
In qtree SnapMirror, the source and destination need only one Snapshot copy in common.
Steps
In QSM, we need to create a Snapshot copy and copy it onto the destination and also onto the new source.
This is unlike VSM, where every Snapshot copy needs to be copied to the destination.
Data ONTAP on the destination should be the same version as the source, or a later one.
Edit /etc/snapmirror.conf on the destination.
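The QSM steps above might be sketched as follows (filer, volume, qtree, and Snapshot names are placeholders; -c and -s name the destination and source Snapshot copies, as in the initialize options earlier):

```
# create a common Snapshot copy on the current source and transfer from it
srcfiler> snap create vol1 common_base
dstfiler> snapmirror update -c common_base -s common_base -S srcfiler:/vol/vol1/q1 dstfiler:/vol/vol2/q1

# edit /etc/snapmirror.conf on the destination to point at the new source,
# then resync using the common Snapshot copy instead of a new baseline
dstfiler> snapmirror resync -S newsrc:/vol/vol1/q1 dstfiler:/vol/vol2/q1
```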