EMC SnapView
for Navisphere
ADMINISTRATOR’S GUIDE
P/N 069001180
REV A08
EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com
Copyright © 2002 - 2006 EMC Corporation. All rights reserved.
Published May, 2006
EMC believes the information in this publication is accurate as of its publication date. The
information is subject to change without notice.
Use, copying, and distribution of any EMC software described in this publication requires an
applicable software license.
For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on
EMC.com.
All other trademarks used herein are the property of their respective owners.
About this manual

This manual describes the tasks for setting up, configuring, and
managing a storage system using EMC® SnapView™ software. Each
major section includes introductory information and a general
procedure for completing a task. This manual is not intended for use
during the actual setup, configuration, and management of storage
systems, so the steps in the procedures purposely do not include
screen captures of the dialog boxes.
The introductory information and detailed steps for each procedure
appear in the SnapView online help so you have the complete
information available when you actually set up, configure, and
manage storage systems, should you require help.
Audience

This guide is part of the SnapView product documentation set, and is
intended for use by customers and service providers who use EMC
Navisphere® Manager software to set up and manage SnapView
software.
Readers of this guide should be familiar with Navisphere Manager.

Organization

This manual is organized as follows:
Chapter 1 Introduces the EMC SnapView software application,
including clones and snapshots. This chapter also lists
the configuration guidelines and the right-click menu
options available for SnapView in Navisphere
Manager.
Chapter 2 Describes the steps required for setting up clones and
snapshots.
Chapter 3 Describes the options available for using clones and
snapshots.
Chapter 4 Describes how to display and/or modify the
properties dialog boxes for each SnapView
component.
Chapter 5 Contains examples, from setting up clones and
snapshots to using them. Each example also contains
an illustrated overview that shows the main steps
outlined in the examples.
Appendix A Describes the SnapView terminology differences
between Navisphere Express and Navisphere
Manager.
Appendix B Describes what bad blocks are, how SnapView
handles them, and what you can do to correct them.
Appendix C Describes how to use SnapView with a Tru64 server.
Glossary Defines SnapView and other terms used in this guide.
Conventions used in this guide

EMC uses the following conventions for notes and cautions.
Note: A note presents information that is important, but not hazard-related.
! CAUTION
A caution contains information essential to avoid data loss or
damage to the system or equipment. The caution may apply to
hardware or software.
Typographical Conventions
This manual uses the following format conventions:
Finding current information

The most up-to-date information about the SnapView software is
posted on the EMC Powerlink™ website. We recommend that you
download the latest information before you start the SnapView
software.
To access EMC Powerlink, use the following link:
http://Powerlink.EMC.com
After you log in, select Support > Document Library and find the
following:
◆ Release notes for SnapView and admsnap
◆ The latest version of this guide that is applicable to your software
revision. For information on FC4700-series storage systems, refer
to revision A07 of this guide.
◆ EMC Installation Roadmap for CX-Series, AX-Series and FC-Series
Storage Systems, which provides a checklist of the tasks that you
must complete to install your storage system in a storage area
network (SAN) or direct attach configuration.
Where to get help

EMC support, product, and licensing information can be obtained as
follows.
Product information — For documentation, release notes, software
updates, or for information about EMC products, licensing, and
service, go to the EMC Powerlink website (registration required) at:
http://Powerlink.EMC.com
Your comments
Your suggestions will help us continue to improve the accuracy,
organization, and overall quality of the user publications. Please send
your opinion of this guide to:
[email protected]
This chapter introduces the EMC® SnapView™ software and its user
interface, as well as the two command line interfaces for it. The
command line interfaces include the server-based admsnap utility
and the EMC Navisphere® CLI interface.
Introduction to SnapView
SnapView is a storage-system-based software application that allows
you to create a copy of a LUN by using either clones or snapshots.
A clone is an actual copy of a LUN and takes time to create,
depending on the size of the source LUN. A snapshot is a virtual
point-in-time copy of a LUN and takes only seconds to create.
SnapView has the following important benefits:
◆ It allows full access to a point-in-time copy of your production
data with modest impact on performance and without modifying
the actual production data.
◆ For decision support or revision testing, it provides a coherent,
readable and writable copy of real production data.
◆ For backup, it practically eliminates the time that production data
spends offline or in hot backup mode, and it offloads the backup
overhead from the production server to another server.
◆ It provides a consistent replica across a set of LUNs. You can do
this by performing a consistent fracture, which fractures more
than one clone at the same time, or by starting a session in
consistent mode.
◆ It provides instantaneous data recovery if the source LUN
becomes corrupt. You can perform a recovery operation on a
clone by initiating a reverse synchronization and on a snapshot
session by initiating a rollback operation.
Depending on your application needs, you can create clones,
snapshots, or snapshots of clones. For a detailed overview of clones,
refer to “Clones overview” on page 1-3. For a detailed overview
of snapshots, refer to “Snapshots overview” on page 1-5. For a
comparison of using clones, snapshots, and snapshots of clones, refer
to Table 1-1, “A comparison of clones and snapshots,” on page 1-7.
Clones overview
A clone is a complete copy of a source LUN. You specify a source
LUN when you create a clone group. The copy of the source LUN
begins when you add a clone LUN to the clone group. The software
assigns each clone a clone ID. This ID remains with the clone until
you remove the clone from its group.
While the clone is part of the clone group and unfractured, any server
write requests made to the source LUN are simultaneously copied to
the clone. Once the clone contains the desired data, you can fracture
the clone. Fracturing the clone breaks it from its source LUN, after
which you can make it available to a secondary server.
Clone private LUNs record information that identifies data chunks
on the source LUN and clone LUN that have been modified after you
fractured the clone. A modified (changed) data chunk is a chunk of
data that a production or secondary server changes by writing to the
source LUN or clone. A log in the clone private LUN records this
information, but no actual data is written to the clone private LUN.
This log reduces the time it takes to synchronize or reverse
synchronize a clone and its source LUN since the software copies
only modified chunks.
Writes made to the source LUN from the production server are
copied to the clone only when you manually perform a
synchronization, which unfractures the clone and updates the
contents on the clone with its source.
Figure 1-1 shows an example of how a fractured clone works. Note,
as the production server writes to the source LUN, and the secondary
server writes to the clone, the clone private LUN tracks areas on the
source and clone that have changed since the clone was fractured.
Figure 1-1 How a fractured clone works: the production server
continues I/O to the source LUN while the secondary server writes to
the fractured clone; the clone private LUN records the areas on the
source and clone that change after the fracture. (EMC2438)
Snapshots overview
A snapshot is a virtual LUN that allows a secondary server to view a
point-in-time copy of a source LUN. You determine the point in time
when you start a SnapView session. The session keeps track of the
source LUN’s data at a particular point in time.
During a session, the production server can still write to the source
LUN and modify data. When this happens, the software stores a copy
of the original point-in-time data on a reserved LUN in the SP’s
reserved LUN pool. This operation is referred to as
copy-on-first-write because it occurs only when a data chunk is first
modified on the source LUN.
As the session continues and additional I/O modifies other data
chunks on the source LUN, the amount of data stored in the reserved
LUN pool grows. If needed, you can increase the size of the reserved
LUN pool by adding more LUNs to the LUN pool.
Figure: How a snapshot works. The production server continues I/O
to the source LUN while the secondary server views the snapshot;
original data chunks are preserved on a reserved LUN in the SP's
reserved LUN pool. (EMC2764)
Table 1-1 A comparison of clones and snapshots

Benefits
• Clones: Provides immediacy in replacing the contents of the
source LUN with the contents of the clone LUN and redirecting
servers from the source to the clone, should the source become
corrupted. Makes backup operation nondisruptive. Provides
enhanced protection against critical data loss because the clone is
an actual LUN.
• Snapshots: Provides immediacy in replacing the contents of the
source LUN with the contents of the session, should the source
LUN become corrupted. Makes backup operation nondisruptive.
Provides a quick and instant copy because the snapshot is a
virtual LUN.
• Snapshots of clones: Provides immediacy in replacing the contents
of the source LUN with the contents of the session, should the
source LUN become corrupted. Makes backup operation
nondisruptive. Provides an extra level of protection against
critical data loss if both the source LUN and clone LUN become
corrupted.

Creation time
• Clones: Takes time to create, depending on the size of the source
LUN.
• Snapshots: Takes only seconds to create.
• Snapshots of clones: The clone takes time to create; the snapshot
of it takes only seconds.

Disk space
• Clones: Uses the same amount of disk space as the source LUN.
• Snapshots: Uses reserved LUN pool space, which is usually 10%
to 20% of the source LUN size per session, but will vary
depending on how much data has changed on the source LUN.
• Snapshots of clones: Uses reserved LUN pool space (for the
snapshot) and full disk space (for the clone), which usually totals
100% of the source LUN size for the clone plus 10% to 20% of the
source LUN size per session, but will vary depending on how
much data has changed on the source LUN.

Recovery of a corrupted source LUN
• Clones: Instantaneous after initializing a reverse synchronization.
• Snapshots: Instantaneous after initializing a rollback operation.
• Snapshots of clones: Combination of a rollback from the session
and a reverse synchronization of the clone.

Performance impact
• Clones: There is no performance impact when a clone LUN is in a
fractured state. For the initial synchronization of the clone LUN,
there is a performance impact for the duration of the
synchronization. Subsequent synchronizations or reverse
synchronizations have comparable impact, but their duration will
be shorter since they are incremental. Impact is also determined
by the synchronization rate, which is set when the clone LUN is
added to the clone group and can be changed during a
synchronization or reverse synchronization.
• Snapshots: A performance decrease due to the
copy-on-first-write.
• Snapshots of clones: Combination of both the clone LUN and
snapshot LUN impacts.
SnapView components
SnapView consists of the following software components:
◆ A set of drivers that provides the SnapView functionality, and
resides on the storage system with the LUNs you want to copy.
Note: All CX-series storage systems ship from the factory with SnapView
software installed, but not enabled. To use the SnapView software
functionality, the SnapView enabler must be installed on the storage
system.
Navisphere Manager
Navisphere Manager is a centralized storage-system management
tool for configuring and managing CLARiiON® storage systems. It
provides the following basic functionality:
◆ Discovery of CLARiiON storage systems
◆ Storage configuration and allocation
◆ Status and configuration information display
◆ Event management
Navisphere Manager is a web-based user interface that lets you
securely manage CLARiiON storage systems locally on the same
LAN or remotely over the Internet, using a common browser.
Navisphere Manager resides on a CX-series storage system or a
Microsoft Windows Server 2003 or Windows 2000 server that is
running the Storage Management Server software, and is
downloaded to the browser when the Storage Management Server
software is accessed.
Navisphere CLI

The Navisphere CLI provides another management interface (along
with Navisphere Manager and admsnap) to clones and snapshots.
You can use Navisphere CLI commands and admsnap commands
together to manage clones and snapshots. You need both admsnap
and Navisphere CLI because admsnap interacts with the server
operating system and CLI interacts with the storage system.
For additional information on how to use Navisphere CLI for
SnapView and admsnap, refer to the EMC SnapView Command Line
Interface Reference.
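As an illustration of how the two interfaces divide the work, a
minimal sketch of a snapshot backup follows. The SP host name
(ss1_spa), session name, LUN number, and drive letter are
hypothetical, and exact option spellings vary by revision, so verify
each command against the EMC SnapView Command Line Interface
Reference before use.

    # Production server: admsnap talks to the server operating system
    # and flushes buffered data for the source LUN (drive E: here).
    admsnap flush -o E:

    # Any management host: Navisphere CLI talks to the storage system
    # (through SP ss1_spa) and starts a session on source LUN 20.
    navicli -h ss1_spa snapview -startsession backup_session -lun 20

    # Secondary server: admsnap scans the buses and maps the snapshot
    # device for the new session.
    admsnap activate -s backup_session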
admsnap

The admsnap utility is an executable program that you can run
interactively or with a script to manage clones and snapshots. The
admsnap utility resides on the servers connected to the storage
system with the SnapView driver.
The admsnap utility runs on the following server platforms:
◆ Hewlett Packard HP-UX
◆ IBM AIX (RS/6000 and RS/6000 SP servers)
◆ Linux (32-bit Intel platform, 64-bit AMD processor Linux, 64-bit
Intel Xeon processor, and 64-bit Intel Itanium processor)
Note: Separate admsnap installation packages are available for the 32-bit
Intel platform, 64-bit AMD processor Linux/64-bit Intel Xeon processor,
and the 64-bit Intel Itanium processor. The admsnap packages for the
64-bit AMD processor Linux and the 64-bit Intel Xeon processor are the
same. For minimum supported Linux kernel revisions for each platform,
refer to the Admsnap Release Notes.
◆ Microsoft Windows
◆ Novell NetWare
◆ SGI IRIX
◆ Sun Solaris
◆ VMware® ESX Server™
For the supported versions of these servers/operating systems, refer
to the most up-to-date release notes for SnapView and admsnap.
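Because admsnap is an executable, it lends itself to scripting. The
fragment below is a hypothetical sketch of a backup wrapper for a
Linux or UNIX secondary server; the session name, device, and
mount point are assumptions, and command options should be
checked against the release notes and CLI reference for your
admsnap revision.

    #!/bin/sh
    # Hypothetical backup wrapper run on the secondary server.
    SESSION=nightly_backup    # session already started on the production side
    MOUNT=/mnt/snap           # mount point for the snapshot

    # Map the snapshot device to the started session.
    admsnap activate -s "$SESSION" || exit 1

    # Mount the snapshot device (device name is site-specific), back it up.
    mount /dev/sdc1 "$MOUNT"
    tar cf /backup/nightly.tar "$MOUNT"

    # Unmount and unmap; writes made to the snapshot are then discarded.
    umount "$MOUNT"
    admsnap deactivate -s "$SESSION"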
Event Monitor
Use the Event Monitor to monitor events specific to SnapView. The
Event Monitor is an enterprise tool that supports centralized or
distributed monitoring of storage systems in a heterogeneous
environment. The Event Monitor software consists of two distinct
parts: the Event Monitor User Interface (UI) and the Event Monitor.
The Event Monitor user interface (UI) is part of Navisphere Manager
and runs on the web browser. The user interface provides you with
an intuitive tool to set up responses for events and to choose which
storage systems to observe. The user interface lets you customize a
configuration to use any of the supported notification methods. You
can easily configure it to email, page, or send an SNMP trap to an
industry-standard event-management tool. The user interface need
only be used when setting up configurations or viewing the Event
History log.
Event Monitor resides on both Navisphere SP Agent and Host Agent
and is available on many operating systems. Once configured, the
Event Monitor runs continuously as a service or daemon, observing
the state of all specified storage systems and notifying you when
selected events occur.
To configure Event Monitor for SnapView, refer to the online help
Table of Contents entry, Monitoring and responding to events, or to the
EMC Navisphere Manager Administrator's Guide.
MirrorView
If a LUN is a MirrorView™ primary or secondary image, you cannot
create a clone group for that image. Similarly, if a LUN is a member of
a clone group as the source or clone, it cannot serve as a MirrorView
primary or secondary image.
If the MirrorView/Synchronous option is installed, you can create a
snapshot of the primary or secondary image. However, we
recommend that you take a snapshot of a mirror's secondary image
only if the image's state is either Synchronized or Consistent. If the
image is Synchronizing or Out-of-Sync, the snapshot's data will not
be useful.
If the MirrorView/Asynchronous option is installed, you can create a
snapshot of the primary or secondary image. However, we
recommend that you take a snapshot of a mirror's secondary image
only if the last update started has completed successfully. If the
update did not complete successfully, for example, the image
fractured or the update is still in progress, the snapshot's data will not
be useful.
SAN Copy
You can use SnapView with SAN Copy™ software to create a
snapshot or a clone of the destination LUN, so that the SnapView
replica can be put in the secondary server storage group, rather than
the SAN Copy destination. This allows the SAN Copy destination to
maintain consistency with its source, and be available on an ongoing
basis for incremental updates. Keep in mind that SAN Copy tracks
server writes to the SAN Copy source LUN (from the production
server); but SAN Copy does not track server writes to the SAN Copy
destination LUN (from the secondary server).
SnapView servers
SnapView requires at least two servers: one server (called the
production server) contains the LUN you want to copy, and another
server (called the secondary server) lets you view the clone or
snapshot. You can have multiple secondary servers.
The production server:
◆ Runs the customer applications
◆ Owns the source LUN
The secondary server (or any other server):
◆ Owns the clone or snapshot
◆ Reads from and writes to the fractured clone or activated
snapshot
◆ Performs secondary tasks using the clone or snapshot or an
independent analysis (such as, backup, decision support, or
revision testing)
Note: You can configure a clustered server to access a source LUN, but not
both the source LUN and its clone or snapshot. Only a server outside the
cluster can access the clone or snapshot.
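The division of labor between the two servers can be sketched as a
short command sequence. The SP host name, clone group name, clone
ID, and drive letter below are illustrative only, and the exact flags
should be confirmed in the EMC SnapView Command Line Interface
Reference.

    # Production server: flush buffered data for the source LUN before
    # fracturing, so the clone captures a usable image.
    admsnap flush -o F:

    # Any management host: fracture the clone to break it from its source.
    navicli -h ss1_spa snapview -fractureclone -name payroll_group -cloneid 0100000000000000

    # Secondary server: scan for the fractured clone device, then run the
    # backup, decision-support, or revision-testing work against it.
    admsnap clone_activate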
Figure: Sample SnapView configuration. A NetWare, UNIX, or
Windows client runs the Manager GUI in a web browser over the
LAN; the production and secondary servers run admsnap and
Navisphere CLI and connect to the storage system (running the
SnapView driver) over Fibre Channel or iSCSI. The production
server's storage groups contain source LUNs 1 and 2 (database log
and files) and source LUN 7 (payroll files); the secondary server's
storage groups contain snapshots of LUNs 1 and 2 for a backup
application and a clone of LUN 7 for application testing. (EMC2419)
SnapView limits
Table 1-2 lists the maximum SnapView limits for your storage system.
Note: A metaLUN is a single entity; therefore, it counts as one LUN against
your clone or snapshot limits. For example, if a metaLUN is composed of five
LUNs and you create a clone of this metaLUN, it counts as one clone, not
five. SnapView also supports the new, larger LUNs that FLARE™ supports
(refer to the FLARE release notes).
Table 1-2 SnapView limits

Clones per storage system:
• CX700 or CX3 model 80: 200
• CX600 or CX3 model 40: 100
• CX500 series or CX3 model 20: 100
• CX400: 50
• CX300: 100
• AX100, AX150: Not supported

The table also lists per-model limits for clone groups, snapshots,
SnapView sessions, and reserved LUNs; refer to the release notes for
your software revision for these values.
Table 1-3 SnapView basic storage tree icons: Images and descriptions

Storage System icon: Container for the storage system and all of its
components. When the storage system is working normally, the
software displays this icon.
• SnapView > Start SnapView Session: Start a point-in-time copy of
a source LUN(s).
• SnapView > SnapView Summary: Display the status of any
snapshot and session for the selected storage system.
• SnapView > Create Clone Group: Designate a source LUN that
you want to clone at some time.
• SnapView > Clone Feature Properties: Allocate and deallocate
clone private LUNs and globally enable the Protected Restore
feature.

Reserved LUN Pool icon: Container for SP A's or SP B's reserved
LUN pool, which consists of any reserved LUNs owned by the
selected SP.
• Configure: Add reserved LUNs to or remove them from an SP's
LUN pool.
• Properties: Display the properties of SP A's or SP B's reserved
LUN pool.

Clone Group icon: Container for clone IDs.
• Add Clone: Create a relationship between the source and the LUN
you are adding, which will become the clone LUN.
• Destroy Clone Group: Destroy the relationship between the
source LUN and the clone group.

Clone ID icon: Container for clone LUNs.
• Synchronize: Update the clone LUN with the data on its source
LUN.
• Reverse Synchronize: Replace the data on the source LUN with
the data on the clone.
• Remove: Break off the clone from its source LUN and remove it
from the clone group. The clone LUN becomes a conventional
LUN. Adding the clone back to the clone group requires an initial
synchronization.

Snapshots icon: Container for snapshot LUNs (Snapshot Names are
individual snapshots) and for the reserved snapshot container, which
lists the snapshots reserved for other applications, such as the SAN
Copy or MirrorView/Asynchronous software, and appears only if
that software is installed. You cannot perform any operations from
the container icon itself.
• SAN Copy > Create Session from LUN: Create a full SAN Copy
session with the selected snapshot as the source LUN. This option
is available only if the SAN Copy software is installed.

Reserved Snapshots icon: Container for all the reserved snapshots
running on the storage system (Reserved Snapshots are individual
snapshots reserved for another application). This container appears
only if the SAN Copy or MirrorView/Asynchronous software is
installed. You cannot perform any operations from this icon.

Sessions icon: Container for all SnapView sessions (Sessions are
individual SnapView sessions) and for the reserved sessions
container. This icon appears even when no sessions are active in the
storage system. You cannot perform any operations from the
container icon itself.
• Start Rollback: Replace the data on the source LUN with the data
on the SnapView session.

Reserved Sessions icon: Container for all the reserved sessions
running on the storage system (Reserved Sessions are individual
sessions reserved for another application, such as the incremental
SAN Copy or MirrorView/Asynchronous software). This container
appears only if that software is installed. You cannot perform any
operations from this icon.

LUN icon: LUNs in the storage group node and the SP node.
• SnapView > Create Snapshot: Create a virtual LUN that allows a
secondary server to view a SnapView session.
• SnapView > Create Clone Group: Create a clone group with the
selected LUN as the source.

MetaLUN icon: A type of LUN whose capacity is the combined
capacities of all the LUNs that compose it.
• SnapView > Create Snapshot: Create a virtual LUN that allows a
secondary server to view a SnapView session.
• SnapView > Start SnapView Session: Start a point-in-time copy of
the selected source LUN.
• SnapView > Create Clone Group: Create a clone group with the
selected LUN as the source.
Setting up SnapView
Setting up clones
◆ Prerequisites for setting up clones...................................................2-2
◆ Overview of setting up SnapView to use clones ...........................2-3
◆ Allocating clone private LUNs ........................................................2-4
◆ Deallocating/reallocating clone private LUNs .............................2-6
◆ Creating a clone group ......................................................................2-8
◆ Adding a clone to a clone group....................................................2-10
Setting up snapshots
◆ Prerequisites for setting up snapshots ..........................................2-13
◆ Overview of setting up SnapView to use snapshots...................2-15
◆ Reserved LUN pool with SnapView .............................................2-16
◆ Starting a SnapView session ...........................................................2-18
◆ Creating a snapshot .........................................................................2-25
◆ Adding a snapshot to a storage group..........................................2-28
◆ For a secondary server to access the clone LUN, the clone must
be assigned to a storage group (but you cannot read the clone
until you fracture it). The storage group must be connected to the
secondary server that will access the clone. You must assign the
clone LUN to a storage group other than the storage group that
holds the source LUN. EMC supports placing a clone in the same
storage group as its source LUN only if you use Replication
Manager or Replication Manager/SE to put the clone in the
storage group. This software provides same host access to the clone
and the source LUN. For information on using these software
products, refer to the documentation for the product.
If you have a VMware ESX Server, the clone and source LUNs
must be accessed by different virtual machines, unless the virtual
machine is running one of the software programs that support
same host access.
◆ Configure Event Monitor, if you want to be notified of
SnapView events. Event Monitor is part of the Navisphere Agent
and is available on many operating systems. Once configured, the
Event Monitor runs continuously as a service or daemon,
observing the state of all specified storage systems and notifying
you when selected events occur. To configure Event Monitor for
SnapView, refer to the online help Table of Contents entry,
Monitoring and responding to events, or to the EMC Navisphere
Manager Administrator's Guide.
Note: To learn about the possible clone states after you add a clone to a
clone group, refer to “Clone states” on page 3-2.
Note: You must allocate one clone private LUN for each SP before you can
create a clone group.
Note: You should bind clone private LUNs in a RAID group that normally
does not see heavy I/O.
Note: You do not specify which SP the clone private LUN is assigned to;
Navisphere does this for you.
Note: When you select the Allow Protected Restore option, the
SnapView driver automatically allocates 8 MB in additional memory per
SP. The additional memory is fixed and is used to copy the data from the
clone LUN to the source LUN in order to satisfy server write requests to
the source LUN. This additional memory counts against the total
memory budget for storage-system-based drivers.
7. Click OK, and then Yes to confirm the allocation of the selected
clone private LUNs.
Note: You do not specify which SP the clone private LUN is assigned to;
Navisphere does this for you.
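You can also allocate clone private LUNs from the CLI. The sketch
below assumes two already-bound LUNs (numbers 40 and 41) and an
SP host name of ss1_spa; the -changeclonefeature option spelling is
an assumption to confirm in the EMC SnapView Command Line
Interface Reference.

    # Allocate LUNs 40 and 41 as the clone private LUNs (one per SP;
    # the software chooses the SP assignment; option name approximate).
    navicli -h ss1_spa snapview -changeclonefeature -AllocateCPL 40 41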
What next? To continue setting up clones, continue to the next section to create a
clone group.
Note: If you have not allocated two clone private LUNs, you must allocate
them before you create a clone group.
Eligible LUNs

Any source LUN is eligible to be cloned, except for the following:
◆ Hot spare LUNs
◆ Remote mirror LUNs (LUNs participating as either a primary or
secondary image)
◆ Clone LUNs (LUNs participating in any clone group as either a
source LUN or a clone LUN)
◆ Snapshot LUNs
◆ Private LUNs (LUNs reserved as clone private LUNs, in a
reserved LUN pool, or in a write intent log)
Note: You set the quiesce threshold on a per clone group basis. Any clone
you add to this clone group will retain this quiesce value. Valid values
are 10 – 3600 seconds. The default is 60 seconds.
If you created the clone group by right-clicking a source LUN icon, the
list includes only the selected source LUN.
7. Click OK to create the clone group, and then Yes to confirm the
creation of the clone group.
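From the CLI, the equivalent operation looks roughly like the sketch
below; the group name, source LUN number, SP host name, and
threshold value are examples, and the flag spellings should be
verified against the CLI reference.

    # Create a clone group with source LUN 20 and the default 60-second
    # quiesce threshold (valid values are 10-3600 seconds).
    navicli -h ss1_spa snapview -createclonegroup -name db_group -luns 20 -quiescethreshold 60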
What next? Continue to the next section to add a clone to the clone group.
Note: For Windows servers - You must delete all file entries in the recycle
bin of the source LUN before adding the clone to the clone group. If you do
not delete these entries, the clone you are adding will copy them byte for
byte.
Note: When you add a clone to the clone group, with the Initial Sync
Required property selected, the clone state is Synchronizing. The software
transitions the clone to Synchronized or Consistent state only after the initial
synchronization is complete.
Note: If the selected LUN has data and you selected the Initial Sync
Required option, SnapView will destroy the current contents of the
LUN.
If the clone LUN you are adding does not belong to the same SP as its
source LUN, the clone LUN will trespass over to the source LUN’s SP.
The Trespassed LUNs dialog box reports any LUNs that have different
default and current owners. To open this dialog box, select Tools from
the main menu and click Trespassed LUNs Status.
Note: The Initial Sync Required option is necessary unless your source
LUN does not contain any data, for example, if you bind a source LUN and
have not yet added it to a storage group. If you select Initial Sync Required
with an empty source LUN, resources are wasted synchronizing the empty
source LUN to the clone.
Note: EMC recommends that you do not use a High synchronization rate
on a storage system with a single SP.
8. Click Apply to add a clone to the clone group, and then click Yes
to confirm the addition of the clone. A Success: Add Clone dialog
box opens.
9. Click OK. The application places a Clone icon under the
associated Clone Group Name icon.
10. Wait for the synchronization to complete and verify that the clone
is in a Consistent or Synchronized state by right-clicking the
Clone Group Name icon that contains the clone you added; then
select Properties, and click the appropriate Clone tab.
To create multiple clones of this source LUN, repeat steps 2
through 10.
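A rough CLI equivalent of these steps is sketched below; the names,
LUN number, and option spellings are assumptions to verify against
the CLI reference.

    # Add LUN 35 to the clone group as a clone, with an initial
    # synchronization at a medium synchronization rate.
    navicli -h ss1_spa snapview -addclone -name db_group -luns 35 -issyncrequired 1 -syncrate medium

    # Check the clone state; wait until it reports Synchronized or
    # Consistent before fracturing.
    navicli -h ss1_spa snapview -listclone -name db_group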
You cannot expand the capacity of a LUN or metaLUN that is
participating in a clone group until you remove the clone from the
clone group and destroy the clone group. Neither the production nor the
secondary server can access this added capacity until the expansion is
complete and you perform some additional operations. For detailed
information on expanding a LUN or metaLUN, see the online help or
the EMC Navisphere Manager Administrator's Guide.
What next? To start using the clone or for information on possible clone states,
refer to Chapter 3, “Using SnapView”.
Note: VMware ESX Servers must activate the snapshot before adding it to
a storage group.
Note: You must complete the prerequisites for setting up snapshots, as listed
on page 2-13, before you can perform any of the following procedures.
Note: You can create a snapshot before starting a session but the snapshot
has no use until you start a session on it. A secondary server can then
activate the snapshot to the session.
Note: With SnapView version 02.03.xxx (or higher), the snapshot cache is
referred to as the reserved LUN pool. The reserved LUN pool works with
SnapView in the same way as the snapshot cache. However, unlike the
snapshot cache, which was used solely for SnapView, the reserved LUN pool
shares its LUN resources with other applications such as SAN Copy and
MirrorView/Asynchronous. The only visible change in the Navisphere user
interface (UI) is in the tree structure. The reserved LUN pool is now
structured directly under the Storage System icon instead of the SnapView
icon.
The reserved LUN pool consists of one or more private LUNs and
works with SnapView sessions and snapshots. The reserved LUN
pool stores the original source LUN data chunks that have been
modified since the start of the session. For any one session, the
contents of a reserved LUN(s) and any unchanged source LUN(s)
blocks compose the snapshot.
Server writes made to an activated snapshot are also stored on a
reserved LUN in the SP’s LUN pool. When you deactivate the
snapshot, the reserved LUN space is freed and all server writes are
destroyed.
Each SP has its own reserved LUN pool, and before starting a session,
the reserved LUN pool must contain at least one LUN for each source
LUN that will be starting a session. You can add any LUNs that are
available to either SP’s reserved LUN pool. Each SP manages its own
LUN pool and assigns a separate reserved LUN (or multiple LUNs)
to each SnapView source LUN. Multiple sessions of a single source
LUN will share the same reserved LUN or LUNs.
If the reserved LUN fills up and the SP’s LUN pool has no additional
LUNs, the software automatically terminates the session that is trying
to allocate reserved LUN space, logs an error, releases the reserved
LUN(s) used by this session, and returns them to the SP's LUN pool.
The software also destroys all copy-on-first-write data stored in the
reserved LUN pool for that session. At this point, the snapshot
becomes inactive and any server that has mounted volumes on the
snapshot will receive I/O errors and lose access.
If you have multiple sessions of a single source LUN and the reserved
LUN fills up, when the production server modifies a chunk on the
source LUN, resulting in a copy-on-first-write, every session that has
the same chunk will be terminated if no additional LUNs are
available in the SP’s LUN pool. Other sessions that did not have this
chunk will continue to run and use the reserved LUN space that the
terminated sessions were using.
Note: Before starting a SnapView session, the SP of the source LUN(s) must
contain at least one free (unallocated) LUN in its reserved LUN pool.
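Reserved LUN pool configuration is a general Navisphere operation.
A sketch follows; the LUN numbers and the reserved -lunpool option
names are assumptions to confirm in the Navisphere CLI
documentation for your FLARE revision.

    # Add already-bound LUNs 50 through 53 to the reserved LUN pool so
    # an SP can assign them to source LUNs as sessions start.
    navicli -h ss1_spa reserved -lunpool -addlun 50 51 52 53

    # Display the pool, including free (unallocated) reserved LUNs.
    navicli -h ss1_spa reserved -lunpool -list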
Optional modes

When you start a SnapView session, you can specify that the session
run in persistent and/or consistent mode.
Note: A SnapView session can run in both persistent and consistent mode.
For AX-series storage systems, persistent mode is always enabled and
consistent mode is not supported.
Note: For information on supported failover software for the storage system
you are managing, refer to the release notes for SnapView and admsnap.
Multiple sessions

If you have an AX-series storage system, you can start only one
SnapView session per source LUN(s).
If you have a CX-series storage system, you can start up to eight
concurrent sessions per source LUN(s). However, each snapshot must
be activated to a different SnapView session and accessed by different
servers. For example, if you create three snapshots and start eight
sessions for a single source LUN, three different servers can activate
three of the snapshots to three different sessions.
The secondary server can also use the deactivate and activate
functions to change the focus of a snapshot from one session to
another. You must deactivate (unmap) a snapshot before you can
activate (map) it to another session. A secondary server cannot
activate to another session until the server deactivates from the
current session. Once the secondary server deactivates the snapshot
from the session, you can activate and deactivate among the
remaining inactive sessions.
Refer to Chapter 3 to activate or deactivate the snapshot.
Note: The eight-session limit includes SnapView sessions and any reserved
sessions used in another application such as SAN Copy and
MirrorView/Asynchronous.
With some operating systems, you may need to shut down the
application to flush the data. Specific operating systems have
different requirements.
2. From any client that is managing the storage system, in the
Enterprise Storage dialog box, click the Storage tab.
a. Navigate to the icon of the source LUN(s) on which you want
to start a session and select SnapView > Start Session.
b. Enter a unique name for the session. If the name you specify is
already assigned to another session, an error message appears.
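For scripted use, the same step can be issued from the CLI, as in the
sketch below; the session name and LUN number are examples, and
the -persistence and -consistent switches should be verified for your
revision (AX-series systems always run persistent and do not support
consistent mode).

    # Start a session on source LUN 20 in persistent and consistent mode.
    navicli -h ss1_spa snapview -startsession db_session -lun 20 -persistence -consistent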
What next? What you do next depends on whether you have created a snapshot
to map (activate) to this session.
◆ If you have not created a snapshot, continue to the next section.
◆ If you have created a snapshot but have not added the snapshot
to a storage group, go to “Adding a snapshot to a storage group”
on page 2-28.
Creating a snapshot
A snapshot is a virtual LUN that, when activated, allows a
secondary server to view a SnapView session. An active snapshot is a
composite of a source LUN and reserved LUN data that lasts until
you destroy the snapshot. You can create a snapshot before or after
you start a session; however, the snapshot has no use until a
secondary server activates it to a session.
If the storage system loses power while the session is running, and
the session is not running in persistent mode, the session is lost and
the snapshot becomes inactive. If the session is running in persistent
mode, both the session and the snapshot survive the storage-system
power failure.
Note: Unless you have additional software that supports same host access,
you must assign the snapshot to a storage group other than the storage group
that holds the source LUN(s). You also must assign multiple snapshots of the
same source LUN(s), to different storage groups. For information on software
that supports same host access, refer to the “Prerequisites for setting up
snapshots” on page 2-13.
Multiple snapshots

If you have an AX-series storage system, you can create only one
snapshot per source LUN(s). If you have a CX-series storage system,
you can create up to eight snapshots per source LUN(s). However,
each snapshot must be activated to a different SnapView session and
accessed by different servers. For example, if you create three
snapshots and start eight sessions for a single source LUN, three
servers can each activate a snapshot to a different session. Once the
servers activate a snapshot to a session, the session is not available to
another snapshot until it is deactivated.
You can also use the deactivate and activate functions to change the
focus of a snapshot from one session to another. You must deactivate
(unmap) a snapshot before you can activate (map) it to another
session. For example, if you start eight sessions and create one
snapshot for a single source LUN, a secondary server cannot activate
to another session until the server deactivates from the current
session. Once the secondary server deactivates the snapshot from the
session, you can activate and deactivate between the remaining
sessions. Refer to Chapter 3 to activate or deactivate the snapshot.
To create a snapshot

1. From any client that is managing the storage system, in the
Enterprise Storage dialog box, click the Storage tab.
2. Navigate to the icon for the source LUN(s) on which you want to
create a snapshot.
3. Right-click the source LUN icon and select Create Snapshot.
Note: If the Create Snapshot option is unavailable, be sure that you have
not exceeded the eight-snapshot limit per source LUN or the snapshot
limit per storage system. For a complete list of snapshots on the
storage-system, right-click the storage system icon and select SnapView
> SnapView Summary. These limits include any reserved snapshots
used for another application such as SAN Copy and
MirrorView/Asynchronous.
Note: If you have a VMware ESX Server, do not select a storage group.
You must activate the snapshot before you add it to a storage group.
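A CLI sketch of the same operation follows; the LUN number and
snapshot name are examples, and the syntax should be confirmed in
the CLI reference.

    # Create a snapshot of source LUN 20; it remains unused until a
    # secondary server activates it to a session.
    navicli -h ss1_spa snapview -createsnapshot 20 -snapshotname db_snap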
What next? If you have not started a session, go to “Starting a SnapView session”
on page 2-18.
To start using your snapshot, go to “Activating a snapshot” on
page 3-21.
Note: If the server that will have access to the snapshot is already connected
to a storage group, add the snapshot to that storage group. If you create a
new storage group for the snapshot and then connect the server to the new
storage group, the software removes the server from the original storage
group and it will no longer have access to the LUNs in that storage group.
Note: If you have a VMware ESX Server, you must activate the snapshot
before you add it to a storage group.
What next? To start using the snapshot, see Chapter 3, “Using SnapView”.
To set up clones, see “Setting up SnapView to use clones” on page 2-2.
Using SnapView
Using clones
◆ Clone states .........................................................................................3-2
◆ Fracturing a clone...............................................................................3-5
◆ Synchronizing a fractured clone .................................................... 3-11
◆ Reverse synchronizing a fractured clone......................................3-13
◆ Removing a clone from a clone group ..........................................3-18
◆ Destroying a clone group................................................................3-19
◆ Clone and source LUN trespasses .................................................3-20
Using snapshots
◆ Activating a snapshot ......................................................................3-21
◆ Deactivating a snapshot ..................................................................3-23
◆ Rolling back a SnapView session...................................................3-25
◆ Stopping a SnapView session.........................................................3-31
◆ Destroying a snapshot .....................................................................3-32
◆ Snapshot/SnapView session and source LUN trespasses .........3-33
Using clones
This section describes how to use clones. It also describes how
SnapView handles clone and source LUN trespasses and bad blocks
of data.
After you have added a clone to a clone group, you can fracture it (see
page 3-5) to make it available to another server, and then do any of
the following:
◆ Synchronize the fractured clone (see page 3-5)
◆ Reverse synchronize the fractured clone (see page 3-13)
◆ Remove the fractured clone from the clone group (see page 3-18)
What next? To learn about the possible states of a clone, continue to the next
section. To start using the clone, refer to one of the referenced pages
listed above.
Clone states

Each clone you add to a clone group has its own state that indicates if
it contains usable data. The possible clone states are: Reverse
Synchronizing, Reverse Out-of-Sync, Synchronized, Synchronizing,
Out-of-Sync, or Consistent. Depending on the state of the clone, some
operations may be unavailable (refer to Table 3-1 on page 3-3).
When you remove a clone from the clone group, it is no longer
associated with its source LUN or clone group. It retains the copied
data and becomes a conventional (regular) LUN.
Note: Table 3-1, on page 3-3, lists when the clone is available for server I/O.
The source LUN you specify when creating a clone group is available for
server I/O during any clone state except for a Reverse Out-of-Sync state. Any
server writes made to the source LUN during a reverse synchronization are
copied to the clone. If you do not want incoming source writes copied to the
clone during a reverse synchronization, you must select the Protected
Restore feature in the Add Clone or Clone Properties - Clone LUN tab
dialog box before issuing a reverse synchronization. However, before you can
select the Protected Restore feature, it must be globally enabled by selecting
the Allow Protected Restore option in the Clone Features Properties dialog
box.
Table 3-1 Clone states

Consistent
Description: A clone was in a Synchronized state and received
incoming server writes to the source (if the clone is unfractured) or to
the clone (if the clone is fractured). A clone in a Consistent state is
usable but may not contain the most up-to-date information, since
writes made to the source may not have been copied to the clone.
Cause of state:
• A clone is fractured while in a Consistent or Synchronized state.
• A clone is unfractured and has yet to transition to a Synchronized
state.
Permitted operations:
• Fracture (only if the clone is not already fractured)
• Remove (only if the clone is fractured)
• Synchronize (only if the clone is fractured)
• Reverse Synchronize (only if the clone is fractured)
Clone available for I/O: Yes, if the clone is fractured.

Reverse Out-of-Sync
Description: A clone was in the process of a reverse synchronization
but failed; therefore, the source is unusable and another reverse
synchronization operation is recommended.
Cause of state: A reverse synchronization operation failed to
complete successfully.
Permitted operations:
• Reverse Synchronize
• Remove
• Fracture (only if the clone was fractured by the system due to an
error in the software or storage system; refer to the Event Log for
the cause of the system fracture)
Clone available for I/O: Yes.
Fracturing a clone
When you fracture a clone or a set of clones (consistent fracture), you
split the clone(s) from its source LUN to make it available to a
secondary server. A secondary server can access the fractured
clone(s) if the clone belongs to a storage group that is connected to
the secondary server. The secondary server can then use the clone for
operations such as system backups, data modeling, or software
application testing.
Note: Unless you have additional software that supports same host access,
you must assign the clone LUN to a storage group other than the storage
group that holds the source LUN(s). You also must assign multiple fractured
clones, of the same source LUN(s), to different storage groups. For
information on software that supports same host access, refer to the
“Prerequisites for setting up clones” on page 2-2.
Consistent fracture

A consistent fracture is when you fracture more than one clone at the
same time in order to preserve the point-in-time restartable copy
across the set of clones. The SnapView driver will delay any I/O
requests to the source LUNs of the selected clones until the fracture
has completed on all clones (thus preserving the point-in-time
restartable copy on the entire set of clones).
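From the CLI, a consistent fracture is requested in a single call so the
driver can hold source I/O while all of the listed clones fracture; the
command name and the group/ID pairs below are assumptions to
verify against the CLI reference.

    # Atomically fracture one clone in each of two clone groups,
    # preserving a restartable point-in-time copy across the set.
    navicli -h ss1_spa snapview -consistentfractureclones -CloneGroupNameCloneId db_group 0100000000000000 log_group 0100000000000000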
Note: To verify if an error occurred, refer to the event log and check if the
value for SourceMediaFailure or CloneMediaFailure is set to TRUE. If
an error did occur, you must correct the failure, then re-issue the
synchronization or reverse synchronization operation. If the error
persists, contact your EMC service provider.
To fracture a clone

Note: For additional information on the admsnap commands described
below, refer to the EMC SnapView Command Line Interfaces Reference.
With some operating systems, you may need to shut down the
application to flush the data. Specific operating systems have
different requirements.
2. Using Navisphere from any client that is managing the storage
system, do the following:
a. From the Storage tab of the Enterprise Storage dialog box,
navigate to the Clones icon, and then to the Clone Group
Name icon(s).
b. Right-click the Clone Group Name icon that contains the
clone you want to fracture and select Properties.
c. Select the Clone tab for the clone you want to fracture and
verify that its state is Synchronized. If its state is not
Synchronized, you must wait until it is Synchronized before
closing the dialog box and continuing to the next step. If you
are fracturing more than one clone (referred to as a consistent
fracture), verify the state of each clone you plan on fracturing.
Note: The state of the clone may also be Consistent. Refer to “When
to fracture a clone LUN” on page 3-7.
Note: If you are fracturing more than one clone, the clones you want
to fracture must be within different clone groups.
c. Power on the virtual machine and scan the bus at the virtual
machine level. For virtual machines running Windows, you
can use the admsnap activate command to rescan the bus.
Note: If the same chunk of data is modified on the source LUN more than
once, only the last modification is copied to the clone.
During a synchronization
While the clone is synchronizing, you:
◆ cannot remove the clone in a Synchronizing state.
◆ cannot perform a reverse synchronization with any other clone in
the clone group.
Note: You must explicitly follow the procedure for synchronizing a clone to
avoid data loss. For additional information on the admsnap commands
described below, refer to the EMC SnapView Command Line Interfaces Reference.
Note: If you modify the same data chunk on the clone more than once, the
software copies only the last modification to the source LUN.
Note: If you check the Use Protected Restore feature, after the reverse
synchronization has completed, SnapView fractures the clone that initiated
the reverse synchronization.
Note: Before you can select the Protected Restore feature, you must globally
enable it by selecting the Allow Protected Restore option in the Clone
Features Properties dialog box. When you select this option, the SnapView
driver automatically allocates 8 MB in additional memory per SP. The
additional memory is fixed and is used to monitor modified blocks on the
source LUN, in order to prevent these blocks from being overwritten by the
clone during a reverse synchronization. This additional memory counts
against the total memory budget for storage-system-based drivers.
Note: You must explicitly follow the procedure for reverse synchronizing a
clone to avoid data loss. For additional information on the admsnap
commands described below, refer to the EMC SnapView Command Line
Interfaces Reference.
Note: To use the Protected Restore feature, you must select it from the
Clone Properties - Clone LUN tab before initiating a reverse
synchronization.
Note: If only minor differences exist between the clone and its source,
the software may not have time to transition the state of the clone to
Reverse Synchronizing. This means that the state of the clone is still
displayed as Consistent in Navisphere, even though the reverse
synchronization was successful.
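A hedged sketch of a protected reverse synchronization from the CLI
follows; it assumes the Allow Protected Restore feature is already
enabled globally, and the drive letter, names, and option spellings are
approximations to check against the CLI reference.

    # Production server: stop applications and flush the source LUN
    # before restoring from the clone.
    admsnap flush -o G:

    # Replace the source LUN contents with the clone's data; protected
    # restore keeps incoming source writes from being copied to the clone.
    navicli -h ss1_spa snapview -reversesyncclone -name db_group -cloneid 0100000000000000 -useprotectedrestore 1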
The application removes the clone from its clone group. This
clone is now a conventional LUN and it no longer counts against
the clone and mirror limits.
Using snapshots
This section describes how to use and destroy snapshots. This section
also describes how SnapView handles snapshot and source LUN(s)
trespasses.
After you have started a SnapView session, you can do any of the
following:
◆ Activate a snapshot (see the next section on this page)
◆ Deactivate a snapshot (see page 3-23)
◆ Recover source LUN data with the rollback feature
(see page 3-25)
◆ Stop a session (see page 3-31)
Activating a snapshot
The Navisphere Manager activate option maps the snapshot to a
SnapView session. When you administer the activate option from
Navisphere, you must reboot the secondary server, or use some other
means, so that this server recognizes the new device created when
you started the session. When you execute the activate command
from the admsnap server utility, the command scans the secondary
server’s system buses for storage-system devices and determines if
any device is part of a SnapView session. To use admsnap to activate
a snapshot, refer to the EMC SnapView Command Line Interfaces
Reference.
A secondary server can activate a snapshot to any SnapView session
on the same source LUN as the snapshot. Once a secondary server
activates a snapshot to a session, this server can write to the activated
snapshot. The software stores all writes made to the snapshot in the
reserved LUN pool. If the secondary server deactivates the snapshot
from the session, the software destroys all writes made to the session.
You can create up to eight snapshots and activate (map) each
snapshot to a single session provided that there is a different server
for each snapshot. Only one snapshot at a time can be activated to a session.
Note: The production and secondary servers must be running the same
operating system (not a requirement for raw data access).
To activate a snapshot
1. From the secondary server in the Enterprise Storage dialog box,
click the Storage tab.
2. Navigate to the snapshot you want to activate and select Activate
Snapshot.
3. In Available Sessions, select the session name to which you want
to map (activate) the snapshot.
4. Click OK to activate the snapshot to the session you selected.
If the action is successful, Navisphere closes the dialog box.
Otherwise, Navisphere displays an error message and the dialog
box remains open.
5. If you do not have a VMware ESX Server - Reboot the secondary
server, or use some other means, such as the admsnap activate
command, so that it recognizes the new device created when you
started the session.
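On the secondary server, the admsnap activate command performs
the bus scan that a reboot would otherwise provide; the session name
below is an example.

    # Scan the secondary server's buses and map the snapshot device for
    # the named session; admsnap reports the device name it finds.
    admsnap activate -s db_session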
Deactivating a snapshot
The deactivate function unmaps a snapshot from a SnapView session
and destroys any secondary server writes made to the snapshot. The
snapshot and session still exist but are not visible from the secondary
server.
The secondary server must deactivate a snapshot before mapping it
to another SnapView session. For example, if you start eight
SnapView sessions on a single source LUN and create one snapshot
for this same source LUN, a secondary server can activate (map) only
one of the sessions at a time with the snapshot. If this secondary
server wants to activate its snapshot to one of the other seven
sessions, it must deactivate the snapshot and then activate it to
another session.
To deactivate a snapshot
1. From the production server, flush all cached data to the source
LUN(s) of the SnapView session by issuing the appropriate
command for your operating system.
• For a Windows server, use the admsnap flush command.
Note: For a Windows 2000 server, after issuing the admsnap flush
command, delete the drive letter.
• For Solaris, HP-UX, AIX, and Linux servers, unmount the file
system by issuing the umount command. If you are unable to
unmount the file system, issue the admsnap flush command.
The flush command flushes all cached data.
• For an IRIX server, the admsnap flush command is not
supported. Unmount the file system by issuing the umount
command. If you cannot unmount the file system, use the sync
command to flush cached data. The sync command reduces
the number of times you need to issue the fsck command on
the secondary server’s file system. Refer to your system's man
pages for sync command usage.
• On a Novell NetWare server, use the dismount command on
the volume to dismount the file system.
Note: Neither the flush command nor the sync command is a substitute
for unmounting the file system. Both commands only complement
unmounting the file system.
With some operating systems, you may need to shut down the
application to flush the data. Specific operating systems have
different requirements. For more information, see the product
release notes.
2. From the secondary server in the Enterprise Storage dialog box,
click the Storage tab.
3. Navigate to the snapshot you want to deactivate and select
Deactivate Snapshot.
Note: The Navisphere Manager deactivate function does not flush all
data on the secondary server. To flush I/O from this server, use the
admsnap deactivate command on this server or any
operating-system-specific commands to accomplish this task.
4. A message appears stating that you may need to flush I/O on the
server operating system that is viewing the snapshot. If you click
Cancel, no action is performed. If you click Yes, this deactivates
the snapshot you selected from the SnapView session and
destroys any writes made to the snapshot.
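To flush the secondary server's buffers and unmap the snapshot in
one step, you can use admsnap instead of the Manager function; the
session name and device below are examples, and the option
spellings should be checked against the CLI reference.

    # Secondary server: flush buffered writes for the snapshot device and
    # unmap it from the session; the storage system then discards the
    # writes made to the snapshot.
    admsnap deactivate -s db_session -o /dev/sdc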
Note: The rollback operation itself does not count against the eight-session
limit per source LUN. Starting a rollback recovery session will count as a
single session against this limit.
To start a rollback

Note: For Windows servers - To prevent data corruption during the rollback
operation, you should disable the indexing service and recycle bin on the
source LUN(s) of the session you will roll back.
Note: For a Windows 2000 server, after issuing the admsnap flush
command, delete the drive letter.
• For Solaris, HP-UX, AIX, and Linux servers, unmount the file
system by issuing the umount command. If you are unable to
unmount the file system, issue the admsnap flush command.
The flush command flushes all cached data.
• For an IRIX server, the admsnap flush command is not
supported. Unmount the file system by issuing the umount
command. If you cannot unmount the file system, use the sync
command to flush cached data. The sync command reduces
the number of times you need to issue the fsck command on
the secondary server’s file system. Refer to your system's man
pages for sync command usage.
• On a Novell NetWare server, use the dismount command on
the volume to dismount the file system.
Note: Neither the flush command nor the sync command is a substitute
for unmounting the file system. Both commands only complement
unmounting the file system.
With some operating systems, you may need to shut down the
application to flush the data. Specific operating systems have
different requirements.
3. If the session you want to roll back has an activated snapshot, and
you want to keep any server writes made to this snapshot, you
must flush the data by doing one of the following from the
secondary server:
• For a Windows operating system, use the admsnap flush
command.
Note: A source LUN(s) can have only one session rolling back at a
time. There is no limit to the number of concurrent rollback
operations you can have on a storage system.
Note: Once you start a rollback operation, you cannot stop it or the
session that is being rolled back.
Note: Be sure to verify that you have enough reserved LUNs in the SP’s
LUN pool before resuming I/O to the source LUN(s). Refer to
“Allocating reserved LUN pool space” on page 3-25.
Note: Server writes made to the source LUN(s) while the rollback is in
progress will overwrite the data being rolled back.
Note: If you started a session on multiple source LUNs, you can select
any of the source LUNs to stop the session.
Destroying a snapshot
When you destroy a snapshot, the following is true:
◆ If the snapshot is inactive, the software destroys only the selected
snapshot.
◆ If the snapshot is active, a warning message appears indicating
that you should deactivate the snapshot before destroying it. If
you accept the warning message, the software deactivates the
snapshot, and destroys it (the snapshot) and any server writes
made to the snapshot.
Note: The Navisphere Manager deactivate function does not flush all
cached data on the secondary server. To flush I/O from this server, do not
accept the warning message; use the admsnap deactivate command on
this server or any operating-system-specific commands to accomplish
this task.
To destroy a snapshot
1. From any client that is managing the storage system, in the
Enterprise Storage dialog box, click the Storage tab.
2. Navigate to the snapshot you want to destroy.
3. Right-click the snapshot and select Destroy Snapshot.
4. In the confirmation dialog box, click Yes to destroy the snapshot.
The application removes the snapshot icon from the Snapshots
container in the Storage tree.
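The same operation should also be possible from Navisphere CLI. A
minimal sketch, assuming the snapview -rmsnapshot verb and using the
NWDataSnap snapshot name from the examples in this guide (verify the
verb and options in your CLI revision before relying on it):
navicli -h SP-servername snapview -rmsnapshot -snapshotname NWDataSnap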
Note: If your session is not running in persistent mode, it will not trespass to
the peer SP. The software destroys your session and deactivates any activated
snapshots.
Note: Once the failed SP is running, the production server must issue a
restore command in order to restore the proper source LUNs, sessions,
snapshots, and reserved LUNs back to their original SP (for the appropriate
restore command, refer to the documentation that shipped with your failover
software).
Note: For information on how SPs manage the reserved LUN pool, refer to
the latest revision of the EMC Navisphere Manager Administrator’s Guide.
Clone properties
◆ Displaying and modifying clone properties ..................................4-2
◆ Clone properties .................................................................................4-2
◆ Clone feature properties....................................................................4-3
◆ Clone group properties .....................................................................4-3
◆ Source LUN properties......................................................................4-4
Snapshot properties
◆ Displaying and modifying snapshot properties............................4-5
◆ Snapshot name properties ................................................................4-5
◆ SnapView session properties............................................................4-6
◆ Displaying status of all snapshots and SnapView sessions .........4-7
Note: To view the properties of the reserved LUN pool, refer to the latest
revision of the EMC Navisphere Manager Administrator’s Guide.
SnapView examples
Clones example
This section provides an example of how to set up and use clones.
Note: The server names, files, and applications used in this section are
intended for example purposes only.
Summary In this example, you are creating two clones to perform software
testing on a database file and its log file. Once you have completed
testing, you decide that you want to keep the modified data and
replace the data on the source LUNs with this modified data. To do
this, you start a reverse synchronization on the clone LUNs. The
reverse synchronization will replace the contents of the source LUN
with the contents of the clone LUN.
Operations overview
Note: The following table does not provide detailed steps for each task. It is
important that you refer to the “Prerequisites for setting up clones” on
page 2-2 and to the reference sections listed below before completing any
tasks.
1. Setting up LUNs to be used as clones
❑ From server ph12345 (production server), create LUNs. You will
need two LUNs that will become your clone LUNs (NWDataClone
and NWLogClone). These LUNs must be the same sizes as source
LUN NWData (Northwind.mdf) and source LUN NWLog
(Northwind.ldf), but they can be different RAID types.
❑ From server ph12345 (production server), assign the newly created
LUNs (NWDataClone and NWLogClone) to storage group
NW_DataLogTest and connect this storage group to server sh12345
(secondary server).
Reference: "Prerequisites for setting up clones" on page 2-2
2. Allocating clone private LUNs
❑ Create two LUNs that are at least 250000 blocks. These LUNs will
be used as clone private LUNs.
❑ From server ph12345 (production server), allocate the two LUNs
you just created as clone private LUNs.
Reference: "Allocating clone private LUNs" on page 2-4
3. Creating a clone group
❑ From server ph12345 (production server), select source LUN
NWData and create a clone group called NWDataCG.
❑ From server ph12345 (production server), select source LUN
NWLog and create a clone group called NWLogCG.
Reference: "Creating a clone group" on page 2-8
4. Adding a clone to the clone group
❑ From server ph12345 (production server), add LUN NWDataClone
to the NWDataCG clone group and LUN NWLogClone to the
NWLogCG clone group.
Note: Select Initial Sync Required for both clone LUNs.
Reference: "Adding a clone to a clone group" on page 2-10
5. Fracturing the clone
❑ From server ph12345 (production server), verify that LUN
NWDataClone and LUN NWLogClone are in a Synchronized or
Consistent state. Reference: "Clone properties" on page 4-2
❑ From server ph12345 (production server), fracture the LUNs
NWDataClone and NWLogClone (clone LUNs). Reference:
"Fracturing a clone" on page 3-5
6. Activating clones
❑ From the sh12345 server (secondary server), activate the clone
LUNs (NWDataClone and NWLogClone).
Reference: "Fracturing a clone" on page 3-5 (step 4)
7. Trespassing clones
❑ From server ph12345 (production server), trespass LUNs
NWDataClone and NWLogClone (clone LUNs) to the peer SP.
Reference: "Clone and source LUN trespasses" on page 3-20
11. Removing clones from clone group
❑ From server ph12345 (production server), remove LUN
NWDataClone from the NWDataCG clone group and remove LUN
NWLogClone from the NWLogCG clone group.
Reference: "Removing a clone from a clone group" on page 3-18
12. Destroying a clone group
❑ From server ph12345 (production server), destroy the NWDataCG
and NWLogCG clone groups.
Reference: "Destroying a clone group" on page 3-19
Illustrated overview
The following section provides an illustrated description of the main
operations described in the table in the previous section.
1. Production server adds clone LUNs to clone groups and initial
synchronization begins (the contents of the source LUN are
copied to the clone LUN). I/O to the source LUNs from the
production server continues.
[Figure EMC2429: production and secondary hosts; each clone group on the storage system contains a source LUN and a clone LUN; clone private LUNs (CPL) on SP A and SP B]
[Figure EMC2430: production and secondary hosts; source LUN and clone LUN in each clone group; CPLs on SP A and SP B]
3. I/O stops to the source LUNs from the production server. The
production server then fractures and trespasses the clones to SP B.
[Figure EMC2431: production and secondary hosts; source LUN and clone LUN in each clone group; CPLs on SP A and SP B]
4. I/O resumes to the source LUNs from the production server. The
secondary server activates the clone LUNs. I/O to the clone
LUNs from the secondary server begins and software testing
starts. As I/O modifies the fractured clones and source LUNs, the
clone private LUNs record information that identifies these
modified data chunks but no actual data is written to the clone
private LUNs.
[Figure EMC2440: production and secondary hosts; source LUN and clone LUN in each clone group; CPLs on SP A and SP B]
5. Software testing stops and I/O to the clone LUNs from the
secondary server also stops.
[Figure EMC2441: production and secondary hosts; source LUN and clone LUN in each clone group; CPLs on SP A and SP B]
6. I/O stops to the source LUNs from the production server. The
production server then initiates a reverse synchronization
(without the Protected Restore feature enabled) to replace the
contents of the source LUNs with the contents of the clone LUNs.
The reverse synchronization causes the clone LUNs to trespass
back to SP A.
[Figure EMC2442: production and secondary hosts; source LUN and clone LUN in each clone group; CPLs on SP A and SP B]
[Figure EMC2443: production and secondary hosts; source LUNs only (clone groups removed); CPLs on SP A and SP B]
Snapshots example
This section provides an example of how to set up and use snapshots.
Note: The server names, files, and applications used in this section are
intended for example purposes only.
Summary In this example, you are starting two SnapView sessions and creating
two snapshots of a database file and its log file. You will then back up
the two snapshots onto tape.
Operations overview
Note: The following table does not provide detailed steps for each task. It is
important that you refer to the “Prerequisites for setting up snapshots” on
page 2-13 and to the reference sections listed below before completing any
tasks.
1. Configure the reserved LUN pool
❑ On each SP, determine the size of the reserved LUN pool.
❑ On the storage system, bind one or more LUNs on each SP to the
size you determined for the reserved LUN pool.
❑ From the ph12345 server (production server), allocate the reserved
LUNs to the SP's LUN pool.
Reference: the latest revision of the EMC Navisphere Manager
Administrator's Guide
2. Start a SnapView session
❑ From the ph12345 server (production server), start two SnapView
sessions (NWDataSession and NWLogSession).
Reference: "Starting a SnapView session" on page 2-18
3. Create a snapshot
❑ From the ph12345 server (production server), create two snapshots
(NWDataSnap and NWLogSnap).
Reference: "Creating a snapshot" on page 2-25
4. Add the snapshot to a storage group
❑ From the ph12345 server (production server), add snapshot
NWDataSnap and snapshot NWLogSnap to storage group
NWDataLog_backup.
Reference: "Adding a snapshot to a storage group" on page 2-28
5. Activate the snapshot
❑ From the sh12345 server (secondary server), activate the
NWDataSnap snapshot to the NWDataSession session.
❑ From the sh12345 server (secondary server), activate the
NWLogSnap snapshot to the NWLogSession session.
Reference: "Activating a snapshot" on page 3-21
8. Deactivate the snapshot
❑ From the sh12345 server (secondary server), deactivate the
snapshots (NWDataSnap and NWLogSnap).
Reference: "Deactivating a snapshot" on page 3-23
9. Stop the SnapView session
❑ From the ph12345 server (production server), stop the SnapView
sessions (NWDataSession and NWLogSession).
Reference: "Stopping a SnapView session" on page 3-31
Illustrated overview
The following section provides an illustrated description of the main
operations described in the table in the previous section.
Note: I/O to the source LUNs from the production server continues while
backing up the snapshots.
[Figure EMC2756: production and secondary hosts; reserved LUN pool and source LUN on each SP; 9:00am and 9:01am sessions]
[Figure EMC2757: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP; 9:00am and 9:01am sessions]
[Figure EMC2758: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP; 9:00am and 9:01am sessions]
[Figure EMC2757: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP; 9:00am and 9:01am sessions]
[Figure EMC2759: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP; no sessions shown]
Rollback example
Note: The server names, files, and applications used in this section are
intended for example purposes only.
Operations overview
Note: The following table does not provide detailed steps for each task. It is
important that you refer to the “Prerequisites for setting up snapshots” on
page 2-13 and to the reference sections listed below before completing any
tasks.
1. Configure the reserved LUN pool
❑ On each SP, determine the size of the reserved LUN pool.
❑ On the storage system, bind one or more LUNs on each SP to the
size you determined for the reserved LUN pool.
❑ From the ph12345 server (production server), allocate the reserved
LUNs to the SP's LUN pool.
Reference: the latest revision of the EMC Navisphere Manager
Administrator's Guide
2. Create a snapshot
❑ From the ph12345 server (production server), create two snapshots
(OracleDataSnap and OracleLogSnap).
Reference: "Creating a snapshot" on page 2-25
3. Start a SnapView session
❑ Monday - From the ph12345 server (production server), start a
SnapView session (OracleSession1).
Reference: "Starting a SnapView session" on page 2-18
4. Add the snapshot to a storage group
❑ From the ph12345 server (production server), add snapshot
OracleDataSnap and snapshot OracleLogSnap to storage group
OracleDB_backup.
Reference: "Adding a snapshot to a storage group" on page 2-28
5. Activate the snapshot
❑ From the sh12345 server (secondary server), activate the snapshots
(OracleDataSnap and OracleLogSnap) to Friday's SnapView session
(OracleSession5).
Reference: "Activating a snapshot" on page 3-21
7. Deactivate the snapshot
❑ While viewing Friday's session, you realize that the database and its
log file are corrupted or contain changes that you do not want. From
the sh12345 server (secondary server), deactivate the snapshots
(OracleDataSnap and OracleLogSnap) from Friday's SnapView
session (OracleSession5) so that you can view the sessions that were
started earlier in the week.
Reference: "Deactivating a snapshot" on page 3-23
9. Start rollback operation
❑ From the ph12345 server (production server), start the rollback
operation on Tuesday's SnapView session (OracleSession2). When
you confirm the start of a rollback operation, the source LUN can
instantly access the session's point-in-time data, while data copying
continues in the background.
Reference: "Rolling back a SnapView session" on page 3-25
10. Continue daily sessions
❑ Once the rollback completes, which includes all background
copying, resume starting your daily sessions from the ph12345
server (production server).
Reference: "Starting a SnapView session" on page 2-18
Illustrated overview
The following section provides an illustrated description of the main
operations described in the table in the previous section.
1. Production server creates the snapshots.
[Figure EMC2759: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP]
[Figure EMC2760: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP; 8:00am daily sessions]
[Figure EMC2761: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP; 8:00am Friday's sessions]
[Figure EMC2762: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP; 8:00am Tuesday's sessions]
[Figure EMC2763: production and secondary hosts; reserved LUN pool, source LUN, and snapshot on each SP; 8:00am Tuesday's sessions]
This appendix describes what bad blocks are, how SnapView handles
them, and what you can do to correct them.
Major sections in this appendix are:
◆ Bad blocks overview......................................................................... B-2
◆ Bad blocks and clones....................................................................... B-3
◆ Bad blocks and rollback ................................................................... B-4
Prerequisites
◆ Determining a Tru64 source LUN................................................... C-2
Clones
◆ Setting up clones ............................................................................... C-8
◆ Using clones ....................................................................................... C-9
Snapshots
◆ Setting up snapshots....................................................................... C-13
◆ Using snapshots .............................................................................. C-14
1. Select the file system you want to copy and display where it is
mounted by using the mount command. The following is an
example of the output for a server with the /source file system:
root_domain#root on / type advfs (rw)
/proc on /proc type procfs (rw)
usr_domain#usr on /usr type advfs (rw)
usr_domain#var on /var type advfs (rw)
source_domain#source_fset on /source type advfs (rw)
Note: The /source file system is listed as a mount point of the source_fset
AdvFS fileset, which is part of the source_domain AdvFS domain.
Note: For ufs file systems, the disk device is part of the mount command
output, so you can skip this step.
3. Determine the LUN and SCSI bus number associated with the
disk device.
a. Use the hwmgr -view devices -dsf command to determine
the hardware identifier (HWID) of the particular device. The
output is similar to the following:
hwmgr -view devices -dsf /dev/disk/dsk347c
HWID: Device Name Mfg Model Location
417: /dev/disk/dsk347c DGC RAID 5 IDENTIFIER=264
WWID:01000010:6006-0173-1460-0000-9518-a222-272d-d611
6. From the above output, verify that this adapter is SCSI bus
number 5 and that target IDs 0 and 1 correspond to port names
5006-0160-4004-D3A5 and 5006-0168-4004-D3A5, respectively.
The complete World Wide Name is constructed by prefacing the
port name with the node name.
7. In Navisphere Manager, display the properties of each storage
group attached to the server by right-clicking the storage group
name and selecting Properties. From the Storage Group
Properties dialog box, click the Advanced tab.
Verify that the SP Port World Wide Name information for both
SPs matches the target information from step 5.
Enter the following command to display the storage group and its
HLU/ALU pairs:
navicli -h SP-servername storagegroup -list -gname storage-groupname
where:
SP-servername is the server name of the storage system.
storage-groupname is the name of the storage group.
The output is similar to the following:
HBA/SP Pairs:
HBA UID                                          SP Name  SP Port
20:00:00:00:C9:24:0C:D5:10:00:00:00:C9:24:0C:D5  SP A     0
20:00:00:00:C9:24:0C:D5:10:00:00:00:C9:24:0C:D5  SP B     0
HLU/ALU Pairs:
HLU Number ALU Number
0 0
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 113
Shareable: NO
HLU Number 8 maps to ALU Number 113. Thus storage system LUN
113 is the logical unit that we want to use as the source for the
SnapView operation.
What next? To use SnapView clones, continue to the next section, “Setting up
clones”.
To use SnapView snapshots, go to “Setting up snapshots” on page C-13.
Setting up clones
To set up clones with a Tru64 server, you must determine a source
LUN, as described on page C-2. Once you have determined a source
LUN, you must allocate clone private LUNs. Then you can create a
clone group and add a clone to that group.
Note: The order of the following steps may vary between operating systems,
depending on the utilities that are available for a particular environment.
For a detailed description of how to perform each task, refer to the reference
section listed.
Note: Allow the clone to synchronize with the source LUN before
fracturing the clone. Another server cannot use the clone until it is
fractured. You should fracture the clone only while it is in a
Synchronized or Consistent state.
4. Fracture the clone from the clone group using Navisphere (see
“Fracturing a clone” on page 3-5).
Using clones
This section describes how a secondary server can activate and access
a clone.
ufs file system In the case of a ufs file system, all file system information resides
on the disk and you do not need to take any additional steps to
identify the file system to the server. When you create a clone of a ufs
file system, the file system will be marked dirty unless it was
unmounted at the time the clone was added to the clone group.
When you attempt to mount a dirty file system, a message such as
the following is displayed:
/dev/disk/dsk352c on /ufscopy: Dirty file system
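If the copy mounts dirty, you can repair it with fsck before mounting.
A minimal sketch, assuming the dsk352c example device above and a
hypothetical /ufscopy mount point (the raw-device path follows
standard Tru64 v5 naming):
fsck -y /dev/rdisk/dsk352c
mount /dev/disk/dsk352c /ufscopy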
AdvFS file system To access an AdvFS file system that has been cloned, you must take
several steps to properly inform the backup server of the file system.
An AdvFS file system exists on a file domain that contains links to the
device special files that are part of the domain. This link has to be
created manually, since the standard mkfdmn command would write
new information to the device, thus destroying the clone.
The file domain information is kept in a set of subdirectories of
/etc/fdmns. To create the correct information on the secondary server,
complete the following steps:
1. Create the following subdirectory in /etc/fdmns:
mkdir /etc/fdmns/new_domain_name
where
new_domain_name is the name you want to give this domain on
the backup server.
Note: The domain name must be unique to the server; for example,
mkdir /etc/fdmns/CopyDomain.
2. Create a symbolic link between the device special file and the new
domain as follows:
ln -s dev_special_file /etc/fdmns/new_domain_name/dev_special_file
This link points AdvFS to the correct device and the file-set
information on this device. For example:
ln -s /dev/disk/dsk351c /etc/fdmns/CopyDomain/dsk351c
Note: Use the device name within the domain to identify the particular
device.
At this point the server has the required information to access the
AdvFS file domain and file set. The file-set information will stay
the same regardless of the domain name; that is, if the file set was
named source on the source server, it will still be named source
on the backup server.
3. Create a mount-point directory, if necessary, as follows:
mkdir mountpoint
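To complete the example, you would then mount the file set from the
new domain, using the AdvFS domain#fileset notation. A minimal
sketch, assuming the CopyDomain domain and a file set named source
from the examples above, with a hypothetical /copyfs mount point:
mkdir /copyfs
mount CopyDomain#source /copyfs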
Setting up snapshots
To set up snapshots with a Tru64 server, you must determine a source
LUN, as described on page C-2. Once you have determined a source
LUN, you must configure the reserved LUN pool. Then you can
create a snapshot and start a SnapView session.
Note: The order of the following steps may vary between operating systems,
depending on the utilities that are available for a particular environment.
For a detailed description of how to perform each task, refer to the reference
section listed.
Note: You can start a SnapView session first or create a snapshot first.
However, a secondary server cannot view the session data unless a
snapshot is activated to the session and that snapshot is accessible to the
secondary server.
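For example, you could start the session from the production server
with admsnap rather than Navisphere Manager. A minimal sketch,
assuming the dsk347c source device identified in "Determining a
Tru64 source LUN" and a hypothetical session name (the -s and -o
flags are assumptions; verify them with admsnap help):
admsnap start -s tru64_session -o /dev/disk/dsk347c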
What next? To activate and access the snapshot, continue to the next section.
Using snapshots
This section describes how a secondary server can activate and access
a snapshot.
Activating a snapshot
To activate the snapshot from a secondary server, do the following:
1. Activate the snapshot (see “Activating a snapshot” on page 3-21).
2. Enter the following command to initiate the update of the SCSI
device database:
hwmgr -scan scsi
Once the system completes this command, the snapshot LUN is
accessible to the secondary server.
Note: You can use the Navisphere CLI command from a non-Tru64
server to identify the server LUN number of the new snapshot LUN, as
described in “Determining a Tru64 source LUN” on page C-2.
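To confirm that the snapshot device is now visible to the secondary
server, you can filter the device list for the storage system's DGC
vendor string shown earlier in this appendix:
hwmgr -view devices | grep DGC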
ufs file system In the case of a ufs file system, all file system information resides
on the disk and you do not need to take any additional steps to
identify the file system to the server. When a snapshot is taken of a
ufs file system, the file system will be marked dirty unless it was
unmounted at the time of the snapshot. When you attempt to mount
a dirty file system, a message such as the following is displayed:
/dev/disk/dsk352c on /ufscopy: Dirty file system
AdvFS file system To access an AdvFS file system that has been snapped, you must take
several steps to properly inform the backup server of the file system.
An AdvFS file system exists on a file domain that contains links to the
device special files that are part of the domain. This link has to be
created manually, since the standard mkfdmn command will write
new information to the device thus destroying the snapshot.
The file domain information is kept in a set of subdirectories of
/etc/fdmns. To create the correct information on the snapshot server,
complete the following steps:
1. Create the following subdirectory in /etc/fdmns:
mkdir /etc/fdmns/new_domain_name
where
new_domain_name is the name you want to give this domain on
the backup server.
Note: The domain name must be unique to the server; for example,
mkdir /etc/fdmns/CopyDomain.
2. Create a symbolic link between the device special file and the new
domain as follows:
ln -s dev_special_file /etc/fdmns/new_domain_name/dev_special_file
This link points AdvFS to the correct device and the file-set
information on this device. For example:
ln -s /dev/disk/dsk351c /etc/fdmns/CopyDomain/dsk351c
Note: Use the device name within the domain to identify the particular
device.
At this point the server has the required information to access the
AdvFS file domain and file set. The file-set information will stay
the same regardless of the domain name; that is, if the file set was
named source on the source server, it will still be named source
on the backup server.
3. Create a mount-point directory, if necessary, as follows:
mkdir mountpoint
A
Active A snapshot that is currently participating in a SnapView session and
is accessible to secondary servers.
B
Business Continuance Another term used for clones. See Clone.
Volumes (BCVs)
C
Chunk An aggregate of multiple disk blocks that SnapView uses to perform
copy-on-first-write operations. In earlier versions of SnapView, the
selectable chunk sizes are 16 KB, 32 KB, 64 KB, 128 KB, 256 KB, and
512 KB, with a default of 64 KB (128 blocks of 512 bytes each, shown
as 128 blocks in Navisphere). For SnapView version 2.1 or higher, the
chunk size is fixed at 64 KB (128 blocks); you cannot change this
value.
Clone A LUN that is an actual copy of a specified source LUN. The state of
the clone determines if it is a byte-for-byte copy of its source. You
create a clone when you add a clone to the clone group.
Clone group A collection of a source LUN and all of its clones. The purpose of
creating a clone group is to establish a source LUN that you may
want to clone at some time.
Clone private LUNs LUNs that record information that identifies areas on the source and
clone that have changed since the clone was fractured. A log in the
clone private LUN records this information but no actual data is
written to the clone private LUN. This log is a bitmap and reduces the
time it takes to synchronize and reverse synchronize a clone and its
source.
Clone state The state of each clone in a clone group. The state of the clone
determines whether or not the clone is usable. The possible clone
states are Consistent, Out-of-Sync, Reverse Out-of-Sync, Reverse
Synchronizing, Synchronized, or Synchronizing.
Consistent fracture Fracturing more than one clone at the same time. The clones you
want to fracture must be within different clone groups. You cannot
perform a consistent fracture on clones belonging to different storage
systems. After the consistent fracture completes, there is no group
association between the clones.
Consistent mode Preserves the point-in-time copy across a set of source LUNs. The
SnapView driver will delay any I/O requests to the set of source
LUNs until the session has started on all LUNs (thus preserving the
point in time on the entire set of LUNs).
Consistent state A clone in a Synchronized state that receives server I/O to the source
(if the clone is unfractured) or to the clone (if the clone is fractured). A
consistent clone is usable but may not contain the most up-to-date
information since writes made to the source have not been copied to
the clone.
D
Deactivate An operation on a snapshot that unmaps it from a SnapView session
to make it invisible to any secondary servers. The software destroys
any writes made to the snapshot but the snapshot and SnapView
session still exist. This feature is available in Navisphere Manager
and admsnap; however, the Manager deactivate function does not
flush all data and clear all buffers on the secondary server.
F
Fracture The process of breaking off a clone from its source. Once a clone is
fractured, it can receive server I/O requests.
H
Host Agent Navisphere Agent that runs on a server system.
I
Inactive A snapshot that is not currently participating in a SnapView session
and is invisible to any secondary servers.
M
Modified data chunk A chunk of data that a server changes by writing to the clone,
snapshot, or source LUN.
N
Navisphere Manager The EMC Navisphere Manager application.
O
Out-of-Sync state A clone that was in the process of synchronizing but failed. An
Out-of-Sync clone is not a byte-for-byte copy of its source LUN and
therefore, is unusable.
P
Persistent mode Creates a session that can withstand an SP reboot or failure, a storage
system reboot or power failure, or server I/O trespassing to the peer
SP.
Private LUN A LUN that cannot be assigned to a storage group. Once you add a
LUN to the reserved LUN pool or allocate a LUN as a clone private
LUN, it becomes a private LUN.
Protected restore When selected, a process that prevents source writes from being
copied to the clone during a reverse synchronization.
Q
Quiesce threshold The time period after which, without I/O from the server, any clone
in the Consistent state and not fractured is transitioned to a
Synchronized state. You specify the quiesce threshold when you
create a clone group. Valid values are 10 – 3600 seconds. The default
is 60 seconds.
R
Recovery policy The policy used to determine how a clone is recovered after a failure.
Options are auto or manual.
Reserved LUN A private LUN (a LUN to which a server cannot perform I/O)
assigned to an SP’s reserved LUN pool.
Reserved LUN pool The disk storage used to store blocks of original data chunks when
you first modify that chunk on the source LUN(s) after the start of a
session. Each SP manages its own LUN pool space and assigns a
separate reserved LUN (or multiple LUNs) to each source LUN.
Reserved sessions Sessions used for another application such as SAN Copy and
MirrorView/Asynchronous.
Reserved snapshots Snapshots used for another application such as SAN Copy and
MirrorView/Asynchronous.
Restartable copy A data state having dependent write consistency and where all
internal database/application control information is consistent with a
database management system/application image.
Reverse Out-of-Sync A clone that was in the process of reverse synchronizing but failed.
state Therefore, the source LUN is unusable and another reverse
synchronization is recommended.
Reverse Synchronizing A clone that is unfractured and in the process of copying its data to its
state source LUN.
S
Snapshot A point-in-time view of a source LUN(s). A snapshot occupies no
disk space, but appears like a normal LUN to secondary servers and
can be used for backup or other purposes.
Other, older terms for snapshot, which are no longer used, include
SnapshotCopy LUN (SCLUN) and SnapCopy LUN (SLU).
SnapView session The period of time that SnapView is managing a reserved LUN pool
region. The SnapView session begins when you start a session using
Navisphere Manager, Navisphere CLI, or admsnap and ends when
you stop the session. You can give each session a name (the session
name) when you start the session. The name persists throughout the
session and is viewable through Navisphere. You use the name to
check session status and to end the session.
Source LUN The original LUN from which a clone or snapshot is generated. An
older term for source LUN, which is no longer used, is Target LUN
(TLU).
Synchronization rate Specifies a relative value (low, medium, or high) for the priority of
completing updates. High completes updates faster, but may
significantly affect storage system performance for host I/O requests.
Low completes updates slower, but also minimizes the impact on
other storage-system operations.
Synchronized state A clone that is a byte-for-byte copy of its source and, therefore, is
usable.
Synchronizing state An unfractured clone that is in the process of copying data from its
source LUN.
A
access, snapshot 2-28
activate, snapshot 3-21
adding to a storage group 2-28
admsnap 1-11
  definition of g-1
  introduction to 1-9
AdvFS file system, see Tru64 server

B
backup
  see also secondary server
  benefit with SnapView 1-2
  snapshot example 5-10
bad blocks
  on clones B-3
  on rollback B-4
basic storage component icons 1-17
business continuance volumes (BCVs), see clone

C
chunk g-1
chunk, modified data g-3
CLI 1-10
clone
  compared to snapshot 1-7
  creating 2-10, 2-11
  definition of 1-3, g-2
  dirty 4-2
  example 5-2
  fracturing 3-5
  ID 1-3, 2-10, 4-2
  overview 1-3
  properties of 4-2
  removing from clone group 3-18
  setting up 2-2
  states 3-3, g-2
clone group
  adding a clone to 2-10, 2-11
  creating 2-8
  definition of 2-8, g-2
  destroying 3-19
  properties 4-3
  removing clone from 3-18
clone private LUNs
  allocating 2-4, 2-5
  deallocating 2-6
  definition of g-2
  reallocating 2-6
clone state
  consistent 3-3, g-2
  out-of-sync 3-3, g-4
  reverse out-of-sync 3-3, g-5
  reverse synchronizing 3-4, g-5
  synchronized 3-4, g-6
  synchronizing 3-4, g-6
consistent clone state, definition of 3-3, g-2
copy-on-first-write, definition of 1-5, g-3

D
decision support
  see also secondary server
  benefit with SnapView 1-2
dirty clone 4-2

F
fractured clone, remove 3-18
fracturing a clone 3-5
fracturing, definition of g-3

G
grep command, see Tru64 server

I
icons, basic storage components 1-17
inactive snapshot 2-27, g-3

L
LUN expansion 2-9, 2-12, 2-21
LUNs 2-4
  bind 2-4

M
metaLUN 2-9, 2-12, 2-21
modified data chunk g-3
multiple snapshots 2-25
multiple SnapView sessions 2-25

N
Navisphere Manager 1-10

O
out-of-sync clone state, definition of 3-3, g-4

P
private LUN g-4
properties
  clone 4-2
  Clone Feature Properties 4-3
  clone group 4-3
  snapshot 4-5
  snapshots and SnapView sessions, displaying all 4-7

Q
quiesce threshold
  definition of g-4
  setting the 2-9

R
reserved
  LUN, definition of g-4
  sessions, definition of g-4
reserved LUN pool
  definition of g-4
  icon for 1-17
  with SnapView 2-16
reverse out-of-sync clone state, definition of 3-3, g-5
reverse synchronizing
  clone state, definition of g-5
  fractured clone 3-13
  protected restore feature 3-14
revision testing, see secondary server
revision testing, benefit with SnapView 1-2
rollback 3-25
  background copying 3-25, 3-29
  definition of 3-25
  progress of 3-29
  rate 3-29
  recovery session 3-25
  session 3-25
  starting 3-26, 3-27

S
secondary server 1-14
server
  production 1-14
  secondary 1-14
setting up clones 2-2
snapshot
  activating 3-21
  compared to clone 1-7
  creating a 2-25
  deactivating 3-23, g-3