VERITAS NetBackup 5.1 Advanced Client System Administrator's Guide for UNIX and Windows
N12348C
Disclaimer
The information contained in this publication is subject to change without notice. VERITAS Software
Corporation makes no warranty of any kind with regard to this manual, including, but not limited to,
the implied warranties of merchantability and fitness for a particular purpose. VERITAS Software
Corporation shall not be liable for errors contained herein or for incidental or consequential damages
in connection with the furnishing, performance, or use of this manual.
VERITAS, NetBackup, the VERITAS logo, and all other VERITAS product names and slogans are trademarks or registered trademarks of VERITAS Software Corporation. VERITAS and the VERITAS logo Reg. U.S. Pat. & Tm. Off. Other product names and/or slogans mentioned herein may be trademarks or registered trademarks of their respective companies.
www.veritas.com
Third-Party Copyrights
ACE 5.2A: ACE(TM) is copyrighted by Douglas C. Schmidt and his research group at Washington University and University of California, Irvine,
Copyright (c) 1993-2002, all rights reserved.
IBM XML for C++ (XML4C) 3.5.1: Copyright (c) 1999,2000,2001 Compaq Computer Corporation; Copyright (c) 1999,2000,2001 Hewlett-Packard
Company; Copyright (c) 1999,2000,2001 IBM Corporation; Copyright (c) 1999,2000,2001 Hummingbird Communications Ltd.; Copyright (c)
1999,2000,2001 Silicon Graphics, Inc.; Copyright (c) 1999,2000,2001 Sun Microsystems, Inc.; Copyright (c) 1999,2000,2001 The Open Group; All
rights reserved.
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to
deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do so, provided that the above copyright notice(s) and this permission
notice appear in all copies of the Software and that both the above copyright notice(s) and this permission notice appear in supporting
documentation.
This product includes software developed by the Apache Software Foundation (http://www.apache.org/).
JacORB 1.4.1: The licensed software is covered by the GNU Library General Public License, Version 2, June 1991.
Open SSL 0.9.6: This product includes software developed by the OpenSSL Project * for use in the OpenSSL Toolkit. (http://www.openssl.org/)
TAO (ACE ORB) 1.2a: TAO(TM) is copyrighted by Douglas C. Schmidt and his research group at Washington University and University of
California, Irvine, Copyright (c) 1993-2002, all rights reserved.
Contents
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xvii
Conventions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .xvii
Chapter 1. Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Snapshots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Offhost Backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Instant Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
FlashBackup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
BLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Snapshot Basics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
Copy-on-Write . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Mirror . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
Snapshot methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
File/Volume Mapping Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
Chapter 2. Installation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Prerequisites . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Supported Peripherals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Configuration Flowcharts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
Device Checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3pc.conf Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
Determining Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
mover.conf Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
SCSI Reserve/Release . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
Basic Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Example configurations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Configuration Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Snapshot Tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
Restrictions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
nbu_snap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
VxFS_Checkpoint . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
VxFS_Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
vxvm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
FlashSnap . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
VVR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
NAS_Snapshot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
ALL_LOCAL_DRIVES . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Multiplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
snapon . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
snaplist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
snapcachelist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 257
snapstat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
snapoff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 259
Preface
This guide explains how to install, configure, and use VERITAS NetBackup Advanced Client. This guide is intended for the NetBackup system administrator and assumes a thorough working knowledge of UNIX or Windows, and of NetBackup administration.
Note Advanced Client combines the features formerly provided by several products:
Core Frozen Image Services, Extended Frozen Image Services (Array Integration
Option), Offhost and SAN Data Movement Services, FlashBackup, Persistent
Frozen Image, and Oracle BLI Agent. This Advanced Client manual replaces the
following three guides: NetBackup ServerFree Agent, NetBackup Persistent Frozen
Image, and NetBackup FlashBackup.
What is In This Manual?
Instant Recovery Configuration: Explains how to prepare for and configure an Instant Recovery policy.
Backup and Restore Procedures: Briefly explains how to back up and restore files.
Managing Snapshots from the Command Line: Explains basic procedures for creating and managing snapshot images using the NetBackup command-line interface.
Related Manuals
NetBackup Release Notes
Describes supported platforms and provides operating notes not found in the
manuals or in the online help.
NetBackup Installation Guide
Explains how to install NetBackup.
NetBackup System Administrator's Guide, Volumes I and II
Explains how to configure and manage NetBackup.
NetBackup Media Manager Device Configuration Guide for UNIX and Windows
Explains how to add device drivers and perform other system level configuration for
storage devices that are supported by NetBackup and Media Manager.
NetBackup Troubleshooting Guide
Explains NetBackup Advanced Client error codes.
Getting Help
VERITAS offers you a variety of support options.
To contact support by telephone, click the Phone Support icon. A page that contains VERITAS support numbers from around the world appears.
To contact support by e-mail, click the E-mail Support icon. A brief electronic form will appear and prompt you to:
Select a language of your preference
Select a product and a platform
Associate your message with an existing technical support case
Provide additional contact and product information, and your message
Glossary
If you encounter unfamiliar terminology, consult the NetBackup online glossary. The
glossary contains terms and definitions for NetBackup and all additional NetBackup
options and agents.
The NetBackup online glossary is included in the NetBackup help file.
Conventions
The following conventions apply throughout the documentation set.
Product-Specific Conventions
The following term is used in the NetBackup 5.1 documentation to increase readability
while maintaining technical accuracy.
Microsoft Windows, Windows
Terms used to describe a specific product or operating system developed by Microsoft Corporation. Some examples you may encounter in NetBackup documentation are Windows servers, Windows 2000, Windows Server 2003, Windows clients, Windows platforms, and Windows GUI.
When Windows or Windows servers is used in the documentation, it refers to all of
the currently supported Windows operating systems. When a specific Windows
product is identified in the documentation, only that particular product is valid in that
instance.
For a complete list of Windows operating systems and platforms that NetBackup
supports, refer to the NetBackup Release Notes for UNIX and Windows or go to the VERITAS
support web site at http://www.support.veritas.com.
Typographical Conventions
Here are the typographical conventions used throughout the manuals:
GUI Font: Used to depict graphical user interface (GUI) objects, such as fields, listboxes, menu commands, and so on. For example: Enter your password in the Password field.
Italics: Used for placeholder text, book titles, new terms, or emphasis. Replace placeholder text with your specific text. For example: Replace filename with the name of your file. Do not use file names that contain spaces. This font is also used to highlight NetBackup server-specific or operating system-specific differences. For example: This step is only applicable for NetBackup Enterprise Server.
Code: Used to show what commands you need to type, to identify pathnames where files are located, and to distinguish system or application text that is displayed to you or that is part of a code example.
Key+Key: Used to show that you must hold down the first key while pressing the second key. For example: Ctrl+S means hold down the Ctrl key while you press S.
You should use the appropriate conventions for your platform. For example, when
specifying a path, use backslashes on Microsoft Windows and slashes on UNIX.
Significant differences between the platforms are noted in the text.
Tips, notes, and cautions are used to emphasize information. The following samples
describe when each is used.
Note Used for important information that you should know, but that shouldn't cause any damage to your data or your system if you choose to ignore it.
Caution Used for information that will prevent a problem. Ignore a caution at your own
risk.
Command Usage
The following conventions are frequently used in the synopsis of command usage.
brackets [ ]
The enclosed command line component is optional.
Vertical bar or pipe (|)
Separates alternative arguments from which the user can choose. For example, when a command has the following format:
command arg1|arg2
the user can specify either the arg1 or the arg2 argument.
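Putting both conventions together, a hypothetical synopsis (the command name and arguments are illustrative only) might read:
command [-p policy] arg1|arg2
Here the -p policy component is optional, and exactly one of arg1 or arg2 must be supplied.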
Select Start > Programs > VERITAS NetBackup > NetBackup Administration
Console.
The corresponding actions could be described in more steps as follows:
1. Click Start in the task bar.
2. Move your cursor to Programs.
3. Move your cursor to the right and highlight VERITAS NetBackup.
4. Move your cursor to the right. First highlight and then click NetBackup Administration Console.
1. Go to www.support.veritas.com.
2. In the search field, enter the full title of the document, given below.
3. Click Search.
The full title of the document is:
VERITAS NetBackup Advanced Client Configuration and Compatibility
Introduction
This chapter describes NetBackup Advanced Client and contains the following topics.
Overview
Snapshot Basics
Requirements
Terminology
Note For help with first-time setup of Advanced Client, see the NetBackup Advanced Client
Quick Start Guide.
Overview
Advanced Client combines the features of snapshot backup, FlashBackup, BLI Agent,
offhost backup, and Instant Recovery. It supports clients on UNIX and Windows
platforms, on either Fibre Channel networks (SANs) or traditional LANs.
Snapshots
A snapshot is a disk image of the client's data made almost instantaneously. NetBackup backs up the data from the snapshot image, not directly from the client's primary data. This allows client operations and user access to continue without interruption during the backup.
A snapshot image is required for all features of Advanced Client. A number of methods
are provided for creating snapshots. You can select the snapshot method manually from
the Policy dialog of the NetBackup Administration Console, or allow NetBackup to select
the method for you.
Refer to Snapshot Basics on page 4 for background on snapshot technology.
Offhost Backup
Another major component of NetBackup Advanced Client is support for offhost backup.
Offhost backup shifts the burden of backup processing onto a separate backup agent, greatly reducing the impact on the client's computing resources ordinarily caused by a local backup. The client supplies a relatively small amount of mapping information, but the backup agent does the bulk of the work by sending the client's actual data to the storage device.
See the following network diagram showing a backup agent.
(Diagram: a backup agent on the network. The NetBackup master server and client communicate over the LAN/WAN; the backup agent reads the client's disks on the SAN and writes either to local storage over SCSI or to a robot on the SAN.)
Instant Recovery
This feature makes backups available for instant recovery from disk. Instant Recovery combines snapshot technology (the image is created without interrupting user access to data) with the ability to do rapid snapshot-based restores. The image is retained on disk and can also be backed up to tape or other storage.
Instant Recovery makes possible three additional variations of restore: block-level restore, file promotion, and snapshot rollback. For a description, refer to Instant Recovery Restore on page 184.
FlashBackup
FlashBackup is a policy type that combines the speed of raw-partition backups with the
ability to restore individual files.
BLI
Block level incremental (BLI) backup extends the capabilities of NetBackup to back up only changed data blocks of Oracle database files. Refer to the NetBackup for Oracle System Administrator's Guide for more details.
Note NetBackup for NDMP add-on software is required, and the NAS vendor must
support snapshots.
Snapshot Basics
Large active databases or file systems that must be available around-the-clock are difficult
to back up without incurring a penalty. Often, the penalty takes one of two forms:
The entire database is taken offline or the file system is unmounted, to allow time for
the backup, resulting in suspension of service and inconvenience to users.
The copy is made very quickly but produces an incomplete version of the data, some
transactions having failed to complete.
A solution to this problem is to create a snapshot of the data. This means capturing the data
at a particular instant, without causing significant client downtime. The resulting capture
or snapshot can be backed up without affecting the performance or availability of the file
system or database. Without a complete, up-to-date snapshot of the data, a correct backup
cannot be made.
When a backup is managed by a backup agent on a Fibre Channel network, the data to
back up must be contained in a snapshot. The backup agent can only access the data by
means of the raw physical disk. Once the data is captured as a snapshot, the NetBackup
client maps the logical representation of the data to its absolute physical disk address.
These disk addresses are sent to the backup agent over the LAN and the data is then read
from the appropriate disk by the backup agent. (This process is explained in greater detail
under Offhost Backup Overview later in this chapter.)
Two types of snapshots are available, both supported by NetBackup: copy-on-write, and
mirror.
Copy-on-Write
A copy-on-write type of snapshot is a detailed account of data as it existed at a certain
moment. A copy-on-write is not really a copy of the data, but a specialized record of it.
A copy-on-write snapshot is created in available space in the client's file system or in a designated raw partition, not as a complete copy of the client data on a separate or mirror disk. The snapshot is then backed up to storage as specified in the backup policy. Users
can access their data without interruption, as though no backup is taking place. The file
system is paused just long enough to assemble a transactionally consistent record.
For a description of the copy-on-write process, see How Copy-on-Write Works on
page 249. Note that copy-on-write snapshots are called Storage Checkpoints in VxFS.
Benefits of copy-on-write:
Consumes less disk space: no need for secondary disks containing complete copies of
source data.
Relatively easy to configure (no need to set up mirror disks).
Mirror
Unlike a copy-on-write, a mirror is a complete data copy stored on a separate disk,
physically independent of the original. Every change or write to the data on the primary
disk is also made to the copy on the secondary disk. This creates a mirror image of the
source data.
As in a copy-on-write, transactions are allowed to finish and new I/O on the primary disk
is briefly halted. When the mirror image is brought up-to-date with the source (made
identical to it), the mirror is split from the primary, meaning that new changes can be
made to the primary but not to the mirror. At this point the mirror can be backed up (see
next diagram).
(Diagram: the mirror is split from the primary; further writes to the primary are not made to the mirror.)
If the mirror will be used again, it must be brought up-to-date with the primary volume (resynchronized). During resynchronization, the changes made to the primary volume while the mirror was split are written to the mirror.
Since mirroring requires an exact, complete copy of the primary on a separate device
(equal in size to the disk being mirrored), it consumes more disk space than a
copy-on-write.
Benefits of mirror:
Has less impact on the performance of the application or database host being backed
up (NetBackup client), because there is no need to run the copy-on-write mechanism.
Allows faster backups: the backup process reads data from a separate disk (mirror) operating independently of the primary disk that holds the client's source data. This means that, unlike the copy-on-write, there is no need to share disk I/O with other processes or applications. Apart from NetBackup, no other applications have access to the mirror disk. During a copy-on-write, the source data can be accessed by other applications as well as by the copy-on-write mechanism.
Note If additional disk drives are available and volumes have been configured with the
VERITAS Volume Manager, a mirror snapshot method is usually a good choice.
Snapshot methods
Advanced Client supports a number of methods for creating a snapshot image. You can
select the method or let NetBackup select it based on your environment. The snapshot
methods are described in the Policy Configuration chapter.
Local Backup of Snapshot
(Diagram: local backup of a snapshot. The master server, client, and media server communicate over the LAN/WAN; the client's disks and the storage are SCSI-attached to the media server host. The numbers correspond to the steps below.)
1. Client backup is initiated by master server, which tells the NetBackup client to create the
snapshot data on the disk.
2. Client sends the data to the media server.
3. Media server processes the backup and reads the client data.
4. Media server writes data to local storage.
Offhost Backup Methods
(Diagram: alternate client backup. The primary client and the alternate client share data through mirrors or replication; the NetBackup media server backs up the data by way of the alternate client.)
Split mirror
In this configuration, the alternate client has access to mirror disks containing the primary
clients data. Before the backup, the mirror is split from the primary disk and the snapshot
is made on the mirror disk. The alternate client has access to the mirror disk, and the
media server can back up the alternate client. After the backup, the mirror can be
optionally resynchronized with the primary disk.
Note The mirror disk need not be visible to the primary client, only to the alternate client.
Split Mirror, Conceptual View: Primary client and alternate client share data through mirroring.
Split Mirror implementation: Alternate client performs backup for multiple primary clients.
(Diagram: an alternate client/media server, using SSO over shared Fibre Channel/SAN storage, performs backups for multiple Solaris and HP primary clients.)
Replication: Primary client and alternate client share data through replication
The above configuration is supported by the VVR snapshot method for UNIX clients only,
and requires the VERITAS Volume Manager (VxVM version 3.2 or later) with the VVR
license.
Note If you have a multi-ported SCSI disk array, a fibre channel SAN is not required. See
Offhost Backup Without a SAN (UNIX Only) on page 16.
(Diagram: NetBackup Media Server method. The master server, client, and media server communicate over the LAN/WAN; the client's disks and a robot are on the Fibre Channel SAN. The numbers correspond to the steps below.)
1. On LAN, client backup is initiated by master server, which tells the NetBackup client to
map the snapshot data on the disk.
2. On LAN, client sends the mapping information to the media server.
3. Media server processes the backup and reads client data over the SAN, from the
addresses specified by the client.
4. Media server writes data across the SAN to storage.
Third-Party Copy
(Diagram: Third-Party Copy method. The master server, client, and media server communicate over the LAN/WAN; the client's disks, the third-party copy device, and a robot are on the Fibre Channel SAN, with SCSI-attached devices behind the third-party copy device. The numbers correspond to the steps below.)
1. On LAN, client backup is initiated by master server, which tells the client to map the
snapshot data.
2. On LAN, client sends the mapping information to the media server.
3. Media server sends third-party copy commands to the third-party copy device over the SAN.
4. Third-party copy device reads the client data from either SAN-attached or SCSI-attached disk.
5. Third-party copy device writes data to SAN-attached or SCSI-attached storage.
Offhost Backup Without a SAN (UNIX Only)
(Diagram: offhost backup without a SAN. The master server, client, and media server communicate over the LAN/WAN; the client's disks in a multi-ported array and the storage are SCSI-attached to the media server host. The numbers correspond to the steps below.)
1. Client backup is initiated by master server, which tells the NetBackup client to map the
snapshot data on the disk.
2. Client sends the mapping information to the media server.
3. Media server processes the backup and reads client data from the addresses specified
by the client.
4. Media server writes data to storage.
(Diagram: a related configuration using a third-party copy device. The client and media server are SCSI-attached to a multi-ported disk array; a third-party copy device on the Fibre Channel/SAN moves the data to storage on the SAN.)
Features and Required Software
Note Advanced Client supports the following policy types: DB2, FlashBackup,
FlashBackup-Windows, MS-Exchange, MS-SQL-Server, MS-Windows-NT, Oracle,
and Standard.
Note Advanced Client snapshot methods and offhost backup methods perform mapping
of the underlying file system and volume structure. This mapping has been verified
for the I/O system components listed in this table under Data Type Supported.
The use of other components in the I/O system, such as other volume managers or
storage replicators, may result in an unreliable backup. Such configurations are not
supported. For an updated list of supported configurations, see Advanced Client
Information on the Web on page xix.
1. For Windows, VxVM 3.1 or later is required, with all the latest VxVM service packs and
updates.
2. Supported raw disks are SCSI (local or fibre channel attached), with sd, dad, and ssd
drivers (Solaris) or sdisk drivers (HP).
Note The VSP and VSS snapshot types, for Windows Open File Backup, are included
with the base NetBackup product, not Advanced Client.
Requirements
NetBackup Advanced Client requires the following components:
A master server with NetBackup Advanced Client server software installed.
Clients running Solaris 7, 8, or 9, HP 11.00 or 11i, or Windows 2000 or 2003, with
NetBackup Advanced Client software installed.
Note Certain operating system and device patches (such as for the host bus adapter) may
be required for both servers and clients. To obtain the latest information, refer to
Advanced Client Information on the Web on page xix.
Restrictions
For a complete list of supported peripherals, and for other operational notes, refer to the
NetBackup Release Notes, or to Advanced Client Information on the Web on page xix.
Note the following restrictions:
Advanced Client does not support the ALL_LOCAL_DRIVES entry in the policy's Backup Selections list.
Although the NetBackup 5.1 Administration Console can be used to administer a
NetBackup 4.5 server, please note:
Terminology
This section introduces terms used with NetBackup Advanced Client. For explanations of
other NetBackup terms, consult the NetBackup online glossary. For instructions, see the
Glossary section in the preface.
Backup agent
A general term for the host that manages the backup on behalf of the NetBackup client. This is either another client, the NetBackup media server, a third-party copy device, or a NAS filer.
BCV
The mirror disk in an EMC primary-mirror array configuration (see mirror). BCV stands
for Business Continuance Volume.
Bridge
In a SAN network, a bridge connects SCSI devices to Fibre Channel. A third-party copy
device can be implemented as part of a bridge or as part of other devices. Note that not all
bridges function as third-party copy devices.
BusinessCopy
One of several snapshot methods included in Advanced Client, for making snapshots of client data on Hewlett-Packard XP series disk arrays.
Copy-on-Write
In NetBackup Advanced Client, one of two types of supported snapshots (see also mirror).
Unlike a mirror, a copy-on-write does not create a separate copy of the clients data. It
creates a block-by-block account that describes which blocks in the client data have
changed and which have not, from the instant the copy-on-write was activated. This
account is used by the backup application to create the backup copy.
Data movement
Data mover
The host or entity that manages the backup on behalf of the NetBackup client. This is
either the NetBackup media server, a third-party copy device, or a NAS filer.
Disk group
Extent
A contiguous set of disk blocks allocated for a file and represented by three values: device
identifier, starting block address (offset in the device) and length (number of contiguous
blocks). The mapping methods in Advanced Client determine the list of extents and send
the list to the backup agent.
FastResync
A VERITAS Volume Manager feature, formerly known as Fast Mirror Resynchronization (FMR), that resynchronizes a split mirror with its primary volume by applying only the changes made while the mirror was split.
Fibre channel
A type of high-speed network composed of either optical or copper cable and employing
the Fibre Channel protocol. NetBackup Advanced Client supports both arbitrated loop
and switched fabric (switched fibre channel) environments.
File system
Instant Recovery
A restore feature based on a disk snapshot of a client file system or volume. Client data can be rapidly restored from the snapshot image, even after a system reboot.
Mapping
The process of converting a file or raw device (in the file system or Volume Manager) to
absolute physical disk addresses or extents for use by backup agents on the network.
NetBackup Advanced Client uses the VxMS library to perform file mapping.
Mapping methods
A set of routines for converting logical file addresses to absolute physical disk addresses
or extents. NetBackup Advanced Client includes support for file-mapping and
volume-mapping methods.
Mirror
A disk that maintains an exact copy or duplicate of another disk. A mirror disk is
often called a secondary, and the disk that it copies is called the primary. All writes to
the primary disk are also made to the mirror (or secondary) disk.
A type of snapshot captured on a mirror disk. At an appropriate moment, all further
writes to the primary disk are held back from the mirror, thus causing the mirror to be
split from the primary. As a result of the split, the mirror becomes a snapshot of the
primary. The snapshot can then be backed up.
Offhost backup
The shifting of backup processing from the NetBackup client to a separate backup agent, such as an alternate client, the NetBackup media server, or a third-party copy device.
Primary disk
In a primary-mirror array configuration, the primary is the disk on which client data is
stored, and which is directly accessed by client applications. An exact duplicate of the
primary disk is the mirror.
Raw partition
A single section of a raw physical disk device occupying a range of disk sectors, without a
file system or other hierarchical organization scheme (thus, a raw stream of disk
sectors). This is different from a block device, over which the file system is mounted.
RMAN
Oracle's backup and recovery program. RMAN performs backup and restore by making requests to a NetBackup shared library.
RMAN proxy copy
An extension to the Oracle8i Media Management API which enables media management software such as NetBackup to perform data transfer directly.
SAN (storage area network)
A Fibre Channel-based network connecting servers and storage devices. The storage devices are not attached to servers but to the network itself, and are visible to all servers on the network.
Secondary disk
See mirror.
ShadowImage
One of several snapshot methods included in Advanced Client, for making snapshots of client data on Hitachi disk arrays.
Snapshot
A stable disk copy of the data prior to backup. A snapshot is created very rapidly, causing
minimal impact on other applications. There are two basic types: copy-on-write and mirror.
Snapshot method
A set of routines for creating a snapshot. You can select the method, or let NetBackup
select it when the backup is started (auto method).
Snapshot mirror
A disk mirror created by the VERITAS Volume Manager (VxVM). This is an exact copy of
a primary volume at a particular moment, reproduced on a physically separate device.
Snapshot source
The entity (file system, raw partition, or logical volume) to which a snapshot method is
applied. NetBackup automatically selects the snapshot source based on the entries in the
policys Backup Selections list.
Snapshot Volume
A mirror that has been split from the primary volume or device, and made available to users. Snapshot volumes are created by the VERITAS Volume Manager (VxVM) as a point-in-time copy of the primary volume. Subsequent changes made to the primary volume are recorded in the Data Change Log and can be used later to resynchronize with the primary volume by means of VxVM FastResync. Only the changes made while the snapshot volume was detached from the primary are applied to the snapshot volume during resynchronization.
Standard device
Refers to the primary disk in an EMC primary-mirror disk array (see primary disk).
Storage Checkpoint
Provides a consistent and stable view of a file system image and keeps track of modified
data blocks since the last checkpoint. Unlike a mirror, a Storage Checkpoint does not
create a separate copy of the primary or original data. It creates a block-by-block account
that describes which blocks in the original data have changed and which have not, from
the instant the checkpoint was activated.
A Storage Checkpoint stores its information in available space on the primary file system,
not on a separate or designated device. (Also, the ls command does not list Storage
Checkpoint disk usage; you must use the fsckptadm list command instead.)
TimeFinder
One of several snapshot methods included in Advanced Client. TimeFinder is for making
snapshots of client data on EMC Symmetrix disk arrays.
UFS
The UNIX File System (UFS), which is the default file system type on Sun Solaris. The UFS file system was formerly the Berkeley Fast File System.
Volume
A virtual device configured over raw physical disk devices (not to be confused with a
NetBackup Media Manager volume). Consists of a block and character device.
If a snapshot source exists over a volume, NetBackup automatically uses a volume
mapping method to map the volume to physical device addresses. Any of the Advanced
Client snapshot methods can be used when backing up client data configured over volumes.
For NetBackup, volumes must be created by means of the VERITAS Volume Manager
(VxVM).
Volume group
A logical grouping of disks, created with the VERITAS Volume Manager, to allow more
efficient use of disk space.
VxFS
The VERITAS extent-based File System (VxFS), designed for high performance and large
volumes of data.
VxVM
The VERITAS Volume Manager (VxVM), which provides logical volume management
that can be used in SAN environments.
Installation
This chapter explains how to install NetBackup Advanced Client software on UNIX and Windows platforms.
Prerequisites
NetBackup Enterprise server 5.1 or later must be installed on the master/media
servers. For performing local backups, the master/media server can be running any
supported UNIX or Windows platform. For NetBackup Media Server or Third-Party
Copy Device backups, the NetBackup media server must be installed on Solaris 7, 8,
or 9, or HP-UX 11.00 or 11i.
For a detailed list of platform versions supported by NetBackup Advanced Client,
refer to the NetBackup Release Notes, or to Advanced Client Information on the Web on
page xix.
NetBackup 5.1 or later client software must be installed on clients running Solaris 7, 8,
or 9, HP-UX 11.00 or 11i, or Windows 2000 or 2003.
For Instant Recovery using the VxFS_Checkpoint method, the VxFS File System with
the Storage Checkpoints feature must be installed on clients.
Installing Advanced Client On UNIX
1. Log in as the root user on the NetBackup master server.
2. In a separate window, make sure a valid license key for NetBackup Advanced Client has been installed. To do this, enter the following command to list and add keys:
/usr/openv/netbackup/bin/admincmd/get_license_key
3. Insert the options CD-ROM containing the Advanced Client software in the drive.
4. Change your working directory to the directory where the CD-ROM is mounted.
5. To install Advanced Client software on the NetBackup master server, execute the following:
./install
Since other NetBackup products are included on the CD-ROM, a menu appears.
6. Select the NetBackup Advanced Client option.
7. Enter q to quit selecting options. When asked if the list is correct, answer y.
NetBackup Advanced Client software is installed in
/usr/openv/netbackup/vfms/hardware/os/version/
Where:
hardware is Solaris, HP9000-700, or HP9000-800
os is Solaris7, Solaris8, Solaris9, HP-UX11.00, or HP-UX11.11
version is the NetBackup version level
8. In a clustered environment, the above steps must be done on each node in the cluster.
Note If you are installing in a cluster environment, unfreeze the active node after the installation completes. For information about unfreezing a service group, see the clustering section in the NetBackup High Availability System Administrator's Guide for the cluster software you are running.
Note The NetBackup 5.1 client software must be installed on the clients before
performing the next procedure. For instructions, refer to the NetBackup Installation
Guide for UNIX.
Note If installing in a clustered environment, you must do the following on the active
node of the cluster.
You should also perform this procedure if you are doing either of the following:
1. Execute the following as the root user on the NetBackup master server to check for active backups or restores:
/usr/openv/netbackup/bin/bpps
If more than one bprd appears, wait until the backups and/or restores are complete and then run the /usr/openv/netbackup/bin/bpps command again. When only one bprd shows up, terminate the bprd daemon.
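For example, a typical check-and-terminate sequence looks like the following (a sketch; bprdreq is the standard NetBackup command for stopping the request daemon, so verify the path on your installation):
/usr/openv/netbackup/bin/bpps | grep bprd
/usr/openv/netbackup/bin/admincmd/bprdreq -terminate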
2. You can distribute the Advanced Client software to Solaris and HP clients in either of
two ways:
Note If you are distributing the Advanced Client software to clients located in a cluster,
specify the host names of the individual nodes (not virtual names) in the list of
clients.
a. Distribute the software to all currently defined clients by executing the following
command:
/usr/openv/netbackup/bin/update_clients -Install_ADC
b. Create a file that lists the clients to receive the software, one entry per line, in the form hardware os client_name. For example:
Solaris Solaris7 othersparc
Solaris Solaris8 othersparc
Solaris Solaris9 othersparc
HP9000-800 HP-UX11.11 myhp
In each backup policy, clients must be configured with their correct OS type.
For example, an HP client running HP-UX11.11 must be configured as such.
Execute the following command (all on one line):
/usr/openv/netbackup/bin/update_clients -Install_ADC -ClientList file
Where file is the name of the file that you created in the previous step.
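For instance, assuming a hypothetical list file named /tmp/adc_clients (the path is illustrative only):
cat /tmp/adc_clients
Solaris Solaris8 othersparc
HP9000-800 HP-UX11.11 myhp
/usr/openv/netbackup/bin/update_clients -Install_ADC -ClientList /tmp/adc_clients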
3. Start the NetBackup daemon as the root user on the master server by executing:
/usr/openv/netbackup/bin/initbprd
Note If you are installing in a cluster environment, you must freeze the active node before you begin the installation process so that migrations do not occur during installation. For information about freezing a service group, see the clustering section in the NetBackup High Availability System Administrator's Guide for the cluster software you are running.
1. Log in.
c. To register a new key, click the star icon to open the Add a New License Key
dialog. Type the new license key in the New license key field and click Add.
The new license key appears in the lower part of the dialog box.
3. In a clustered environment, the above steps must be done on each node in the cluster.
Note If you are installing in a cluster environment, unfreeze the active node after the installation completes. For information about unfreezing a service group, see the clustering section in the NetBackup High Availability System Administrator's Guide for the cluster software you are running.
In NetBackup 4.5 GA and 4.5 maintenance packs, snapshot (frozen image) configuration
was client based: a single frozen image configuration file applied to all policies backing up
the client. There was no limitation on the number of frozen image methods used within a
policy to back up a client. This meant that a NetBackup client could be configured with
multiple snapshot methods within the same policy, one method for each snapshot source
designated on the 4.5 Frozen Image Client Configuration display.
When Advanced Client is installed, these 4.5 GA policies are changed as follows:
The snapshot (frozen image) methods specified for the clients in a policy are replaced
with the auto method. The auto method means that NetBackup selects an
appropriate snapshot method at the start of the backup.
and list (ls) the contents of the client_name directory. The configuration information is
contained in files named fi_info.
Note The following procedure results in total removal of the Advanced Client software.
Note If you are uninstalling in a cluster environment, you must first freeze the active node so that migrations do not occur during the uninstall. For information about freezing a service group, see the clustering section in the NetBackup High Availability System Administrator's Guide for the cluster software you are running.
On Solaris:
1. Check the Activity Monitor in the NetBackup Administration Console to make sure
no NetBackup Advanced Client (or pre-5.0 ServerFree Agent or FlashBackup)
backups are active or running (the State field should read Done).
2. To remove the Advanced Client package, execute the following:
pkgrm VRTSnbadc
3. To remove ServerFree Agent packages installed prior to 5.0, execute the following:
For the Offhost and SAN Data Movement Services product:
pkgrm VRTSnbodm
For the Extended Frozen Image Services product:
pkgrm VRTSnbefi
For the Core Frozen Image Services product:
pkgrm VRTSnbfis
For the FlashBackup product:
pkgrm VRTSnbfsh
On HP:
1. Check the Activity Monitor in the NetBackup Administration Console to make sure
no NetBackup Advanced Client (or pre-5.0 ServerFree Agent or FlashBackup)
backups are active or running (the State field should read Done).
Note If you are uninstalling in a cluster environment, unfreeze the active node after the uninstall. For information about unfreezing a service group, see the clustering section in the NetBackup High Availability System Administrator's Guide for the cluster software you are running.
On Windows:
2. Perform the basic uninstall procedure described in the NetBackup Installation Guide for Windows.
Note This chapter applies to the NetBackup Media Server and Third-Party Copy Device
backup methods only. If your backup policies are not using either of these methods,
you may skip this chapter.
Configuration Flowcharts
SAN Configuration Diagram
(Diagram: SAN configuration. A third-party copy device bridges SCSI-attached devices to the Fibre Channel SAN; tape and disk arrays may be directly on the SAN or behind the third-party copy device.)
Supported Peripherals
A complete list of Advanced Client supported peripherals can be found on the VERITAS
support web site. For instructions, refer to Advanced Client Information on the Web on
page xix.
Note If you have a multi-ported SCSI disk array, a fibre channel SAN is not required. See
Offhost Backup Without a SAN (UNIX Only) on page 16.
(Diagram: NetBackup Media Server configuration. The master server, client, and media server communicate over the LAN/WAN; client disks and a robot are on the Fibre Channel SAN, with a passive third-party copy device bridging SCSI devices. The 3pc.conf file on the media server contains client disk information; use the bptpcinfo command to create the 3pc.conf file.)
Third-Party Copy
(Diagram: Third-Party Copy configuration. Client disks and a robot are on the Fibre Channel SAN; an active third-party copy device bridges SCSI-attached devices.)
(Diagram: a zoned SAN. Client disks in zone 1 and zone 2 are reached through separate switches; an active third-party copy device in zone 2 is SCSI-attached to its devices.)
Configuration Flowcharts
The following four charts show the process for setting up configuration files for Media
Server or Third-Party Copy backup. Instructions are included later in this chapter.
(Flowcharts I through IV, not reproduced here, summarize the decision process: run bptpcinfo -a -o - and check whether all OS device paths are visible; if not, correct the st.conf file on Solaris or run ioscan -nf on HP, and see Advanced Client Information on the Web on page xix for help enabling the third-party copy device, discovering LUNs and world-wide names, or obtaining world-wide port names with SANPoint Control; perform a reconfigure reboot as needed; verify that storage target IDs are bound to world-wide names; if you are using the Third-Party Copy Device method and identification descriptors (i=) are not available for all devices, additional configuration is needed; finally, run bpmoverinfo to create the mover.conf file.)
Verify NetBackup Access to SAN Devices
1. On the media server, run the bptpcinfo command, sending the output to the screen:
/usr/openv/netbackup/bin/bptpcinfo -a -o -
2. Examine the bptpcinfo output to see if your OS device paths are listed. If all devices
are listed, go to step 8 for HP or to step 9 for Solaris.
3. For Solaris: If your tape devices are not listed in the bptpcinfo output, make sure
you have target and LUN values for each tape device in the st.conf file.
4. For Solaris: If your disks are not listed in the bptpcinfo output, make sure you have
target and LUN values for each disk in the sd.conf file.
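For example, sd.conf entries take the following general form (target and LUN values here are illustrative; substitute the values for your disks):
name="sd" class="scsi" target=2 lun=0;
name="sd" class="scsi" target=2 lun=1;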
5. If the devices behind the bridge (or third-party copy device) are not listed in the
bptpcinfo output, or if the third-party copy device is not enabled for third-party
copy data movement, consult the VERITAS support website for assistance (see
Advanced Client Information on the Web on page xix).
6. On the bridge or third-party copy device, set the address mapping mode to FIXED. This prevents the addresses from changing when the devices are reset. For help configuring supported third-party copy devices, go to www.support.veritas.com. (See Advanced Client Information on the Web on page xix.)
7. Enter the following to reboot the operating system on the media server:
Solaris:
reboot -- -r
HP:
reboot
8. For HP: Run ioscan -nf to check device visibility.
a. If all devices now appear, enter the following to regenerate HP special files:
insf -e
Then go to step 10 on page 55.
b. If some devices do not appear in the ioscan output, check hardware connections
to the devices that are not appearing. Then repeat step 8.
Note On HP, there is a limit of eight devices per target. For instance, if you have a JBOD
disk array consisting of ten disks, and the array is connected to a bridge, it may be
that only the first eight disks in the array are accessible.
9. For Solaris:
a. Perform an sgscan to list all passthru devices. Check for proper output and
recognition of devices.
Here is sample output from sgscan:
/dev/sg/c0t6l1: Disk (/dev/rdsk/c1t6d1): "SEAGATE ST39175LW"
b. If tape devices still do not show up, make sure you have entries for all SCSI target
and LUN combinations in the sg.links and sg.conf files. Refer to the Media
Manager Device Configuration Guide, Chapter 2, under Understanding the SCSI
Passthru Drivers.
If tape devices are fibre attached, make sure you have entries for the tape
devices in the above files.
If tape devices are behind a bridge (or third-party copy device), make sure
you have entries for the tape devices AND for the bridge/third-party copy
device.
For an example, refer to Solaris only: Example for sg.links, sg.conf, and st.conf
files on page 57.
If you are unsure how to acquire the SCSI target and LUN values for your
configuration, see Advanced Client Information on the Web on page xix for
help with particular devices. For instance, if your tape drives are configured
behind a bridge, router or other fibre-channel device, you may need to telnet into
the device to determine the target ID and LUN for each tape drive.
c. When finished updating the sg.links, sg.conf, and st.conf files, remove
the old sg configuration:
rm /kernel/drv/sg.conf
rem_drv sg
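After removing the old configuration, rebuild and reinstall the sg driver. A typical sequence is shown below (these are the standard Media Manager script locations; max_target and max_lun are placeholders for the highest target and LUN numbers in your configuration, so verify the details in the Media Manager Device Configuration Guide):
/usr/openv/volmgr/bin/sg.build all -mt max_target -ml max_lun
/usr/openv/volmgr/bin/driver/sg.install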
10. Run the bptpcinfo command again to see which devices are now visible to the
media server. Repeat at step 2 if any of your SAN devices are not showing up in the
bptpcinfo command output.
11. If the offhost backup method is NetBackup Media Server, no more device
configuration is required. You can skip the rest of this chapter.
12. When all devices are listed in the bptpcinfo command output, use that information
to fill in the device pathname (p=), serial number (s=), and LUN (l=) in the Device
Checklist on page 58 for each device.
13. You can use VERITAS SANPoint Control to determine the world-wide port names for
the devices.
b. Click on the Storage tab in the left pane, then click on a device in the left pane
(you may have to drill down in the tree).
c. Click the Connectivity tab to find the world-wide port name of the device (Port
WWN). Repeat these steps for each device.
14. Update the Device Checklist on page 58 with the world-wide port names of your
devices.
Note It is important to record this information! It will be needed again, to complete the
configuration.
15. For Solaris: continue with Solaris only: Configure HBA Drivers on page 59. For HP,
continue with Create Backup Configuration Files on page 60.
Solaris only: Example for sg.links, sg.conf, and st.conf files
The following sg.links entries add passthru paths for a device at target 6, LUNs 1, 4, and 5:
type=ddi_pseudo;name=sg;addr=6,1; sg/c\N0t6l1
type=ddi_pseudo;name=sg;addr=6,4; sg/c\N0t6l4
type=ddi_pseudo;name=sg;addr=6,5; sg/c\N0t6l5
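The corresponding sg.conf entries would take the following form, matching the same target and LUN combinations (a sketch based on the standard sg.conf entry format):
name="sg" class="scsi" target=6 lun=1;
name="sg" class="scsi" target=6 lun=4;
name="sg" class="scsi" target=6 lun=5;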
For each tape drive, add a name entry to the st.conf file.
Here is an example name entry:
name="st" class="scsi" target=6 lun=4;
Make sure you have entries for all target and bus combinations for each device.
Device Checklist
Use this checklist or one like it to record information about each of your SAN devices.
Some of this information is provided by the bptpcinfo command (such as device
pathname and serial number), and some has to be obtained by other means as explained
in these procedures. It is vital that the information be recorded accurately.
Type of device (disk or tape) | Device pathname used by UNIX host (p=) | Serial number (s=) | LUN (l=) | World-wide port name (w=)
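For example, a completed row for one of the disks shown in the sample 3pc.conf file later in this chapter might read:
disk | /dev/rdsk/c4t0d0s2 | HP:OPEN-3:30436000000 | 0 | 500060E80276E401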
Note Each time a device is added or removed, the binding must be updated to reflect the
new configuration.
If storage device SCSI target IDs are bound to world-wide port names in your HBA
configuration file, skip this section and go to Create Backup Configuration Files on
page 60.
1. If storage device target IDs are not already bound to world-wide port names, refer to
your Device Checklist on page 58 (filled out in the previous procedure) for the
world-wide names. Use the world-wide names to make the binding for each device.
2. Update your HBA configuration file by binding all SCSI device target IDs to their
associated world-wide port name.
For assistance with your particular HBA file, see Advanced Client Information on
the Web on page xix.
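For instance, with an Emulex lpfc HBA on Solaris, a persistent binding entry in lpfc.conf looks similar to the following (the world-wide port name and target number are illustrative, and other HBA vendors use different files and syntax):
fcp-bind-WWPN="500060e80276e401:lpfc0t20";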
4. To ensure device visibility, repeat the steps described under Verify NetBackup
Access to SAN Devices on page 53.
When you are finished, the bptpcinfo command should list device pathnames and
serial numbers for all of your devices. Update the Device Checklist with those
values if needed.
Note At the start of the backup, NetBackup creates a 3pc.conf file if one does not exist. If
all devices support identification descriptors, you do not need to create or edit the
3pc.conf file. You can skip to mover.conf Description on page 65 and to Create
the mover.conf File on page 77.
3pc.conf Description
In the 3pc.conf file, each SAN device needs a one-line entry containing several kinds of
values. The values required depend on several factors (explained below). Typically, these
include (but are not limited to) the device ID, host-specific device path, and serial number.
One or more of the following are also required: the identification descriptor, logical unit
number (LUN) and world-wide port name. See Determining Requirements on page 64.
Some of this information will be automatically discovered and filled in by the bptpcinfo
command, as described under What bptpcinfo Automatically Provides on page 65. The
procedure for using the bptpcinfo command is under Create the 3pc.conf File on
page 75.
Example 3pc.conf
# devid [a=wwpn:lun] [c=client] [p=devpath] [P=clientpath] [s=sn] [l=lun] [w=wwpn] [W=wwpn] [i=iddesc]
0 p=/dev/rdsk/c0t0d0s2 s=FUJITSU:MAB3091SSUN9.0G:01K52665 l=0
1 p=/dev/rdsk/c0t10d0s2 s=FUJITSU:MAG3091LSUN9.0G:00446161 l=0
2 p=/dev/rdsk/c4t0d0s2 s=HP:OPEN-3:30436000000 l=0 a=500060E80276E401:0
3 p=/dev/rdsk/c4t1d0s2 s=FUJITSU:MAN3367MSUN36G:01X37938 l=0 a=100000E00221C153:0
4 p=/dev/rdsk/c4t3d0s2 s=HITACHI:OPEN-3-CM:20461000000 l=0 i=10350060E800000000000004FED00000000 a=50060E80034FED00:0
5 p=/dev/rdsk/c4t14d0s2 s=HITACHI:OPEN-9:60159003900 l=0 w=500060e802eaff12
6 p=/dev/rdsk/c4t0d1s2 s=HP:OPEN-3:30436000100 l=1 a=500060E80276E401:1 a=1111222233334444:0
7 p=/dev/rdsk/c4t0d2s2 s=HP:OPEN-3:30436000200 l=2 a=500060E80276E401:2
8 p=/dev/rdsk/c4t0d3s2 s=HP:OPEN-3:30436000300 l=3 a=500060E80276E401:3
9 p=/dev/rdsk/c4t0d4s2 s=HP:OPEN-3-CM:30436005100 l=4 a=500060E80276E401:4
10 p=/dev/rdsk/c4t0d5s2 s=HP:OPEN-3:30436002600 l=5 a=500060E80276E401:5
11 p=/dev/rdsk/c4t0d6s2 s=HP:OPEN-3:30436002700 l=6 a=500060E80276E401:6
12 p=/dev/rdsk/c4t0d7s2 s=HP:OPEN-3:30436002800 l=7 a=500060E80276E401:7
13 p=/dev/rdsk/c4t0d8s2 s=HP:OPEN-3:30436002900 l=8 a=500060E80276E401:8
14 p=/dev/rdsk/c4t1d1s2 s=FUJITSU:MAN3367MSUN36G:01X37958 l=1 a=100000E00221C153:1
15 p=/dev/rdsk/c4t1d2s2 s=FUJITSU:MAN3367MSUN36G:01X38423 l=2 a=100000E00221C153:2
16 p=/dev/rdsk/c4t1d3s2 s=FUJITSU:MAN3367MSUN36G:01X38525 l=3 a=100000E00221C153:3
17 p=/dev/rdsk/c4t1d4s2 s=FUJITSU:MAN3367MSUN36G:01X37951 l=4 a=100000E00221C153:4
18 p=/dev/rdsk/c4t1d5s2 s=FUJITSU:MAN3367MSUN36G:01X39217 l=5 a=100000E00221C153:5
19 p=/dev/rdsk/c4t3d1s2 s=HITACHI:OPEN-3-SUN:20461000300 l=1 i=10350060E800000000000004FED00000003 a=50060E80034FED00:1
20 p=/dev/rdsk/c4t3d2s2 s=HITACHI:OPEN-3-SUN:20461000400 l=2 i=10350060E800000000000004FED00000004 a=50060E80034FED00:2
The 3pc.conf file can contain the following types of entries (keyword, if any, is in
parentheses):
device ID (devid)
A unique NetBackup number for the device. In the 3pc.conf file, the device ID numbers
need not be in sequential order, but must be unique.
address (a=wwpn:lun)
The world-wide port name and lun as provided by the bpSALinfo command (see step 3
on page 76 for information on this command). For a device that has multiple FC ports,
there can be multiple a= entries.
Note The disk devices must support SCSI serial-number inquiries or page code 83
inquiries. If a page code inquiry returns an identification descriptor (i=) for a disk,
the serial number is not required.
lun (l=lun)
The devices logical unit number. The LUN allows NetBackup to identify devices that are
attached by SCSI connection to the third-party copy device, bridge, or other SAN device,
or that are directly attached to the fibre channel.
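As an illustration, entry 4 in the example 3pc.conf file above breaks down as follows:
4                                       device ID (devid)
p=/dev/rdsk/c4t3d0s2                    device path on the media server
s=HITACHI:OPEN-3-CM:20461000000         serial number
l=0                                     logical unit number
i=10350060E800000000000004FED00000000   identification descriptor
a=50060E80034FED00:0                    world-wide port name and LUN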
Note w=wwpn is allowed for backwards compatibility with 4.5 clients. If you run the
bpSALinfo or bptpcinfo command, the w=wwpn entry will be converted to
a=wwpn:lun.
The devices fibre channel world-wide port name, which identifies the device on the SAN.
This is a 16-digit identifier, consisting of an 8-digit manufacturer name, and an 8-digit
device name (numeric).
A message showing the world-wide name for a device may be written to the
/var/adm/messages log on the server. Note that there are two versions of the
world-wide name: the node wwn and the port wwn. For Advanced Client, use the port wwn.
On some devices, the world-wide port name can be found on the back of the device or in
the boot-time messages written to the /var/adm/messages log on the NetBackup
media server.
second world-wide port name (W=wwpn)
A disk in a disk array can be assigned to multiple Fibre Channel ports. This is done, for
instance, for load balancing or redundancy, to allow other devices on the SAN to access
the same disk through different ports. The two ports allow NetBackup to select the port by
which the storage device will access the disk. In such a configuration, while w= specifies
the first world-wide port name, W= specifies the second world-wide port name for the
disk. (Note the uppercase W in W=wwpn.)
Note W=wwpn is allowed for backwards compatibility with 4.5 clients. If you run the
bpSALinfo or bptpcinfo command, the W=wwpn entry will be converted to
a=wwpn:lun.
Some of these values are entered automatically when you run the bptpcinfo command.
See Determining Requirements for more information.
Determining Requirements
The following determines which values are required in the 3pc.conf file.
identification descriptor
The identification descriptor is optional, and is not supported by all vendors. (To produce
this descriptor, the device must support a page code inquiry with a type 2 or 3 descriptor
of less than 20 bytes.) The NetBackup bptpcinfo command (explained below) will
detect the device's identification descriptor and place it in the 3pc.conf file if the
identification descriptor is available.
Even when this descriptor is available, some third-party copy devices do not support its
use.
Note If an identification descriptor is available and the third-party copy device supports
it, the descriptor is used to identify the device on the SAN; in this case, there is no
need for the LUN or world-wide name. To determine whether your third-party
copy device supports identification descriptors, go to www.support.veritas.com
(see Advanced Client Information on the Web on page xix).
mover.conf Description
The /usr/openv/volmgr/database/mover.conf file identifies the third-party copy
devices that NetBackup can use for the Third-Party Copy Device backup method. This file
is needed for the Third-Party Copy Device backup method only.
You can use the bpmoverinfo command to create the mover.conf file (see Create the
mover.conf File on page 77). In most cases, the bpmoverinfo command makes the
appropriate entry in the mover.conf file and no further configuration is needed. The
next few sections describe the types of entries that can be made in mover.conf, in case
manual configuration is needed.
[Figure: a third-party copy device with passthru device path /dev/sg/c0t16l0, with tape drive Tape 1 behind it.]
In this example, to use the third-party copy device to send the backup to Tape 1, the mover.conf
file must include the passthru device path of the third-party copy device: /dev/sg/c0t16l0.
Note The /dev/rmt/device_name path will be used if it matches the drive path that
NetBackup selects for the backup. As a rule, this is not a problem, since the
bpmoverinfo command detects all available third-party copy devices (and any
tape devices behind them) and enters them in the mover.conf file. See Create the
mover.conf File on page 77.
[Figure: two third-party copy devices and tape drives Tape 1, Tape 2 (/dev/rmt/2cbn), and Tape 3 (/dev/rmt/3cbn), with Tape 1 SCSI-connected behind a copy device.]
In this example, to use a third-party copy device to send the backup to Tape 2 or to Tape 3, the
mover.conf file can specify the device_name of the tape drive: /dev/rmt/2cbn or /dev/rmt/3cbn.
To use Tape 1, the mover.conf file would need the passthru device path of a third-party copy
device.
Note To use a tape unit as a third-party copy device (such as Tape 3), a SCSI passthru
device path must have been configured for that tape device.
On HP:
/dev/sctl/c6t1l0
That is all you need in the mover.conf file. In most cases, you can use the
bpmoverinfo command to provide this entry.
You can use the following command to make sure the sg driver device path is correct.
This command is also useful for creating the mover.conf.policy_name or
mover.conf.storage_unit_name version of the mover.conf file (see Naming the
Mover File on page 73).
/usr/openv/volmgr/bin/sgscan
Here is some sample sgscan output showing third-party copy devices (see notes
following):
/dev/sg/c0t16l0: Mover: "ADIC Scalar SNC"
Notes:
CNSi indicates a Chaparral device.
The number of entries returned for Crossroads depends on how many
controller LUNS have been configured on that device. The mover.conf file
must contain the /dev/sg path for each controller LUN that is configured on
the Crossroads.
The Spectra Logic tape library does not have separate controller LUNs for the
third-party functionality. For this reason, the sgscan output lists the library as
a Changer rather than as a Mover.
For a Hitachi, HP, or Sun disk array, you must check the HBA binding to
obtain the SCSI target number for the array's ECopy target port, and use that
SCSI target number to identify the correct /dev/sg path in the sgscan output.
An alternative: the mover.conf file can consist of one line specifying the device by
means of the /dev/rmt/device_file_name or /dev/rdsk/device_file_name, where
device_file_name specifies the actual file name of the tape or disk. Note that the tape
device must be the same as the device that NetBackup selects for the backup, and the
disk must be one that is involved in the backup.
Instead of the /dev/rmt/device_file_name or /dev/rdsk/device_file_name path,
you can use the TAPE or DISK keyword. For more information, refer to Keywords in
Mover File on page 70.
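For illustration, any one of the following lines would, by itself, be a complete mover.conf file (the device paths are hypothetical):

/dev/rmt/2cbn
/dev/rdsk/c4t0d1s2
TAPE

The first form names a specific tape drive, the second a disk involved in the backup, and the third lets NetBackup use whichever tape device it selects for the backup.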
SCSI Reserve/Release
For backups that use the third-party copy device method, SCSI reserve/release may be
required to prevent unintended sharing of tape devices by multiple hosts on the SAN.
With SCSI reserve/release, either the media server or the third-party copy device acquires
exclusive control over the tape drive, thus preventing other jobs from accessing the tape
during the backup.
The bptm process logs all SCSI reserve/release commands. For background information
on SCSI reserve/release, refer to the NetBackup Shared Storage Option System Administrator's
Guide.
Note If your mover.conf file contains only /dev/rmt/device_path entries or the TAPE
keyword, SCSI reserve/release will be used for the backup. No further
configuration is needed for SCSI reserve/release.
SCSI reserve/release is configured by means of the mover.conf file. The type of entry to
make in mover.conf depends on the type of tape device and the network connection it is
using, as follows:
If the tape device is a Fibre Channel device (not connected behind a router or bridge)
and does not have third-party copy functionality:
Specify the passthru path of the third-party copy device followed by the
i=reserve_value. For example:
/dev/sg/c6t1l0 i=2000001086100d5e
or
TAPE
DISK
For a third-party copy backup, attempt to use a disk involved with the current backup if
that disk has third-party copy functionality or is behind (SCSI-connected to) a third-party
copy device. This allows better concurrent backup processing, so that two or more backup
jobs can execute simultaneously.
Note The disk's SCSI passthru driver device path must be included in the mover.conf file.
TAPE
For a third-party copy backup, attempt to use the current tape device selected for the
backup if that device has third-party copy functionality or is behind (SCSI-connected to) a
third-party copy device. This has two advantages:
There is no need to specify a device path or passthru driver device path. Instead of
having to enter /dev/rmt/ paths for a number of tape devices, you can use the TAPE
keyword as shorthand for all of them.
Allows better concurrent backup processing, so that two or more backup jobs can
execute simultaneously.
END
Stop searching the mover.conf file for third-party copy devices for the current
third-party copy backup.
If there are two or more third-party copy devices in the mover.conf file, NetBackup tries
them sequentially, starting with the first one listed in the file, until one is found that can
successfully move the data. END means do not look further in the current mover file and
do not look in any other mover files, even if the last device tried was unsuccessful. Note
that if no successful device is found before END is reached, the backup fails.
The END keyword limits the search for a third-party copy device in a mover.conf file
that contains entries for more than one device. This can save you the trouble of deleting
device entries and re-entering them later.
For example, if the mover.conf file contains the following:
/dev/sg/c6t4l0
END
/dev/sg/c6t4l2
/dev/sg/c6t4l3
NetBackup will try to use device /dev/sg/c6t4l0 and will not try the other devices.
i=reserve_value
Use SCSI reserve/release for third-party reservation, if supported by the tape device or by
the third-party copy device to which the tape device is connected. The reserve_value is a
world-wide port name or fibre channel port identifier, as follows.
For the ADIC/Pathlight Gateway, the reserve_value is the world-wide port name of
the ADIC/Pathlight.
For devices made by other vendors, the reserve_value may be the fibre channel port
identifier (destination ID) of the third-party copy device, with two leading zeros. For
example, if the fibre channel port identifier is 231DE4, the reserve_value is 00231DE4.
Please contact the vendor of the device for specifications.
hr
Hold the tape reservation (SCSI reserve/release) when a third-party copy device that is
not a tape device is designated by means of a passthru device path (/dev/sg/ on Solaris
or /dev/sctl/ on HP). If you do not specify the hr keyword, the default is to drop or
omit the reservation.
dr
Omit the use of SCSI reserve/release when a tape device is designated by the TAPE
keyword or its tape device path (such as /dev/rmt/2cbn). If you do not specify the dr
keyword, the default is to hold the reservation.
For example:
/dev/rmt/2cbn
/dev/rmt/3cbn
TAPE dr
In this example, if neither of the specified /dev/rmt devices can use SCSI reserve/release,
NetBackup will try a tape device without the reserve.
to
If the third-party copy device needs additional time to respond to a backup request, you
can increase the timeout value by specifying to followed by the limit in seconds. The
default is 300 seconds (5 minutes). Additional time may be needed, for instance, if the
third-party copy device is running in debug mode.
The following example resets the timeout for third-party copy device /dev/rmt/2cbn to
600 seconds:
/dev/rmt/2cbn to 600
In this example, NetBackup will allow the third-party copy device (accessible through
/dev/rmt/2cbn) ten minutes to respond to a backup request. If the device does not
respond within 10 minutes, NetBackup will try the next third-party copy device listed in
the mover file. If no other devices are listed, the backup will fail.
/dev/sg/c6t1l0 i=reserve_value
/dev/sg/c6t1l0 hr
/dev/sg/c6t1l0
In this example, NetBackup will try to use the third-party copy device specified by
/dev/sg/c6t1l0 and will attempt to use reserve/release by means of the
i=reserve_value. If unsuccessful, NetBackup will try to use the same third-party copy
device and reserve/release by means of the hr keyword (hold the reserve). If unsuccessful,
NetBackup will use the third-party copy device without the reserve.
Note The storage_unit_name in this file name must exactly match the name of the storage
unit as it appears in the Policy storage unit field of the Change Policy dialog.

NetBackup looks for a mover file in the following order:
1. mover.conf.policy_name
2. mover.conf.storage_unit_name
3. mover.conf
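For example, for a backup under a hypothetical policy named payroll that uses storage unit TLD_robot0, NetBackup would use mover.conf.payroll if it exists, otherwise mover.conf.TLD_robot0, otherwise the plain mover.conf file.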
Note You must create a 3pc.conf file if you are using the Third-Party Copy Device
backup method AND some of your devices do not support identification
descriptors (E4 target). Otherwise, you can skip to Create the mover.conf File on
page 77.
2. If the media server does not have access to all disks (due to zoning or LUN-masking
issues), run the following command on the media server:
/usr/openv/netbackup/bin/bptpcinfo -x client_name
where client_name is the name of a NetBackup client on the fibre channel network
where the third-party copy device is located. The 3pc.conf file will be updated with
information about the disks on this network, allowing the media server to see those
disks. This information may have to be edited by adding the world-wide port name
(w=) of each device, as explained in the next two steps.
Note that the entries added by the -x option do not include p=devpath. Instead, they
have c=client and P=clientpath.
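For illustration, entries added by the -x option take a form like the following (hypothetical client name, device paths, and serial numbers):

21 c=orca P=/dev/rdsk/c3t0d0s2 s=HP:OPEN-3:30436001000 l=0
22 c=orca P=/dev/rdsk/c3t0d1s2 s=HP:OPEN-3:30436001100 l=1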
3. If you have SANPoint Control, you can use it to add world-wide name and LUN
information to the 3pc.conf file, by entering the following command on the media
server:
/usr/openv/netbackup/bin/admincmd/bpSALinfo -S SPC_server
4. If you do not have SANPoint Control or it does not support your environment, edit
the 3pc.conf file as follows:
For each storage device listed in the 3pc.conf file, you may need to provide
world-wide port names, depending on what NetBackup was able to discover about
the device and what the third-party copy device supports.
These are the editing tasks:
In the 3pc.conf file, if each device that will be backed up with Advanced Client
has an identification descriptor (i=), and if the third-party copy device supports
the use of identification descriptors, the 3pc.conf file is complete. No editing is
needed; skip the rest of this section and continue with Create the mover.conf
File on page 77.
If the 3pc.conf file does not have an identification descriptor for each device (or
the descriptor is not supported by the third-party copy device), enter the
world-wide port name (w=) for each device. (Obtain the world-wide port name
from your Device Checklist on page 58.)
Note This is required for the Third-Party Copy Device method only.
1. On the NetBackup media server, enter the following command:
/usr/openv/netbackup/bin/admincmd/bpmoverinfo
This creates the following file:
/usr/openv/volmgr/database/mover.conf
The bpmoverinfo command discovers any third-party copy devices available on the
SAN and lists them in the mover.conf file. Any tape drives with third-party copy
capability are listed first.
For a description of the bpmoverinfo command, refer to the NetBackup Commands for
UNIX guide.
Note For bpmoverinfo to correctly list third-party copy devices in the mover.conf file,
the third-party copy devices must already have passthru paths defined. For an
example, see Solaris only: Example for sg.links, sg.conf, and st.conf files on
page 57.
2. If you need to control the circumstances under which a third-party copy device is
used, create a separate mover.conf file for a policy or storage unit:
/usr/openv/volmgr/database/mover.conf.policy_name
or
/usr/openv/volmgr/database/mover.conf.storage_unit_name
For information on these naming formats and possible mover file entries, refer to
mover.conf Description on page 65 and Naming the Mover File on page 73.
Following are example storage environments and mover.conf files.
Example mover.conf file for a site with one third-party copy device
[Figure: one third-party copy device (passthru path /dev/sg/c6t1l0) with access to robot0 and robot1.]
In the above example, backups will use third-party copy device /dev/sg/c6t1l0
specified in the mover.conf file. The backup uses the storage unit (TLD_robot0 or
TLD_robot1) specified for the policy on the Change Policy dialog.
See the next figure for an example configuration involving a disk array with third-party
copy device capability.
Example mover.conf.policy_name file for site with third-party copy capability in disk array
In this example, policy array_1 is configured to back up the client data contained on the
disk array. The backup uses storage unit TLD_robot0 to store the data.
All backups configured in this policy will use the disk array as the third-party copy
device. The mover.conf.array_1 file specifies that array.
Note The client data must reside in the array that is used as the third-party copy device.
See the next figure for an example configuration with two third-party copy devices, where
both devices can use the same robot.
[Figure: client disks on the SAN, third-party copy device-1 (/dev/sg/c6t1l0), third-party copy device-2 (/dev/sg/c6t4l0), and robots robot0 and robot1.]
The above example shows two robots (robot0 and robot1). Robot0 has been assigned
two storage unit names, TLD_robot0 and TLD_robot00. Robot1 has been assigned
one storage unit name, TLD_robot1.
The above example also shows two third-party copy devices, device-1 with a SCSI
passthru device path of /dev/sg/c6t1l0, and device-2 with a SCSI passthru device
path of /dev/sg/c6t4l0.
To allow third-party copy device-1 to use robot0, create a file named
mover.conf.TLD_robot0. In the file, include the device path of device-1
(/dev/sg/c6t1l0).
To allow third-party copy device-2 to use the same robot (robot0), create a file
named mover.conf.TLD_robot00. In the file, include the device path of
device-2 (/dev/sg/c6t4l0). Notice that the file name must refer to a different
storage unit, TLD_robot00, which is assigned to robot0.
To allow third-party copy device-2 to use robot1, create a file named
mover.conf.TLD_robot1 that includes the device path of device-2
(/dev/sg/c6t4l0).
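In summary, this example uses three mover files, each containing a single passthru device path:

mover.conf.TLD_robot0 contains /dev/sg/c6t1l0
mover.conf.TLD_robot00 contains /dev/sg/c6t4l0
mover.conf.TLD_robot1 contains /dev/sg/c6t4l0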
Note The storage_unit_name portion of the mover.conf.storage_unit_name file
name must exactly match the actual name of the storage unit. See under
Configuring an Advanced Client Policy on page 85 for an example Change Policy
dialog showing a storage unit name in the Policy storage unit field.
Notes and Prerequisites
Note If you choose the Perform block level incremental backups option on the policy
attributes tab, the other features of Advanced Client are not available and are
grayed out.
2. Click on Policies.
3. In the All Policies pane, double click on the name of the policy (or right-click to create
a new one).
6. Optional: To select the snapshot method manually, refer to Selecting the Snapshot
Method on page 90. Skip this step if you want NetBackup to select the snapshot
method for you. Automatic Snapshot Selection on page 89 describes how
NetBackup chooses a snapshot method.
7. To create a backup that enables Instant Recovery, select the Retain snapshots for
instant recovery attribute.
This attribute is required for the following types of restore: block-level restore, file
promotion, and image rollback. These are described under Instant Recovery Restore
on page 184. For help in creating a policy for instant recovery backups, refer to
Instant Recovery Configuration on page 117.
8. To reduce the processing load on the client, select Perform offhost backup.
This option may require additional configuration.
a. For a backup performed by an alternate client, select Use alternate client and
enter the name of the alternate client. Refer to Configuring Alternate Client
Backup on page 96 for more information.
b. For a backup performed by a data mover (not by a client), select Use data mover
and select the method:
NetBackup Media Server
Backup processing will be handled by a Solaris or HP NetBackup media server.
Third-Party Copy Device
Backup processing will be handled by a third-party copy device.
Network Attached Storage
Backup processing will be handled by an NDMP host (NAS filer), with the
NAS_Snapshot method. NetBackup for NDMP software is required. For help
configuring a policy for Network Attached Storage and NAS_Snapshot, refer to
the NAS Snapshot Configuration chapter.
c. Specify a policy storage unit or group of storage units in the Policy storage unit
pull-down menu.
Note Any_available is not supported for the following data mover types: NetBackup
Media Server and Third-Party Copy Device. Disk storage units are not supported
for Third-Party Copy Device.
Instead of using a particular storage unit, you can create a storage unit group that
designates devices configured on the SAN. Storage unit groups are described in
the NetBackup Media Manager System Administrator's Guide.
10. Use the Schedules tab to define a schedule, the Backup Selections tab to specify the
files to be backed up, and the Clients tab to specify the clients.
Please note the following:
Only one snapshot method can be configured per policy. If, for instance, you want
to select one snapshot method for clients a, b, and c, and a different method for
clients d, e, and f, create two policies for each group of clients and select one
method for each policy.
Advanced Client policies do not support the ALL_LOCAL_DRIVES entry in the
Backup Selections list.
If you use the Backup Policy Configuration wizard, see Backup Policy
Configuration Wizard on page 100.
For snapshot backups, the maximum pathname length is 1000 characters (as
opposed to 1023 characters for backups that do not use a snapshot method). Refer
to Maximum Pathname Length on page 99 for more information on this
restriction. The NetBackup System Administrator's Guide describes other file-path
rules.
11. Click Close when done with Schedules and Clients tabs.
Advanced Client validation begins. An error message may be displayed; the Details
pane explains the problem. You can click No, resolve the problem, and close the
policy again, or click Yes to override the message.
Use of the auto method does not guarantee that NetBackup can select a snapshot method
for the backup. NetBackup looks for a suitable method based on a number of factors:
The client platform and policy type.
The presence of up-to-date software licenses, such as VERITAS VxFS and VxVM.
How the client data is configured. For instance, whether a raw partition has been
specified for a copy-on-write cache (refer to How to Enter the Cache on page 136),
or whether the client's data is contained in VxVM volumes configured with one or
more snapshot mirrors.
NetBackup uses the first suitable method found.
For greater control, refer to Selecting the Snapshot Method on page 90.
2. Click on Policies. In the All Policies pane, double click on the name of the policy. The
Change Policy dialog appears.
3. Make sure that Perform snapshot backups is selected (for more information on this,
refer to the previous procedure).
Note Only one snapshot method can be configured per policy. If you want to select one
snapshot method for clients a, b, and c, and a different method for clients d, e, and f,
create two policies for each group of clients and select one method for each policy.
auto
NetBackup will select a snapshot method when the backup starts. See Automatic
Snapshot Selection on page 89.
BusinessCopy
For mirror snapshots with Hewlett Packard XP series disk arrays with BusinessCopy
Services. For clients on Solaris or HP.
FlashSnap
For mirror snapshots on alternate clients, with the VxVM FlashSnap feature. UNIX
clients must be at VxVM 3.2 or later; Windows clients must be at VxVM 3.1 or later,
with all the latest VxVM service packs and updates.
FlashSnap is based on the VxVM disk group split and join technology.
NAS_Snapshot
For snapshots of client data residing on an NDMP host. Requires NetBackup for
NDMP software. For help configuring a policy for NAS_Snapshot, refer to the NAS
Snapshot Configuration chapter.
nbu_snap
For copy-on-write snapshots of UFS or VERITAS VxFS file systems. For Solaris clients
only. nbu_snap is not supported in clustered file systems.
nbu_snap requires a designated cache; see How to Enter the Cache on page 136.
ShadowImage
For mirror snapshots with Hitachi Data Systems disk arrays with ShadowImage
(HOMRCF). For clients on Solaris or HP.
TimeFinder
For mirror snapshots with EMC Symmetrix Disk Arrays (with TimeFinder SYMAPI).
For clients on Solaris or HP.
VSP
(VERITAS Volume Snapshot Provider): for snapshots of open and active files. For
clients on Windows 2000 or 2003 only.
Note You can use VSP without Advanced Client, as explained in the NetBackup System
Administrator's Guide for Windows, Volume I. In some cases, however, such as when
the Busy File Timeout has expired, no snapshot is created and the backup job may
continue without backing up the busy file. If you use VSP with Advanced Client,
the backup will either successfully create a snapshot of all files, or the backup job
will fail with a status code 11.
VSS
For snapshots using the Shadow Copy Service of Windows 2003. For clients on
Windows 2003 only. The note under VSP also applies to VSS.
For configuration assistance, please refer to your Microsoft documentation.
VVR
For alternate client backups of a replicated VxVM volume. Requires VxVM 3.2 or later
with the VERITAS Volume Replicator license. For clients on Solaris or HP.
VxFS_Checkpoint
For copy-on-write snapshots (Solaris or HP). This method is not supported by the
FlashBackup policy type.
Requires the Storage Checkpoint feature of VxFS 3.4 or later. For HP, VxFS 3.5 or later
is required.
Note VxFS_Checkpoint requires the NetBackup Advanced Client license and
the VERITAS File System license with the Storage Checkpoints feature. Without
both licenses, the copy-on-write snapshot (Storage Checkpoint) cannot be opened
and the backup fails.
VxFS_Snapshot
For snapshots of Solaris or HP clients on the local host (not offhost), for FlashBackup
policies only. This method requires VxFS 3.4 (Solaris) or 3.3 (HP) or later. This method
also requires a designated cache; see VxFS_Snapshot on page 140 for details. Note
that all files in the Backup Selections list must reside in the same file system.
vxvm
For any of the following types of snapshots with data configured over Volume
Manager volumes (clients on Solaris, HP, or Windows).
For third-mirror snapshots (VxVM 3.1 or later).
For full-sized instant snapshots (VxVM 4.0).
For space-optimized instant snapshots (VxVM 4.0).
Note For further notes relating to these methods, refer to the Snapshot Configuration
Notes chapter.
Cache device path: specify a raw partition for the cache by entering the cache
partition's full path name in the Value field. For example:
/dev/rdsk/c2t0d3s3
This setting overrides the cache specified on Host Properties > Clients > Client
Properties dialog > UNIX Client > Client Settings (see How to Enter the Cache on
page 136).
Do not specify wildcards (such as /dev/rdsk/c2*). See Cache device on page 134
for a complete list of requirements.
Caution The cache partition's contents will be overwritten by the nbu_snap process.
Note If the client is rebooted, snapshots that have been kept must be remounted before
they can be used for a restore. You can use the bpfis command to discover the
images (refer to the bpfis man page or the NetBackup Commands manual). This does
not apply to snapshots for Instant Recovery: NetBackup automatically remounts
them as needed.
Note If the snapshot is made on an EMC, Hitachi, or HP disk array, and you want to use
hardware-level restore, read the Caution under Hardware-Level Disk Restore on
page 172.
Caution If you specify a number that is smaller than the existing number of snapshots,
NetBackup deletes the older snapshots until the number of snapshots equals
that specified for Maximum Snapshots.
2. Click on Policies. In the All Policies pane, double click on the name of the policy. The
Change Policy dialog appears.
4. Add the following directives to the start of the Backup Selections list:
METHOD=USER_DEFINED
DB_BEGIN_BACKUP_CMD=your_begin_script_path
DB_END_BACKUP_CMD=your_end_script_path
For example, if DB_BEGIN_BACKUP_CMD specifies shutdown_db.ksh and
DB_END_BACKUP_CMD specifies restart_db.ksh, the script shutdown_db.ksh is run
before the backup, and restart_db.ksh is run after the snapshot is created.
Basic Requirements
Before configuring a policy for alternate client backup, make sure the following have been
done:
For the FlashSnap and VVR snapshot methods, VxVM 3.2 or later (for UNIX) or
VxVM 3.1 or later (Windows) must be installed and volumes configured over the
primary hosts disks. The VxVM FlashSnap or VVR license must also be installed.
The user and group identification numbers (UIDs and GIDs) associated with the files
to be backed up must be available to both hosts (the primary client and the alternate
backup client).
The primary and alternate clients must be running the same operating system,
volume manager, and file system. For each of these I/O system components, the
alternate client must be at the same level as the primary client, or higher level.
Following are the supported configurations:

Primary client                        Alternate backup client
HP 11i                                HP 11i
VxFS 3.4 or later (VxFS 3.3 for HP)   VxFS, at same level as primary client or higher
VxVM 3.2 or later (UNIX)              VxVM, at same level as primary client or higher
VxVM 3.1 or later (Windows) (1)

Note: for the VVR method, the alternate client must be at exactly the same level as the
primary client.

(1) For VxVM on Windows, use VxVM 3.1 or later with all the latest VxVM service packs
and updates.
Policy type: Choose Standard, FlashBackup, FlashBackup-Windows, MS-Windows-NT,
MS-Exchange-Server, MS-SQL-Server, DB2, or Oracle.

Snapshot method: You can select the auto method, or one of the following:

FlashSnap (for a disk group split configuration, with VxVM 3.2 or later with the
FlashSnap feature)

VVR (for a UNIX replication host; requires VxVM 3.2 or later with the VVR feature)

TimeFinder, ShadowImage, BusinessCopy (the array-specific methods, UNIX only)
Example configurations
3. Client data is on a JBOD array in VxVM volumes with snapshot mirrors configured
To run the backup on the alternate client, choose Standard (for UNIX client) or
MS-Windows-NT (Windows client) as the policy type, select Perform snapshot backups,
Perform offhost backup, Use alternate client, and select the alternate client. On the
Snapshot Options display, specify the FlashSnap method.
If the client data consists of many files, or if you need the ability to do individual file
restore from raw partition backups, select FlashBackup or FlashBackup-Windows as the
policy type.
Note Other combinations of policy type and snapshot method are possible, depending on
many factors, such as your hardware configuration, file system and volume
manager configuration, and installed NetBackup product licenses.
Configuration Tips
Snapshot Tips
In the Backup Selections list, be sure to specify absolute path names. Refer to the
NetBackup System Administrator's Guide for help specifying files in the Backup
Selections list.
If an entry in the Backup Selections list is a symbolic (soft) link to another file,
Advanced Client backs up the link, not the file to which the link points. This is
standard NetBackup behavior. To back up the actual data, include the file path to the
actual data in the Backup Selections list.
On the other hand, a raw partition can be specified in its usual symbolic-link form
(such as /dev/rdsk/c0t1d0s1): do not specify the actual device name that
/dev/rdsk/c0t1d0s1 is pointing to. For raw partitions, Advanced Client
automatically resolves the symbolic link to the actual device.
The Cross mount points policy attribute is not available for policies that are
configured for snapshots. This means that NetBackup will not cross file system
boundaries during a backup of a snapshot. A backup of a high-level file system, such
as / (root), will not back up files residing in lower-level file systems unless those file
systems are also specified as separate entries in the Backup Selections list. To back up
/usr and /var, for instance, both /usr and /var must be included as separate
entries in the Backup Selections list.
On Windows, the \ must be entered in the Backup Selections list after the drive letter
(for example, C:\). For the correct format when using a FlashBackup-Windows policy,
see step 10 on page 105.
For more information on Cross mount points, refer to the NetBackup System
Administrator's Guide.
For backups, make sure the following are set to allow the number of active streams to be
equal to or greater than the number of streams in the Backup Selections list:
Policy attribute: Limit jobs per policy
Schedule setting: Media multiplexing
Storage unit setting: Maximum multiplexing per drive
System configuration setting: Maximum jobs per client
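For example, if the Backup Selections list contains four raw partitions (four streams), each of the above settings must allow at least four concurrent streams.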
This chapter describes FlashBackup and explains how to configure FlashBackup policies.
FlashBackup Capabilities
Restrictions
FlashBackup Capabilities
FlashBackup is a policy type that combines the speed of raw-partition backups with the
ability to restore individual files. The features that distinguish FlashBackup from other
raw-partition backups and standard file system backups are these:
Increases backup performance over standard file-ordered backup methods. For
example, a FlashBackup of a file system completes in substantially less time than
other types of backup, if the file system contains a very large number of files and most
of the file system blocks are allocated.
Individual files can be restored from raw-partition backups.
Backs up the following file systems: VxFS (Solaris and HP), ufs (Solaris), Online JFS
(HP), and NTFS (Windows 2000/2003).
Supports multiple data streams, to further increase the performance of raw-partition
backups when there are multiple devices in the Backup Selections list.
Restrictions
FlashBackup policies do not support file systems managed by HSM.
FlashBackup does not support VxFS storage checkpoints used by the
VxFS_Checkpoint snapshot method.
FlashBackup supports the following I/O system components: ufs, VxFS, and
Windows NTFS file systems, VxVM volumes and LVM volumes, and raw disks. Other
components (such as non-VERITAS volume managers or storage replicators) are not
supported.
Note these restrictions for Windows clients:
FlashBackup-Windows policies do not support the backup of Windows
system-protected files (the System State, such as the Registry and Active
Directory).
FlashBackup-Windows policies do not support the backup of Windows OS
partitions that contain the Windows system files (usually C:).
FlashBackup-Windows policies do not support the backup of Windows System
database files (such as RSM Database and Terminal Services Database).
FlashBackup-Windows policies do not support exceptions to client exclude lists.
2. Click on Policies. Double click on the policy (or right-click to create a new one).
[Figure: Change Policy dialog. Select FlashBackup or FlashBackup-Windows as the policy type, and click Perform snapshot backups (pre-selected for Windows, optional for UNIX).]
3. Select the Policy type: FlashBackup for UNIX clients, or FlashBackup-Windows for
Windows clients.
These policy types allow you to restore individual files from the raw-partition
backup.
FlashBackup and FlashBackup-Windows policies support both tape storage units and
disk storage units.
Note The partition must exist on all clients included in the policy.
7. To shift the backup I/O to an alternate client (for UNIX or Windows clients), or to a
NetBackup media server or third-party copy device (for UNIX clients only), select
Perform offhost backup. See step 8 on page 86 for more instructions on this option.
For FlashBackup, the Use data mover option is supported for UNIX clients only.
8. To reduce backup time when more than one raw partition is specified in the Backup
Selections list, select Allow multiple data streams.
Note For FlashBackup and FlashBackup-Windows policies, a full backup backs up all
blocks in the disk or raw partition selected in the Backup Selections tab (see next
step). An incremental backup backs up only the blocks associated with the files that
were changed since the last full backup.
10. Use the Backup Selections tab to specify the drive letter or mounted volume
(Windows) or the raw disk partition (UNIX) containing the files to back up.
Windows examples:
\\.\E:
\\.\E:\mounted_volume
Note The drive must be designated exactly as shown above (E:\ is not correct). Backing
up the drive containing the Windows system files (usually the C drive) is not
supported.
Solaris examples:
/dev/rdsk/c1t0d0s6
/dev/vx/rdsk/volgrp1/vol1
HP-UX examples:
/dev/rdsk/c1t0d0
/dev/vx/rdsk/volgrp1/vol1
Note On UNIX: The Backup Selections tab must specify the raw (character) device
corresponding to the block device over which the file system is mounted. For
example, to back up /usr, mounted on /dev/dsk/c1t0d0s6, enter raw device
/dev/rdsk/c1t0d0s6. Note the r in /rdsk.
Note Advanced Client policies do not support the ALL_LOCAL_DRIVES entry in the
Backup Selections list.
Note CACHE entries are allowed only when the policy's Perform snapshot backups
option is unselected. If Perform snapshot backups is selected, NetBackup will
attempt to back up the CACHE entry and the backup will fail.
On the policys Backup Selections tab, specify at least one cache device by means of
the CACHE directive. For example:
CACHE=/dev/rdsk/c0t0d0s1
This is the cache partition for storing any blocks that change in the source data while
the backup is in progress. CACHE= must precede the source data entry. See following
example.
Please note:
Specify the raw device, such as /dev/rdsk/c1t0d0s6. Do not specify the block
device, such as /dev/dsk/c1t0d0s6.
Also, do not specify the actual device file name. The following, for example, is not
allowed:
/devices/pci@1f,0/pci@1/scsi@3/sd@1,0:d,raw
When using multiple data streams, you can include multiple entries in the Backup
Selections list.
For example:
CACHE=/dev/rdsk/c1t4d0s0
/dev/rdsk/c1t4d0s7
CACHE=/dev/rdsk/c1t4d0s1
/dev/rdsk/c1t4d0s3
/dev/rdsk/c1t4d0s4
Note Only one data stream is created for each physical device on the client. You cannot
include the same partition more than once in the Backup Selections list.
The directives that you can use in the Backup Selections list for a FlashBackup policy are:
NEW_STREAM
CACHE=raw_partition
UNSET
UNSET_ALL
Each backup begins as a single stream of data. The start of the Backup Selections list up to
the first NEW_STREAM directive (if any) is the first stream. Each NEW_STREAM entry causes
NetBackup to create an additional stream or backup.
Note that all file paths listed between NEW_STREAM directives are in the same stream.
Each of the following two Backup Selections lists generates four backups; each
NEW_STREAM entry starts an additional stream.

Example with raw disk partitions:

/dev/rdsk/c1t0d0s6
NEW_STREAM
/dev/rdsk/c1t1d0s1
NEW_STREAM
UNSET CACHE
CACHE=/dev/rdsk/c1t3d0s4
/dev/rdsk/c1t2d0s5
/dev/rdsk/c1t5d0s0
NEW_STREAM
UNSET CACHE
CACHE=/dev/rdsk/c0t2d0s3
/dev/rdsk/c1t6d0s1

Example with VxVM volumes:

/dev/vol_grp/rvol1
NEW_STREAM
UNSET CACHE
CACHE=/dev/cache_group/rvol2c
/dev/vol_grp/rvol2
NEW_STREAM
UNSET CACHE
CACHE=/dev/cache_group/rvol3c
/dev/vol_grp/rvol3
/dev/vol_grp/rvol3a
NEW_STREAM
UNSET CACHE
CACHE=/dev/cache_group/rvol4c
/dev/vol_grp/rvol4
NAS Snapshot Overview
[Figure: a NetBackup client with Advanced Client, connected over the LAN/WAN to a NAS host; the client accesses the NAS data through a CIFS or NFS mount.]
NetBackup creates snapshots on the NAS-attached disk only, not on storage devices
attached to the NetBackup server or the client.
1. Start the NetBackup Administration Console on the NetBackup for NDMP server as
follows:
On Windows NT/2000/2003: from the Windows Start menu, select Programs,
VERITAS NetBackup, NetBackup Administration Console.
On UNIX servers, enter the following:
/usr/openv/netbackup/bin/jnbSA &
3. For Policy type, select Standard for UNIX clients, MS-Windows-NT for Windows
clients, or Oracle for UNIX clients configured in an Oracle database.
Note The NDMP policy type is not supported for snapshots in this 5.1 release.
Note Although the policy cannot execute without a specified storage unit, NetBackup
does not use the storage unit. The snapshot is created on disk regardless of which
storage unit you select.
For Oracle policies, the policy uses the storage unit you specify, but only for backing
up archive logs and control files.
5. Select Perform snapshot backups and Retain snapshots for instant recovery.
7. From the pull-down under Use data mover, pick Network Attached Storage.
When the policy executes, NetBackup will automatically select the NAS_Snapshot
method for creating the snapshot.
As an alternative, you can manually select the NAS_Snapshot method using the
Advanced Snapshot Options dialog from the policy display. For the Maximum
Snapshots (Instant Recovery only) parameter, refer to the Policy Configuration
chapter of this guide, Selecting the Snapshot Method.
8. On the Schedule Attributes tab, select Instant recovery backups to disk only.
9. For the Backup Selections list, specify the directories or files from the client perspective,
not from the NDMP host perspective. For example:
On a UNIX client, if the data resides in /vol/vol1 on the NDMP host nas1, and
is NFS mounted to /mnt2/home on the UNIX client, specify /mnt2/home in the
policy Backup Selections list.
On a Windows client, if the data resides in /vol/vol1 on the NDMP host nas1,
and is CIFS mounted (mapped to) \\nas1\vol\vol1 on the Windows client,
specify \\nas1\vol\vol1 in the policy Backup Selections list.
The client data must reside on a NAS host and be mounted on the client by means of
NFS on UNIX or CIFS on Windows. For NFS mounts, the data must be manually
mounted by means of the mount command, not auto-mounted.
All paths for a given client in the policy must be valid, or the backup will fail.
The ALL_LOCAL_DRIVES entry is not allowed in the Backup Selections list.
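As an illustration of the NFS mount requirement, using the hypothetical host and paths from the example above, the mount on a Solaris client might be made as follows:

mount -F nfs nas1:/vol/vol1 /mnt2/home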
This chapter explains how to prepare for and configure a policy that uses the Instant
Recovery feature.
The following topics are covered in this chapter:
Instant Recovery Capabilities
Requirements
Restrictions
Instant Recovery Overview
Configuring a Policy for Instant Recovery
Configuring VxVM
Instant Recovery for Databases
Requirements
For snapshots using Storage Checkpoints (using NetBackup's VxFS_Checkpoint
method), all clients must have VxFS 3.4 or later with the Storage Checkpoint feature.
For VxVM snapshot volumes on UNIX, clients must have VxVM 3.2 or later with the
FastResync feature. Windows clients must have Storage Foundation for Windows
version 3.1.
For Instant Recovery with DB2, Oracle, Exchange, or SQL-Server databases, refer to
the appropriate NetBackup database agent guide.
For replication hosts (using NetBackup's VVR method), clients must have VxVM 3.2
or later with the VERITAS Volume Replicator feature.
Restrictions
For snapshots using Storage Checkpoints, Instant Recovery supports file systems with
the Version 4 disk layout or later. Older disk layouts must be upgraded to Version 4 or
later.
No-data Storage Checkpoints (those containing file system metadata only) are not
supported.
Instant Recovery snapshots must not be manually removed or renamed, otherwise the
data cannot be restored.
Block-level restore is available only when restoring files to the original location on the
original client and when the snapshot method used in the backup was
VxFS_Checkpoint. This feature requires the VxFS File System.
Alternate client backup is supported in the split-mirror configuration only, using a
mirror-type snapshot method (vxvm or VVR).
For Instant Recovery backups of data configured on VxVM volumes on Windows, the
VxVM volume names must be 12 characters or less. Otherwise, the backup will fail.
[Figure: a NetBackup master server creates Instant Recovery snapshots A, B, and C of the client data, kept on the same disk as the client data; data can be restored directly from any of these snapshots.]
In the figure above, NetBackup Instant Recovery creates snapshot A of the client data on
disk. One hour later, as scheduled, NetBackup creates snapshot B, also on disk, followed
one hour later when it creates snapshot C. When needed, a user can restore data directly
from disk, from the appropriate snapshot.
Note NetBackup Instant Recovery keeps the snapshot. The snapshot can be used for
restore even if the client has been rebooted.
For more detail on Storage Checkpoints, refer to the VERITAS File System Administrator's
Guide. For an introduction to the copy-on-write process, refer to How Copy-on-Write
Works on page 249.
A Storage Checkpoint has the following features:
Persists after a system reboot or failure.
Identifies the blocks that have changed since the last Storage Checkpoint.
Shares the same pool of free space as the file system. The number of checkpoints is
limited only by the available disk space.
Supports mounting a VxFS 4.0 file system over a VxVM 4.0 volume set (multi-device
system).
2. Click Policies. In the All Policies pane, open a policy or create a new one.
3. For the policy type, select Standard, MS-Windows-NT, or the database agent type
appropriate for the client(s).
8. Click the Advanced Snapshot Options button to select the snapshot method.
If this is a new policy, you can skip this step to let NetBackup select the method (auto
is the default).
a. Select a snapshot method from the pull-down list. For creating an Instant
Recovery snapshot, the available methods are:
auto (UNIX or Windows) - NetBackup selects the snapshot method.
NAS_Snapshot (UNIX or Windows) - uses the NDMP V4 snapshot extension
to create the snapshot on the NAS-attached disk. Refer to the NAS Snapshot
Configuration chapter for help in setting up a policy for NAS_Snapshot.
VxFS_Checkpoint (UNIX only) - uses VxFS Storage Checkpoint to create the
snapshot.
b. Change parameter values for the method, if needed. The parameters are
described under step 6 on page 92.
10. Use the Backup Selections tab to enter the files and folders to be backed up.
If backing up Oracle database clients, refer to the NetBackup for Oracle System
Administrator's Guide for instructions.
Advanced Client policies do not support the ALL_LOCAL_DRIVES entry in the
policy's Backup Selections list.
If you use the Backup Policy Configuration wizard, see Backup Policy
Configuration Wizard on page 100.
11. Use the Clients tab to specify clients to be backed up by this policy.
In the above example, the next Instant Recovery backup will overwrite the mirror
snapshot that was made at 12:00 noon.
Before a mirror can be used for creating a backup for Instant Recovery, it must be
initialized (see Configuring VxVM on page 126).
Configuring VxVM
Note For Instant Recovery backups of data configured on VxVM volumes on Windows,
the VxVM volume names must be 12 characters or less. Otherwise, the backup will
fail.
Before using an Instant Recovery policy for backing up VxVM volumes, one or more
mirrors must be created. The primary volumes must be enabled for FastResync. Note that
on Windows, FastResync is enabled by default.
Windows
There are two ways to create the snapshot mirror:
This shows information for the specified disk group, including the names of the
volumes configured for that group. Create the snapshot by entering the following:
vxassist snapstart \Device\HarddiskDmVolumes\disk_group\Volume_name
UNIX
Wait until the mirror is synchronized (status SNAPDONE, or State field reads Ready
in the volume's properties display).
layout=layout init=active
cachevolname=cache_volume
[fastresync=on]
Where:
Brackets [ ] indicate optional items.
make volume specifies the name of the volume snapshot.
The number for nmirror should equal the number for ndcomirror.
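As a sketch of the usual UNIX sequence for a traditional (third-mirror) snapshot mirror, assuming a volume in a named disk group (see the Volume Manager documentation for the full procedure):

vxassist -g disk_group snapstart volume_name

Then wait for the mirror to reach the SNAPDONE state, as noted above, before running the backup.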
3. Set the Maximum Snapshots (Instant Recovery only) value on the Advanced
Snapshot Options dialog.
a. In the VEA console, right-click on the volume and click Properties from the
pop-up menu.
The FastResync field states whether or not FastResync is enabled.
b. Click Cancel.
a. Create a new mirror by right-clicking on the volume and selecting Snap > Snap
Start.
b. Make sure FastResync is enabled on the mirror. Click OK to create the mirror and
start full synchronization.
3. On the Mirrors tab, ensure that synchronization has completed as indicated by Snap
Ready in the Status field.
nbu_snap
The nbu_snap snapshot method is for Solaris clients only. It is for making
copy-on-write snapshots for UFS or VERITAS VxFS file systems.
The information in this section applies to either Standard or FlashBackup policy
types.
Note nbu_snap is not supported in clustered file systems, either as the selected snapshot
method, or as the default snapctl driver when configuring FlashBackup in the
earlier manner.
Cache device
The cache device is a raw disk partition: either a logical volume or physical disk. This
is used for storing the portions of the clients data that are changed by incoming write
requests while the copy-on-write is in progress.
Specify the actual character special device file (such as /dev/rdsk/c2t0d0s4).
nbu_snap will not work for block special device files.
Enter the full path name of the raw partition. Do not specify wildcards (such as
/dev/rdsk/c2*) as paths.
For the cache device, do not specify an active partition containing valuable data. Any
data in that partition will be lost when the nbu_snap process is complete.
The cache partition must be unmounted.
The cache partition must reside on the same host as the snapshot source (the client's
data to back up).
The partition must have enough space to hold all the writes to the partition that may
occur during the backup. Note that backups during non-working hours normally
require a smaller cache than a backup during peak activity. (See Sizing the Cache
Partition on page 135 for more suggestions on cache size.)
For the Media Server or Third-Party Copy method, the host containing the snapshot
source and cache must be visible to the media server or third-party copy device (refer
to the chapter titled SAN Configuration for Advanced Client).
For the Media Server or Third-Party Copy method, the disk containing the cache must
meet the requirements spelled out under Disk Requirements for Media Server/
Third-Party Copy on page 178.
1. Consider the period in which the backup will occur: the more user activity expected,
the larger the cache required.
You should execute the following procedure at an appropriate period, when your
nbu_snap backups typically run. If user activity at your site is known to vary with the
time of day, a different time could bring very different results.
2. Make sure a raw partition is available on a separate disk (see Cache device on
page 134 for cache partition requirements).
3. During the appropriate backup period, create an nbu_snap snapshot by entering the
following as root:
/usr/openv/netbackup/bin/driver/snapon snapshot_source cache
where snapshot_source is the partition on which the client's file system is mounted, and
cache is the raw partition to be used as copy-on-write cache. For example:
/usr/openv/netbackup/bin/driver/snapon /omo_cat3
/dev/vx/rdsk/zeb/cache
Example output:
matched /omo_cat3 to mnttab entry /omo_cat3
5. If the cache partition is not large enough, the backup will fail with status code 13, file
read failed. The /var/adm/messages log may contain errors such as the
following:
Mar 24 01:35:58 bison unix: WARNING: sn_alloccache: cache
unusable
7. When finished with the snapshot, you can remove it by entering the following:
/usr/openv/netbackup/bin/driver/snapoff snapid
where snapid is the numeric id of the snapshot created at step 3.
Note A snapshot created manually with the snapon command is not controlled by a
NetBackup policy. When run manually, snapon creates a copy-on-write snapshot
only. The snapshot will remain on the client until it is removed by entering
snapoff or the client is rebooted.
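While a snapshot is active, you can list the active snapshots and check cache usage with the snaplist command (a sketch; output not shown here):

/usr/openv/netbackup/bin/driver/snaplist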
a. Use the Host Properties > Clients > Client Properties dialog > UNIX Client >
Client Settings to specify the raw partition in the Default cache device path for
snapshots field. This setting applies to the client in all policies.
b. Use the Advanced Snapshot Options dialog, Cache device path Value field. This
cache setting applies to all clients in the current policy, and overrides the cache
setting in the Client Properties dialog.
If you want NetBackup to be able to select the nbu_snap or VxFS_Snapshot methods
by means of the auto method, specify the cache on the Host Properties > Clients >
Client Properties dialog > UNIX Client > Client Settings as described above.
In a FlashBackup policy: if Perform snapshot backups is NOT selected, you must use
a CACHE= directive in the Backup Selections tab. This cache setting applies to all
clients in the current policy, and overrides the cache setting in the Host Properties
dialog. (This means of configuring cache will be discontinued in a future release.)
VxFS_Checkpoint
The VxFS_Checkpoint snapshot method is for making copy-on-write snapshots (Solaris or
HP only). This is one of several snapshot methods that support Instant Recovery backups.
Note that for VxFS_Checkpoint, the Instant Recovery snapshot is made on the same disk
file system that contains the client's original data.
For VxFS_Checkpoint, VxFS 3.4 or later with the Storage Checkpoints feature must be
installed on the NetBackup clients (HP requires VxFS 3.5).
The VxFS_Checkpoint method is not supported for backing up raw partitions
(whether FlashBackup or Standard policies).
Make sure there is enough disk space available for the checkpoint. The file system
containing the snapshot source should have at least 10% free space in order to
successfully implement the checkpoint.
Note Offhost backup is not supported for a VxFS 4.0 multi-device system.
Block-Level Restore
If only a small portion of a file system or database changes on a daily basis, full restores
are unnecessary. The VxFS Storage Checkpoint mechanism keeps track of data blocks
modified since the last checkpoint was taken. Block-level restores take advantage of this
by restoring only changed blocks, not the entire file or database. This leads to faster
restores when recovering large files.
Refer to Instant Recovery: Block-Level Restore (UNIX Clients Only) on page 185 for
instructions on setting up this feature.
VxFS_Snapshot
The VxFS_Snapshot method is for making copy-on-write snapshots of local Solaris or
HP-UX clients. Offhost backup is not supported with this snapshot method.
Note In a FlashBackup policy, if the Backup Selections list is configured with CACHE=
entries (see Configuring FlashBackup in the Earlier Manner (UNIX only) on
page 106), FlashBackup does support the backup of multiple file systems from a
single policy. For each file system, a separate cache must be designated with the
CACHE= entry.
vxvm
The vxvm snapshot method is for making mirror snapshots with VERITAS Volume
Manager 3.1 or later snapshot mirrors. (On Windows, make sure that VxVM has the
latest VxVM service packs and updates.)
The vxvm snapshot method works for any file system mounted on a VxVM volume.
However, before the backup is performed, the data must be configured with a VxVM
3.1 or later snapshot mirror or a VxVM 4.0 or later cache object (otherwise, the backup
will fail).
For help configuring a snapshot mirror, refer to Creating a Snapshot Mirror of
the Source, below, or refer to your VERITAS Volume Manager documentation.
For help configuring a cache object, refer to your VERITAS Volume Manager
documentation, and to VxVM Instant Snapshots on page 142.
For Instant Recovery backups of data configured on VxVM volumes on Windows,
the VxVM volume names must be 12 characters or less. Otherwise, the backup
will fail.
Note Since VxVM does not support fast mirror resynchronization on RAID 5 volumes,
the vxvm snapshot method must not be used with VxVM volumes configured as
RAID 5. If the vxvm snapshot method is selected for a RAID 5 volume, the backup
will fail.
Note If the Media Server or Third-Party Copy method is used, the disks that make up the
disk group must meet the requirements spelled out under Disk Requirements for
Media Server/ Third-Party Copy on page 178.
FlashSnap
FlashSnap uses the Persistent FastResync and Disk Group Split and Join features of
VERITAS Volume Manager (VxVM).
The FlashSnap snapshot method can be used for alternate client backups only, in the split
mirror configuration, which is described under Split mirror on page 11.
Note FlashSnap supports VxVM full-sized instant snapshots but not space-optimized
snapshots. For more information, refer to VxVM Instant Snapshots on page 142.
On UNIX
The following steps are described in more detail in the VERITAS FlashSnap Point-In-Time Copy
Solutions Administrator's Guide.
1. Deporting a disk group means disabling access to that disk group. See the Volume Manager
Administrator's Guide for more information on deporting disk groups.
e. Move the disks containing the snapshot volume to a separate (split) disk group:
vxdg split diskgroup split_diskgroup snap_volume
If the volume has not been properly configured, you may see an error similar to
the following:
host-name# vxdg split lhdvvr lhdvvr_split SNAP-emc_concat
subdisks on it
Look again at the layout of the disks and the volumes assigned to them, and
reassign the unwanted volumes to other disks as needed. Consult the VERITAS
FlashSnap Point-In-Time Copy Solutions Administrator's Guide for examples of disk groups
that can and cannot be split.
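Before the split disk group can be imported on the alternate client, it must be deported on the primary host; a minimal sketch:

vxdg deport split_diskgroup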
a. Import the disk group that was deported from the primary:
vxdg import split_diskgroup
Note After doing this test, you must re-establish the original configuration to what it was
prior to testing the volumes. To do so: (1) deport the disk group on the alternate
client, (2) import the disk group on the primary client, and (3) recover and join the
original volume group. For directions, refer to For FlashSnap (Solaris and HP): on
page 209.
On Windows
1. On the primary host:
DrivePath=C:\Temp\Mount SNAP-Volume
c. Move the disks containing the snapshot volume to a separate (split) disk group.
The disk group is also deported after this command completes:
vxdg -g DskGrp -n SPLIT-DskGrp split
\Device\HarddiskDmVolumes\diskgroup\snap_volume
a. Rescan to make the deported disk group visible on the secondary host:
vxassist rescan
b. Import the disk group that was deported from the primary:
vxdg -g split_diskgroup import
\Device\HarddiskDmVolumes\split_diskgroup \snap_volume
DrivePath=C:\Temp\Mount
VVR
The VVR snapshot method relies on the VERITAS Volume Replicator, which is a licensed
component of VxVM. The Volume Replicator maintains a consistent copy of data at a
remote site. Volume Replicator is described in the VERITAS Volume Replicator
Administrator's Guide.
The VVR snapshot method can be used for alternate client backups only, in the data
replication configuration, described under Data Replication (UNIX Only) on page 13.
VVR makes use of the VxVM remote replication feature. The backup processing is done
by the alternate client at the replication site, not by the primary host or client.
Note VVR supports VxVM instant snapshots. For more information, refer to VxVM
Instant Snapshots on page 142.
Name Registration
Inband Control (IBC) messages are used to exchange control information between
primary and secondary hosts. A name must be registered at both the primary and the
secondary host for each replicated volume group before IBC messaging can be used. The VVR
snapshot method assumes that the application name is APP_NBU_VVR. To avoid an
initial backup failure, you should register that name as described in step 1 on page 148.
Note If APP_NBU_VVR is not registered, NetBackup will register the name when the
first backup is attempted, but the backup will fail. Subsequent backups, however,
will succeed.
replication_link
3. On the secondary host, receive the IBC message from the primary host:
vxibc -g diskgroup -R10 receive APP_NBU_VVR replicated_group
NAS_Snapshot
NetBackup can make point-in-time snapshots of data on NAS (NDMP) hosts using the
NDMP V4 snapshot extension. The snapshot is stored on the same device that contains the
NAS client data. From the snapshot, you can restore individual files or roll back a file
system or volume by means of the Instant Recovery feature.
Note NetBackup for NDMP software is required on the server, and the NAS vendor must
support the NDMP V4 snapshot extension.
For help in setting up a policy for NAS snapshot, see the NAS Snapshot Configuration
chapter.
Configuration Checklist
This checklist includes major caveats and important information. READ THIS TABLE
before setting up your disk arrays for the array-specific snapshot methods. The right
column refers to sources of further information.
- If you want your client data configured over Volume Manager volumes, make sure
your arrays and operating system are supported by Volume Manager (VxVM).
Refer to the NetBackup Release Notes, or see Advanced Client Information on the
Web on page xix.
- Make sure the client data is correctly mirrored to secondary disks in the array.
See Configuring Primary and Secondary Disks on page 156.
- When configuring a backup policy, be sure to select a snapshot method that
supports your arrays. See The Snapshot Methods on page 151.
- For NetBackup Media Server or Third-Party Copy Device offhost methods: ask
your array support technician to configure your array as follows: the NetBackup
clients must have access to primary and secondary disks in the array, and the
media server must have access to the secondary disks. See Disk Configuration
Requirements on page 154.
- Solaris: If client data is configured over Volume Manager volumes, label all
secondary disks using the format command (label option). See Disk Configuration
Requirements on page 154.
- Solaris: The EMC Symmetrix array must be configured in Common Serial Number
Mode to support multiple client SCSI and/or fibre channel connections. See
Multiple Connectivity to EMC Array: Common Serial Number mode on page 156.
- Do not include secondary disks in a Volume Manager disk group. Be sure to follow
this and other restrictions when using Volume Manager. See Disk Types on
page 168.
- Read the Best Practices section. See Best Practices on page 172.
Overview
This section describes the array-specific snapshot methods provided in Advanced Client,
explains the need for data mirroring, and introduces terms used in this chapter.
As shown in the following table, each of the array methods must be used with its own
array type.
Note As an alternative, the vxvm snapshot method can be used in backing up any of the
above disk arrays on either Solaris or HP, if the client data is configured over
Volume Manager volumes.
and
If the client's data is distributed across two or more primary disks by means of a VxVM
volume, an equal number of mirror disks must also contain the same data.
Disk Terms
The terms used in this manual for array disk mirroring are primary and mirror (or primary
and secondary). Some array vendors refer to these as follows:
EMC: The primary is called the standard, and the mirror is called a BCV.
Hitachi and HP: Primary and secondary are called primary volume and secondary
volume.
[Figures: three example configurations. In each, a NetBackup client and a NetBackup
media server/alternate client are connected over the LAN, and the arrays contain a
primary disk and a mirror disk. In the first configuration the arrays are attached
over SCSI, in the second over Fibre Channel, and in the third through a Fibre
Channel-to-SCSI bridge.]
Caution If Common Serial Number Mode is not configured for an EMC disk array that
has multiple client and media server connections, the backup may fail.
Note If a mirror disk is not correctly associated and synchronized with the primary disk,
a snapshot of the client's data cannot be made. (A snapshot has to be made on the
mirror, not on the primary.) In that case, if the backup policy is configured with a
mirror-type snapshot, the backup will fail.
EMC Symmetrix
For an EMC Symmetrix disk array on the NetBackup client, you need to create device
groups, add primary and mirror (secondary) devices to the groups, and associate or pair
the primaries with the secondaries. Once associated, the secondary disks must be
synchronized with the primary disks. During synchronization, the primary disks are
copied to the secondaries.
Use the following commands.
Note Please refer to your EMC TimeFinder SYMCLI documentation for more details on
these commands.
symdg
Creates a disk group.
symld
Adds primary disk to the disk group.
symbcv
Associates a secondary disk with a primary disk.
1. Create a disk group that will contain any number of primary and secondary disks.
symdg create nbfim_test
Creates disk group named nbfim_test.
4. Synchronize the secondary disks with the primaries in the disk group.
symmir -g nbfim_test establish
Pairs, or associates, the primary with the mirror, and synchronizes the mirror with the
primary. If there are multiple primaries and mirrors, they are paired according to the
order in which they were added to the group.
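Taken together, the configuration sequence might look like the following sketch. The
Symmetrix device names shown (001 and 005) are hypothetical; substitute the actual
device names from your array configuration:
symdg create nbfim_test
symld -g nbfim_test add dev 001
symbcv -g nbfim_test associate dev 005
symmir -g nbfim_test establish
symmir -g nbfim_test query
The final query command reports the pair state; when synchronization is complete,
the devices are shown as Synchronized.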
1. Create a configuration file for your primary disks. Use this path and file name:
/etc/horcmX.conf
2. Create a configuration file for your mirror disks, using the same path and file name as
above, but with a different instance number.
For example: /etc/horcm1.conf
Following are two example files. Note that entries must be separated by spaces.
Except for comment lines (#), the file must contain the HORCM_MON, HORCM_CMD,
HORCM_DEV, and HORCM_INST parameters, followed by appropriate entries
(explained below).
Example file for the primary disks (/etc/horcm0.conf). Here turnip is the host where
the configuration file resides, and horcmgr0 is the port name for this instance (it
must also be entered in the /etc/services file):

HORCM_MON
#host       service     poll(10ms)  timeout(10ms)
turnip      horcmgr0    1000        3000

HORCM_CMD
#cmd_dev_file  cmd_dev_file  cmd_dev_file
/dev/rdsk/c2t8d14s2

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#

HORCM_INST
#dev_group  partner host  partner service
wiltest     turnip        horcmgr1
HORCM_MON
host: the host where this configuration file resides.
service: the port name of the RAID Manager instance (for this configuration file) to be
registered in the /etc/services file.
poll: the interval at which the disks are monitored, expressed as tens of milliseconds.
timeout: time-out period for attempting to communicate with the partner service,
expressed as tens of milliseconds.
HORCM_CMD
Below is sample output showing a command device file for an Hitachi device and for
an HP device.
Command
device files p=/dev/rdsk/c2t8d14s2 s=HITACHI:OPEN-9-CM:60159001C00
(note CM): p=/dev/rdsk/c2t5d35s2 s=HP:OPEN-3-CM:30436002500
HORCM_DEV
The dev_group and dev_name parameters are used on the pair configuration
commands described later in this section.
port #: the port number specified for the disk, configured by means of the array's
dedicated console (not from a NetBackup host).
Target ID: the SCSI or fibre channel target ID number of the disk, configured by
means of the array's dedicated console (not from a NetBackup host).
LUN: the SCSI or fibre channel logical unit number of the disk, configured by means of
the array's dedicated console (not from a NetBackup host).
MU: a numeric mirror descriptor for cascading disks (default 0). If you are not using
cascading disks, this value may be left blank. A cascading disk has more than one
mirror (secondary) associated with a given primary.
HORCM_INST
Note The partner service value must be entered in the /etc/services file.
Example file for the mirror disks (/etc/horcm1.conf). The file contains the same
parameters (HORCM_MON, etc.), but the disk-related entries refer to the secondary
disks, and horcmgr1 is the port name for this instance (it too must be entered in
the /etc/services file):

HORCM_MON
#host       service     poll(10ms)  timeout(10ms)
turnip      horcmgr1    1000        3000

HORCM_CMD
#cmd_dev_file  cmd_dev_file  cmd_dev_file
/dev/rdsk/c2t8d14s2

HORCM_DEV
#dev_group  dev_name  port#  TargetID  LU#  MU#
wiltest     dev2      CL2-A  16        33

HORCM_INST
#dev_group  partner host  partner service
wiltest     turnip        horcmgr0
Start the RAID Manager daemons by entering the following, where each x is the
instance number of a configuration file:
/bin/horcmstart.sh x x
For the above example files, the command would be:
/bin/horcmstart.sh 0 1
The daemons must be running in order to configure your primary and secondary disks.
Before entering the pair commands, set the HORCC_MRCF environment variable so
that the RAID Manager commands apply to the local mirrors. Bourne shell:
HORCC_MRCF=1
export HORCC_MRCF
C shell:
setenv HORCC_MRCF 1
To check the status of the disks, enter the pairdisplay command, for example:
pairdisplay -g wiltest
Resulting output:
Group PairVol L/R Port# TID LU-M Seq# LDEV# P/S Status % P-LDEV# M
Note If no primary-secondary associations (pairings) exist, all disks are listed as SMPL in
the P/S column. To create a primary-secondary pairing, see If disks are not
paired: on page 166.
Group
The device group name, as defined under dev_group in the configuration files.
PairVol
Lists the devices by device name. In the above output, dev1 is listed twice: the first line is
the primary disk, the second is the mirror (secondary). This is shown under the P/S
column: P-VOL indicates the primary, S-VOL the secondary.
L/R
Indicates local or remote host, with respect to the current instance number.
Port#
The port number for the disk, configured by means of the array's dedicated console (not
from a NetBackup host).
TID
The SCSI or fibre channel target ID number of the disk, configured by means of the array's
dedicated console (not from a NetBackup host).
LU-M
LU indicates the SCSI or fibre channel logical unit number of the disk, configured by
means of the array's dedicated console (not from a NetBackup host). M is the numeric
mirror descriptor for cascading disks. A cascading disk has more than one mirror
(secondary) associated with a given primary.
Seq#
This is the unit serial number of the array.
LDEV#
Logical device number of the disk.
P/S
Indicates whether or not the disk is configured in a primary-secondary pair:
SMPL: the disk is not paired (associated) with any other disk.
P-VOL: the disk is the primary in a pair.
S-VOL: the disk is the secondary in a pair.
Status
PAIR: the secondary disk in the pair is synchronized with the primary.
PSUS: the pair is split (primary disk).
SSUS: the pair is split (secondary disk).
COPY: a synch or split is in progress. If synchronizing, the status changes to PAIR at
completion of the COPY; if splitting, the result is PSUS for primary disk, or SSUS for
secondary disk.
Note If a backup is attempted while a disk is split (PSUS, SSUS), the backup fails with a
status code 11. If a backup is attempted while a disk is in the COPY state, there are
two possible results: if the disks synchronize (shown as PAIR), the backup proceeds;
if the disks split (PSUS, SSUS), the backup fails with a status code 11.
%
Shows the percentage of the status that has completed.
P-LDEV#
The LDEV number of the partner disk in the pair.
M
Indicates whether the secondary is writable, as a result of being split from the primary.
Note If a mirror-type snapshot backup attempts to access a disk that is split or not paired,
the backup fails with a status code 11.
If disks are split, resynchronize them by entering the following:
pairresync -g groupname -d dev_name
where groupname is the name listed under dev_group, and dev_name is the device
name, as defined in the configuration files. To resynchronize the disks listed as split
(PSUS, SSUS) in the above example (see Resulting output: on page 164), enter:
pairresync -g wiltest -d dev2
If disks are paired but need to be unpaired or otherwise reconfigured, you must split
them and create a new association:
1. To split the secondary disk from the primary but maintain the pair association, enter
the following:
pairsplit -g groupname -d dev_name
where groupname is the name listed under dev_group, and dev_name is the device
name, as defined in the configuration files. The pairdisplay command will show a
status of PSUS and SSUS.
For example:
pairsplit -g wiltest -d dev1
This splits the secondary from the primary in the dev1 pair.
2. To split the secondary disk from the primary and remove the pair association between
them, enter the following:
pairsplit -g groupname -d dev_name -S
where -S means break the pair association. The pairdisplay command will show
SMPL in the P/S column for the affected disks, meaning the disks are no longer
paired.
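For example:
pairsplit -g wiltest -d dev2 -S
This breaks the pair association for dev2 in the wiltest group; pairdisplay will then
show SMPL for both disks.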
For more information on array configuration, refer to the documentation provided by
the array's vendor.
Disk Label
On Solaris only: If client data is configured in Volume Manager volumes, be sure to label
all secondary devices using the format command (label option). Labeling the
secondary disks prevents Volume Manager from marking the disks as disabled (if they are
split from their primary disks) during a system reboot.
While a secondary disk is synchronized with its primary, the secondary is invisible to
Volume Manager. When the secondary is split off from its primary disk, the secondary
becomes visible again. If the secondaries are labeled (using the format label
command), Volume Manager will not disable the disks when they are split.
Disk Types
There are important restrictions involving the use of Volume Manager with Advanced
Client.
Note If these restrictions are not observed, the backup will fail.
[Figure: VxVM disk groups over Fibre Channel, showing primary disks P1 through P6
and their mirror disks M1 through M6 in EMC, Hitachi, and HP arrays.]
As shown above, no secondary (mirror) disks should be included in VxVM disk groups,
and groups must contain disks of the same vendor.
Note These restrictions apply when using any of the array-specific snapshot methods;
they do NOT apply if you are using the vxvm snapshot method.
For alternate client backup, the temporary disk group is named as follows:
client_name_diskgroup_name_clone
While the backup is in progress, this clone appears in the output of the Volume Manager
vxdg command. This is normal. When the backup completes, NetBackup automatically
removes the disk group clone.
[Figure: vol01 on primary_disk1, mirrored as vol01 on mirror_disk1.]
Best Practices
The recommendations in this section apply primarily to the use of the array-specific
snapshot methods and Volume Manager, except where noted.
This can be a problem if you are attempting to restore a snapshot of one of the
file systems or one of the VxVM volumes that share the same disk: the other file
systems or volumes sharing the disk may have older data that you do not want
to write back to the primary. When the hardware-level disk restore takes place,
the older data will replace the newer data on the primary disk.
[Figure: backup policies in conflict. /file_sys1 (on partition 0) and /file_sys2 (on
partition 1) reside on the same disk. Backup policy A's Backup Selections list
contains /file_sys1; backup policy B's Backup Selections list contains /file_sys2.]
Note Snapshot disk locks are applied to the entire disk: when a backup job requires a
snapshot, the entire disk is locked.
To avoid this conflict, see Avoiding Concurrent Access Conflicts later in this chapter.
The above diagram shows /dev/vx/rdsk/dg/vol_1 on a single disk. The same conflict
will occur if /vol_1 is distributed across two or more disks.
To avoid this conflict, see Avoiding Concurrent Access Conflicts later in this chapter.
The conflict occurs even though the two backups are attempting to access different
volumes. This happens because the array-specific snapshot methods split the mirror
disk from the primary disk at the disk device layer, not at the volume layer.
[Figure: Backup Policies in Conflict: Two Backups Accessing Volumes Distributed on
Same Disks. Backup policy A's Backup Selections list contains /dev/vx/rdsk/dg/vol_1;
backup policy B's list contains /dev/vx/rdsk/dg/vol_2; both volumes are distributed
across the same two disks, A and B.]
Backup policy A starts to back up /vol_1; both disks A and B are locked to
make a snapshot of /vol_1. Backup policy B attempts to back up /vol_2 and
requests a snapshot: disks A and B are already locked, so access is denied.
If the data to back up is configured in Volume Manager (VxVM) volumes, use the
vxvm snapshot method. The vxvm method allows snapshot backups to run
concurrently without conflicts, provided that the backup data consists of file systems
mounted on VxVM volumes. See Creating a Snapshot Mirror of the Source on
page 141 for help with the vxvm method.
Use the Volume Manager administration interface to determine which disks the
volumes are configured on, and configure the volumes on different disks.
Disk Requirements for Media Server/ Third-Party Copy
ALL_LOCAL_DRIVES
The policy's Backup Selections list must not contain the ALL_LOCAL_DRIVES entry.
Storage Units
Any_available is not supported for NetBackup Media Server and Third-Party Copy
Device backup methods.
Disk storage units are not supported for the Third-Party Copy Device method.
Multiplexing
The Third-Party Copy Device backup method is incompatible with multiplexing (the
writing of two or more concurrent backup jobs to the same storage device). To prevent
multiplexing on a third-party copy backup, you must set Maximum multiplexing per
drive to 1 (on the Add New Storage Unit or Change Storage Unit dialog).
Solaris:
/dev/rdsk/c1t3d0s3
HP:
/dev/rdsk/c1t0d0
Performing a Backup
Before proceeding, please note the following for the array integration snapshot methods.
For the EMC TimeFinder, Hitachi ShadowImage, or HP BusinessCopy snapshot method,
the client data to be backed up must reside on a mirror disk made by the corresponding
vendor (EMC, Hitachi, or HP). Assistance from the disk array vendor's technical support
may be required. For NetBackup-related items, refer to the chapter titled Snapshot
Configuration Notes.
Automatic Backup
The most convenient way to back up client data is to configure a policy and then set up
schedules for automatic, unattended backups. To use NetBackup Advanced Client, you
must enable snapshot backup as described in the appropriate configuration chapter of this
guide. To add new schedules or change existing schedules for automatic backups, you can
follow the guidelines in the NetBackup System Administrator's Guide.
Manual Backup
The administrator can use the NetBackup Administration interface on the master server to
execute a backup for a policy. To use NetBackup Advanced Client, you must enable
snapshot backup as described in the appropriate configuration chapter of this guide.
See the NetBackup System Administrator's Guide for instructions on doing manual backups.
Performing a Restore
You can use the Backup, Archive, and Restore interface to restore individual files or
directories, or a volume or raw partition. See the NetBackup Backup, Archive, and Restore
Getting Started Guide for instructions on performing the restore. The following sections
include restore notes and procedures unique to certain components of Advanced Client.
Note In the Backup, Archive, and Restore interface, set the policy type to FlashBackup
for UNIX clients and FlashBackup-Windows for Windows clients.
After a raw partition restore of a VxFS file system, a file system consistency check
(fsck) is usually required before the file system can be mounted.
For Windows clients
To restore an entire raw partition, ensure that the partition is mounted
(designated as a drive letter) but not in use. (For this reason, you cannot perform a
raw partition restore to the root partition, or to the partition on which NetBackup
is installed.) If the partition is being used by a database, shut down the database.
The partition must be the same size as when it was backed up; otherwise, the
results of the restore are unpredictable.
1. Stop all applications (on any nodes) that are using the file system.
5. Share the mounted file system again, and restart applications (if any).
Important Notes
Block-level restore requires the VxFS File System.
Block-level restore is available only when restoring files to the original location on the
client, AND when the snapshot method used for the Instant Recovery backup was
VxFS_Checkpoint.
If the snapshot method for the backup was VxFS_Checkpoint and the files to be
restored are in an Oracle database, block-level restore is automatically enabled.
After this file is created, all subsequent restores of the client's data will use block-level
restore.
Note When block-level restore is activated, it is used for all files in the restore. This may
not be appropriate for all of the files. It may take longer to restore a large number of
small files, because they must first be mapped.
For some snapshot methods on UNIX clients, large files that have had many changes
since the backup can be recovered more quickly by means of file promotion. File
promotion optimizes single-file restore.
Notes on VxFS_Checkpoint:
File promotion requires the VxFS File System version 4.0 or later.
File promotion can be done only from the last Instant Recovery snapshot that was
made with the VxFS_Checkpoint method.
File promotion is available only when restoring files to the original location on the
original client.
Notes on NAS_Snapshot:
File promotion is available when restoring to the original volume on the original
client.
File promotion can be done from older snapshots, but any newer NAS snapshots
are deleted after the file promotion takes place.
The file system requirements depend on the NAS vendor. For further
requirements specific to your NAS vendor, see the NetBackup for NDMP Supported
OS and NAS Appliance Information online document (refer to the preface for help
accessing that document).
Notes on Rollback
Rollback can be done only from backups that were enabled for Instant Recovery and
made with the VxFS_Checkpoint, vxvm, or NAS_Snapshot methods.
For backups made with the VxFS_Checkpoint method, rollback requires the VxFS File
System 4.0 or later and Disk Layout 6. For NAS_Snapshot, the file system
requirements depend on the NAS vendor.
Rollback deletes any snapshots (and their catalog information) that were created after
the creation-date of the snapshot that you are restoring.
Rollback deletes all files that were created after the creation-date of the snapshot that
you are restoring. Rollback returns a volume to a given point in time. Any data
changes or snapshots that were made after that time are lost.
Rollback is available only when restoring the file system or volume to the original
location on the client.
When a rollback of a file system is initiated, NetBackup verifies, by default, that the
primary file system does not contain files created after the snapshot was made;
otherwise, the rollback aborts.
3. Click Actions > Specify NetBackup Machines to specify the server, source client,
policy type, and destination client.
Instant Recovery backups are displayed in the Backup History window, for all dates
(you cannot set a range).
6. In the Directory Structure list, click the check box next to the root node or a mount
point beneath root.
You can select a file system or volume, but not lower-level components.
The only available destination option is Restore everything to its original location.
8. For file systems, you can choose to skip file verification by placing a check in the
Overwrite existing files option.
Caution Click on Overwrite existing files only if you are sure you want to replace all the
files in the original location with the snapshot. Rollback deletes all files that
were created after the creation-date of the snapshot that you are restoring.
If Overwrite existing files is not selected, NetBackup performs several checks on the
file system as described under Notes on Rollback on page 187. If the checks do not
pass, the rollback aborts and a message is written to the Task Progress tab stating that
rollback could not be performed because file verification failed.
The rest of the procedure is identical to a normal restore as explained in the NetBackup
Backup, Archive, and Restore Getting Started Guide.
2. From the Select for Restore drop-down list, select Restore from Point in Time
Rollback.
3. Click File > Specify NetBackup Machines and Policy Type to specify the server,
source client, policy type, and destination client.
5. In the All Folders pane, click the check box next to the root node or a mount point
beneath root.
You can select a file system or volume, but not lower-level components.
Note The only destination option is Restore everything to its original location.
7. For file systems, you can choose to skip file verification by placing a check in the
Overwrite the existing file option.
Caution Click on Overwrite the existing file only if you are sure you want to replace all
the files in the original location with the snapshot. Rollback deletes all files that
were created after the creation-date of the snapshot that you are restoring.
If Overwrite the existing file is not selected, NetBackup performs several checks on
the file system as described under Notes on Rollback on page 187. If the checks do
not pass, the rollback aborts and a message is written to the progress log stating that
rollback could not be performed because file verification failed.
The remainder of the procedure is identical to a normal restore as explained in the
NetBackup Backup, Archive, and Restore Getting Started Guide.
[Figure: restore over the LAN.]
1. Media server reads data from local storage.
2. Media server sends the data to the client over the LAN.
3. Client restores the data to disk (disk can be locally
attached or on SAN).
Restore over the SAN to a host acting as both client and media server. This requires
the FORCE_RESTORE_MEDIA_SERVER option in the server's bp.conf file (see the
NetBackup System Administrator's Guide for details on this option).
[Figure: restore over the SAN to a host acting as both client and media server.]
Restore directly from a snapshot (this is not the Instant Recovery feature): if the Keep
snapshot after backup option was turned on for the backup, the data can be restored
from a mirror disk by restoring individual files from the snapshot, or by restoring the
entire snapshot. Note that this type of restore must be done from the command
prompt (using, for instance, a copy command such as UNIX cp), not from the
NetBackup Administration Console.
For details, refer to Restoring from a Disk Snapshot on page 193.
Note Unless the backup was made with the Instant Recovery feature, you cannot restore
from a snapshot by means of the Backup, Archive, and Restore interface; you must
perform the restore manually at the command line.
On UNIX
1. To list the identifiers of current snapshots, use the bpfis command with the query
option:
/usr/openv/netbackup/bin/bpfis query
This returns the IDs (FIS IDs) of all current snapshots. For example:
INF - BACKUP START 3629
INF - FIS IDs: 1036458302
INF - EXIT STATUS 0: the requested operation was successfully completed
2. For each snapshot identifier, enter bpfis query again, specifying the snapshot ID:
/usr/openv/netbackup/bin/bpfis query -id 1036458302
This returns the path of the original file system (snapshot source) and the path of the
snapshot file system. For example:
INF - REMAP FILE BACKUP /mnt/ufscon USING /tmp/_vrts_frzn_img_26808/mnt/ufscon
OPTIONS:ALT_PATH_PREFIX=/tmp/_vrts_frzn_img_26808,FITYPE=MIRROR,
MNTPOINT=/mnt/ufscon,FSTYPE=ufs
In this example, the primary file system is /mnt/ufscon and the snapshot file
system is /tmp/_vrts_frzn_img_26808/mnt/ufscon.
For further examples using bpfis, see the Managing Snapshots from the Command
Line appendix in this manual.
3. Copy the files from the mounted snapshot file system to the original file system.
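For example, using the paths above, an individual file could be copied back with an
ordinary copy command (myfile is a hypothetical file name):
cp -p /tmp/_vrts_frzn_img_26808/mnt/ufscon/myfile /mnt/ufscon/myfile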
If the snapshot method was FlashSnap, you can restore the snapshot volume as follows:
1. Unmount the snapshot source (original file system) and the snapshot file system on
the alternate client:
umount original_file_system
umount snapshot_image_file_system
To locate the file systems, refer to step 1 and step 2 on page 193.
If vxdg list does not show the disk group, the group might have been
deported. You can discover all the disk groups, including deported ones, by
entering:
vxdisk -o alldgs list
The disk groups listed in parentheses are not imported on the local system.
3. Import and join the VxVM disk group on the primary (original) client:
vxdg import SPLIT-primaryhost_diskgroup
vxrecover -g SPLIT-primaryhost_diskgroup -m
4. Start the volume and snap back the snapshot volume, using the
-o resyncfromreplica option:
vxvol -g SPLIT-primaryhost_diskgroup start SNAP_diskgroup_volume
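The snap back itself can then be done with the standard VxVM snapback operation;
the following is a sketch using the names above:
vxassist -g SPLIT-primaryhost_diskgroup -o resyncfromreplica snapback SNAP_diskgroup_volume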
Caution Hardware-level disk restore (such as by means of the symmir command with
the -restore option) can result in data loss if the primary disk is shared by
more than one file system or more than one VxVM volume. The hardware-level
restore overwrites the entire primary disk with the contents of the mirror disk.
This can be a problem if you are attempting to restore a snapshot of one of the
file systems or one of the VxVM volumes that share the same disk: the other file
systems or volumes sharing the disk may have older data that you do not want
to write back to the primary. When the hardware-level disk restore takes place,
the older data will replace the newer data on the primary disk.
On Windows
1. To list the identifiers of current snapshots, use the bpfis command with the query
option:
install_path\NetBackup\bin\bpfis query
This returns the IDs (FIS IDs) of all current snapshots. For example:
INF - BACKUP START 3629
INF - FIS IDs: 1036458302
INF - EXIT STATUS 0: the requested operation was successfully completed
2. For each snapshot identifier, enter bpfis query again, specifying the snapshot ID:
install_path\NetBackup\bin\bpfis query -id 1036458302
This returns the path of the original file system (snapshot source) and the GUID
(Global Universal Identifier) representing the snapshot volume. For example:
INF - BACKUP START 2708
INF - REMAP FILE BACKUP H:\ USING
\\?\Volume{54aa666f-0547-11d8-b023-00065bde58d1}\
OPTIONS:ALT_PATH_PREFIX=C:\Program Files\VERITAS\NetBackup\
Temp\_vrts_frzn_img_2408,FITYPE=MIRROR,MNTPOINT=H:\,FSTYPE=NTFS
In this example the snapshot file system is H:\ and the GUID is
\\?\Volume{54aa666f-0547-11d8-b023-00065bde58d1}\.
a. Mount the snapshot volume at a temporary mount point, for example:
mountvol C:\Temp\Mount \\?\Volume{54aa666f-0547-11d8-b023-00065bde58d1}\
b. Copy the file to be restored from the temporary snapshot mountpoint (in this
example, C:\Temp\Mount) to the primary volume.
If the snapshot method was FlashSnap, you can restore the snapshot volume as follows:
2. Import and join the VxVM disk group on the primary (original) client:
vxassist rescan
Note For explanations of NetBackup status codes, refer to the NetBackup Status Codes
and Messages chapter in the NetBackup Troubleshooting Guide.
Gathering Information and Checking Logs
Note To create detailed log information, place a VERBOSE entry in the bp.conf file on
the NetBackup master and client, or set the Global logging level to a high value in
the Logging dialog, under both Master Server Properties and Client Properties.
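For example, the following line in /usr/openv/netbackup/bp.conf on the master
server and client turns on detailed logging:
VERBOSE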
Note These directories can eventually require a lot of disk space. Delete them when you
are finished troubleshooting and remove the VERBOSE option from the bp.conf
file or reset the Global logging level to a lower value.
Note To create detailed log information, set the Global logging level to a high value, in
the Logging dialog, under both Master Server Properties and Client Properties.
Note The log folders can eventually require a lot of disk space. Delete them when you are
finished troubleshooting and set the logging level on master and client to a lower
value.
Important Notes
The disk containing the client's data (the files to back up) must be a SCSI or Fibre
Channel device if you are using NetBackup Media Server or Third-Party Copy
Device.
The disk containing the client's data must be visible to both the client and the media
server if you are using the NetBackup Media Server or Third-Party Copy Device
method. The disk can be connected through SCSI or fibre channel.
For the NetBackup Media Server or Third-Party Copy Device method, a disk device
must be able to return its SCSI serial number in response to a serial-number inquiry
(serialization), or the disk must support SCSI Inquiry Page Code 83.
When configuring the Third-Party Copy Device or NetBackup Media Server method,
a particular storage unit or group of storage units must be specified for the policy; do
not choose Any_available. For configuration instructions, refer to Configuring an
Advanced Client Policy on page 85.
The storage_unit_name portion of a mover.conf.storage_unit_name file name
must exactly match the actual storage unit name (such as nut-4mm-robot-tl4-0)
that you have defined for the policy. See Naming the Mover File on page 73 for help
creating a mover.conf.storage_unit_name file.
Similarly, the policy_name portion of a mover.conf.policy_name file name must
match the actual name of the policy that the third-party copy device is to be associated
with.
For the TimeFinder, ShadowImage, or BusinessCopy snapshot methods, the client
data must reside in a device group, with the data on the primary disk and
synchronized with a mirror disk. Assistance from the disk array vendor may also be
required. Refer to Array-Specific Snapshot Methods on page 150.
If the Keep snapshot after backup option for the snapshot method is changed from
yes to no, the last snapshot created for that policy must be deleted manually before
the backup is run again. Use the bpfis command to delete the snapshot. Refer to the
man page for bpfis, or to Managing Snapshots from the Command Line in this
guide.
During a third-party copy device backup, if tape performance is slow, increase the
buffer size by creating one of the following files on the media server:
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC.policy_name
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC.storage_unit_name
/usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC
By default, the size of the data buffer for third-party copy backup is 65536 bytes (64K).
To increase it, put a larger integer in the SIZE_DATA_BUFFERS_TPC file. For a buffer
size of 96K, put 98304 in the file. If the value is not an exact multiple of 1024, the value
read from the file will be rounded up to a multiple of 1024.
The file name with no extension (SIZE_DATA_BUFFERS_TPC) applies as a default to
all third-party copy backups, if neither of the other file-name types exists. A
SIZE_DATA_BUFFERS_TPC file with the .policy_name extension applies to backups
executed by the named policy, and the .storage_unit_name extension applies to
backups using the named storage unit. If more than one of these files applies to a
given backup, the buffer value is selected in this order:
SIZE_DATA_BUFFERS_TPC.policy_name
SIZE_DATA_BUFFERS_TPC.storage_unit_name
SIZE_DATA_BUFFERS_TPC
As soon as one of the above files is located, its value is used. A .policy_name file that
matches the name of the executed policy will override the value in both the
.storage_unit_name file and the file with no extension. The .storage_unit_name file will
override the value in the file with no extension.
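For example, to use a 96K buffer for all backups run by a policy named sales_pol (a
hypothetical name), you could create the policy-specific file as follows:
echo 98304 > /usr/openv/netbackup/db/config/SIZE_DATA_BUFFERS_TPC.sales_pol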
You can set the maximum buffer size that a particular third-party copy device can
support.
Note A third-party copy device will not be used if it cannot handle the buffer size set for
the backup.
Particular Issues
Installation Problem
If you receive the following message during installation:
/usr/openv/netbackup/bin/version not found.
you have tried to install the Advanced Client software before installing the base
NetBackup software.
Specify a larger cache partition, or designate additional cache partitions in the Backup
Selections list. See the FlashBackup Configuration chapter in this guide for cache
partition requirements.
On Solaris: If your cache partition runs out of space, there may be stale snapshots
taking up space on the cache partition. Stale snapshots are those that were not
automatically deleted by FlashBackup.
a. Determine if there are stale snapshots on your Solaris client by executing the
following:
/usr/openv/netbackup/bin/driver/snaplist
b. For each snapshot listed, execute the following to make sure there is a bpbkar
process associated with it:
ps -eaf | grep ident
where ident is the snapshot process id displayed by the snaplist command.
c. Remove snapshots that do not have an associated bpbkar process by entering the
following:
/usr/openv/netbackup/bin/driver/snapoff snapn
where snapn is the snapshot id displayed by the snaplist command.
Removing a Snapshot
NetBackup ordinarily removes snapshots after the Advanced Client backup completes
(unless the Keep snapshot after backup parameter was set to Yes). However, as a result
of certain kinds of system failures, such as a system crash or abnormal termination of the
backup, the snapshot may not be removed.
1. Use the bpfis command with the query option to list the current snapshots:
Do this on the client or alternate client, depending on the type of backup:
/usr/openv/netbackup/bin/bpfis query
This returns the IDs (FIS IDs) of all current snapshots. For example:
INF - BACKUP START 3629
INF - FIS IDs: 1036458302
INF - EXIT STATUS 0: the requested operation was successfully completed
2. Remove each unneeded snapshot by entering the following:
/usr/openv/netbackup/bin/bpfis delete -id snapshot_id
If bpfis removed the snapshot, you can skip the rest of this procedure.
3. Solaris or HP only: if bpfis could not remove the snapshot, enter the following (on the
client or alternate client) when no backups are running:
df -k
This displays all mounted file systems, including any snapshots of a mounted file
system.
4. Solaris or HP only: unmount the unneeded snapshot file systems (on the client or
alternate client, depending on the type of backup).
\Device\HarddiskDmVolumes\diskgroup\snap_volume
a. Enter the following VxFS command to display the name of the checkpoint:
/usr/lib/fs/vxfs/fsckptadm list /file_system
where file_system is the mount point of the primary or original file system that
was backed up, NOT the snapshot file system that was unmounted in step 4.
For example, if the snapshot file system that was unmounted is the following:
/tmp/_vrts_frzn_img__vm2_1765
The original file system, which should be specified on the fsckptadm list
command, is this:
/vm2
Example entry:
/usr/lib/fs/vxfs/fsckptadm list /vm2
Output:
/vm2
NBU+2004.04.02.10h53m22s:
flags = removable
a. To discover and remove any VxVM clones, follow the steps under Removing a
VxVM Volume Clone on page 211.
Using the mounted file system found at step 3, unmount the snapshot as follows:
umount -F vxfs /tmp/_vrts_frzn_img__filesystemname_pid
If vxdg list does not show the disk group, the group might have been deported.
You can discover all the disk groups, including deported ones, by entering:
vxdisk -o alldgs list
The disk groups listed in parentheses are not imported on the local system.
vxrecover -g SPLIT-primaryhost_diskgroup -m
Example
In this example, chime is the primary client and rico is the alternate client. lhddg is
the name of the original disk group on chime, and chime_lhddg is the split group
that was imported on rico and must be rejoined to the original group on the primary
chime.
On alternate client rico, enter:
vxdg deport chime_lhddg
vxrecover -g chime_lhddg -m
Example log entry (stderr):
19:13:07.687 [14981] <2> onlfi_vfms_logf: INF - clone group and volume already exists
In this case, you must use the bpdgclone command with the -c option to remove the
clone, and then resynchronize the mirror disk with the primary disk.
1. When no backups are running, use the following VxVM command to list any clones:
vxdg list
In this example, the name suffix indicates wil_test_clone was created for a
snapshot backup that was configured with an array-specific snapshot method. If a
backup failed with log entries similar to those included above, the clone must be
manually deleted.
2. To remove the clone, enter the following:
/usr/openv/netbackup/bin/bpdgclone -g wil_test -n vol01 -c wil_test_clone
where wil_test is the name of the disk group, vol01 is the name of the VxVM
volume, and wil_test_clone is the name of the clone. Use the Volume Manager
vxprint command to display volume names and other volume information.
For more information, refer to the bpdgclone man page. For assistance with
vxprint and other Volume Manager commands, refer to the VERITAS Volume
Manager Administrator's Guide.
3. To verify that the clone has been removed, re-enter vxdg list.
Sample output:
NAME STATE ID
rootdg enabled 983299491.1025.turnip
VolMgr enabled 995995264.8366.turnip
wil_test enabled 983815798.1417.turnip
The clone no longer appears in the list.
This appendix presents basic procedures for creating and managing snapshots using the
NetBackup command-line interface. As compared to the NetBackup Administration
Console, use of NetBackup commands (such as in a backup script) provides additional
flexibility that may be desirable in certain circumstances.
Note The command for creating a snapshot is bpfis, which is described in man page
format in the NetBackup Commands manual.
Reasons for Using the Command Line
1. A snapshot source is the entity containing the data to be captured, where entity means raw
disk partition, volume, or file system.
Examples
The following examples show how to use the command line interface to create and back
up a snapshot.
Note The man pages for bpfis and other NetBackup commands are in the NetBackup
Commands manual. For a list of the snapshot options available with bpfis, refer to the
<opt_params> area of each snapshot method (FIM) listed in the
/usr/openv/netbackup/vfm.conf file.
Note All commands in this example are executed on the (primary) client.
You must have root privileges to execute the bpfis command. bpfis is located in
/usr/openv/netbackup/bin.
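The snapshot is created with the bpfis create command. A minimal sketch, assuming
the nbu_snap method and a hypothetical cache partition of /dev/rdsk/c1t3d0s3:
/usr/openv/netbackup/bin/bpfis create -id my_id -fim nbu_snap:cache=/dev/rdsk/c1t3d0s3 /mnt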
where my_id is a user-defined identifier for the snapshot, cache= designates a raw
partition for the copy-on-write mechanism used by the nbu_snap method, and /mnt
is the source for the snapshot (the file system to back up).
In bpfis output, the original mount point of the snapshot source (/mnt) and the
active mount point of the snapshot are embedded in the directive labeled REMAP
FILE, where USING /path_name indicates the mount point of the snapshot. The
OPTIONS directives are internal to NetBackup and should not be changed. These
directives are also written to file /tmp/filelist.my_id. (This file is written on the
alternate backup client if alternate client backup is employed.)
> done
The above command takes the directives from the /tmp/filelist.my_id file
(created by bpfis) and adds them to the policy.
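One way to write such a loop (a sketch; mypol is the policy configured in the steps
below):
cat /tmp/filelist.my_id | while read directive
do
bpplinclude mypol -add "$directive"
done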
3. Make sure the policy is enabled for snapshots and that the backup copy type is local
(not offhost).
bpplinfo mypol -modify -bc 0 -fi 1
where -bc specifies that the backup copy method is local and -fi specifies enable
snapshots for this policy.
4. Make sure the NetBackup catalog is updated by listing policy attributes and the
Backup Selections list.
bpplinfo mypol -L
bpplinclude mypol -l
The output should include snapshot: yes, and Backup Copy: 0, where 0 indicates
local.
5. Start the backup:
/usr/openv/netbackup/bin/bpbackup -i -w 0 -p mypol
where -i means make the backup immediately, -w 0 means wait indefinitely for
completion status from the server before returning to the system prompt, and -p
specifies the policy.
Note You must have root privileges to execute the bpfis command.
bpfis is located in /usr/openv/netbackup/bin.
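Here the snapshot is created for the primary client's data but managed from the
alternate client; a minimal sketch, assuming the FlashSnap method:
/usr/openv/netbackup/bin/bpfis create -id my_id -rhost alt -fim FlashSnap /mnt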
where my_id is a user-defined identifier for the snapshot, -rhost designates the
alternate client named alt, and /mnt is the source for the snapshot on the primary
client (the file system to back up).
For a brief description of bpfis output, refer to step 1 on page 215.
> done
3. Make sure the policy is enabled for snapshots and that the backup copy type is local
(not offhost).
bpplinfo mypol -modify -bc 0 -fi 1
where -bc specifies that the backup copy method is local and -fi specifies enable
snapshots for this policy.
4. Make sure the NetBackup catalog is updated by listing policy attributes and the
Backup Selections list.
bpplinfo mypol -L
bpplinclude mypol -l
The output should include snapshot: yes, and Backup Copy: 0, where 0 indicates
local.
5. Start the backup:
/usr/openv/netbackup/bin/bpbackup -i -w 0 -p mypol
where -i means make the backup immediately, -w 0 means wait indefinitely for
completion status from the server before returning to the system prompt, and -p
specifies the policy.
Processing Before and After the Snapshot
The means of coordination is called quiesce (literally, to make quiet or place in repose). This
involves pausing the database application or process until the data is transactionally
consistent. Applications and the storage management stack must all be quiesced before a
useful snapshot can be made.
[Figure: quiesce handshake: (1) quiesce request, (2) finish transactions, (3) quiesce
acknowledge.]
File System
Two of the principal tasks of quiescing the file system are the following:
Prohibit new I/O requests from initiating, which is called locking the file system.
Flush file system cache (write cached data back to disk). The system must complete
any outstanding application I/O and note completion of outstanding metadata
updates.
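On Solaris, for example, the UFS lockfs command performs both operations; it is
shown here only to illustrate the concept, not as a NetBackup step:
lockfs -w /mnt
(write-locks the file system and flushes dirty data to disk)
lockfs -u /mnt
(unlocks the file system afterward)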
Volume Manager
As in a file system, the volume manager's data caching may have to be flushed and
disabled until the snapshot is created. As long as volume manager caching is enabled,
data required for a consistent image may be lingering in volume manager cache rather
than being available on disk when the snapshot is created.
Local Client
A FlashBackup image can be backed up from local disks to a storage device on the media
server, using the local client. If Perform snapshot backups is not selected on the policy
Attributes display, nbu_snap (on Solaris) or VxFS_Snapshot (on HP) creates the snapshot.
If Perform snapshot backups is selected, a different snapshot method can be configured.
In either case, a fibre channel network or SAN is not required for a local backup.
The following diagram shows a network configuration sufficient for a local client backup.
The network configuration is identical to that for normal NetBackup.
[Figure: local client backup configuration. The master server, client, and media server
are on the LAN / WAN; the client's disks and the media server's storage are
SCSI-attached.]
1. Client backup is initiated by master server, which tells the NetBackup client to create the
snapshot data on disk.
2. Client creates file system map of the snapshot.
3. Client sends the file system map to the media server.
4. Client sends the backup data to the media server.
5. Media server writes data to local storage.
Note If you have a multi-ported SCSI disk array, a fibre channel SAN is not required. See
Offhost Backup Without a SAN (UNIX Only) on page 16.
[Figure: NetBackup Media Server configuration. The master server, client, and media
server are on the LAN / WAN; the disks of client data and a robot are on the Fibre
Channel/SAN.]
1. Backup is initiated by master server, which tells the client to create the snapshot data on the disk.
2. Client creates file system map of the snapshot and sends the file system map over the LAN to the
media server.
3. Media server reads the file system map over the LAN and writes it to tape storage.
4. Client creates the disk block map over the snapshot and sends the disk block map over the LAN
to the media server.
5. Media server reads the data over the SAN from the disk block map provided by the client.
6. Media server writes data across the SAN to storage.
[Figure: Third-Party Copy Device configuration. The master server, client, and media
server are on the LAN / WAN; the disks of client data, the third-party copy device,
and a robot are on the Fibre Channel/SAN; the third-party copy device can also reach
SCSI-attached disks and storage.]
1. Client backup is initiated by master server, which tells the NetBackup client to create the snapshot data
on the disk.
2. Client creates file system map of snapshot and sends file system map over the LAN to the media server.
3. Media server reads the file system map over the LAN and writes it to tape storage.
4. Client creates the disk block map over the snapshot and sends the disk block map over the LAN to the
media server.
5. Media server creates third-party copy commands based on disk block map and sends the commands to
the third-party copy device over the SAN.
6. The third-party copy commands instruct the third-party copy device to read the client data from either
SAN-attached or SCSI-attached disk.
7. The third-party copy commands instruct third-party copy device to write the client data to SAN-attached
or SCSI-attached storage.
[Figure: alternate client backup process flow, showing the Administration Console
and backup request reaching bprd, bpsched, and bpdbm on the master server; bpcd,
bpbrm, and bptm on the media server with a tape or optical device; and bpbkar and
bpfis on the clients.]
1. The NetBackup master server or primary client initiates the backup, causing the
NetBackup request daemon bprd to start the scheduler, bpsched. bpsched
processes the policy configurations depending on the initiator of the backup
(scheduled, immediate manual, or user directed). Refer to Appendix A of the
NetBackup Troubleshooting Guide for more information on this stage of the backup
operation.
2. bpsched uses bpcd (client daemon) to start the backup/restore manager (bpbrm) on
the media server.
3. bpbrm starts bpfis on the primary client and creates a snapshot on the alternate
client.
5. bpbrm starts bptm on the media server to send the backup to the storage device.
6. A standard backup occurs on the alternate client. bpbrm optionally removes the
snapshot.
[Figure: backup process flow for a copy-on-write snapshot on a Solaris client. The
backup request reaches bprd, bpsched, and bpdbm on the master server; bpcd, bpbrm,
ltid, and the parent and child bptm processes (connected by shared memory) run on
the media server; bpbkar, the snapshot services, and the file/volume mapping services
run on the client.]
1. The NetBackup master server or client initiates the backup, causing the NetBackup
request daemon bprd to start the scheduler, bpsched. bpsched processes the policy
configurations depending on the initiator of the backup (scheduled, immediate
manual, or user directed). Refer to Appendix A of the NetBackup Troubleshooting Guide for
more information on this stage of the backup operation.
2. bpsched uses bpcd (client daemon) to start the backup/restore manager (bpbrm) on
the media server.
3. bpbrm starts the Media Manager process bptm (parent) and also starts the actual
backup by using bpcd on the client to start the client's backup and archive program
bpbkar.
4. bpbkar sends information about files within the image to the backup/restore
manager bpbrm, which directs the file information to bpdbm for the NetBackup file
database on the master server.
5. bpbkar requests creation of a snapshot of the client's active data. bpbkar uses the
snapshot method that was configured for the snapshot source.
6. bpbkar requests file/volume mapping information about the client data. bpbkar
uses one or more mapping services to decompose the client's data into physical disk
addresses (also referred to as disk extents). The file/volume mapping information (list
of extents) comes from one of two places: the client's active (primary) data, and from
the snapshot of the client data (cached).
7. On the media server, bptm creates a child process, which reads the mapping
information (extent list) from bpbkar.
8. Based on the extent list received from bpbkar, bptm reads the client data (backup
image) from two places: from the client's active data (for those blocks that have not
changed since the backup was initiated), and from the snapshot cache (to obtain the
original contents of the blocks that have changed since the backup was initiated).
9. The bptm child stores the client data block-by-block in shared memory.
10. The parent bptm process then takes the backup image from shared memory and
sends it to the storage device. For information on how the tape request is issued, refer
to Appendix A of the NetBackup Troubleshooting Guide.
[Figure: backup process flow for a mirror snapshot method with the NetBackup
Media Server. The processes are the same as in the previous figure; here the disk
extents read by bptm come from the snapshot on the mirror disk.]
1. The NetBackup master server or client initiates the backup, causing the NetBackup
request daemon bprd to start the scheduler, bpsched. bpsched processes the policy
configurations depending on the initiator of the backup (scheduled, immediate
manual, or user directed). Refer to Appendix A of the NetBackup Troubleshooting Guide for
more information on this stage of the backup operation.
2. bpsched uses bpcd (client daemon) to start the backup/restore manager (bpbrm) on
the media server.
3. bpbrm starts the Media Manager process bptm (parent) and also starts the actual
backup by using bpcd on the client to start the client's backup and archive program
bpbkar.
4. bpbkar sends information about files within the image to the backup/restore
manager bpbrm, which directs the file information to bpdbm for the NetBackup file
database on the master server.
5. bpbkar requests creation of a snapshot of the client's active data. bpbkar uses the
snapshot method that was configured for the snapshot source.
6. bpbkar requests file/volume mapping information about the client data. bpbkar
uses one or more mapping services to decompose the client's data into physical disk
addresses (also referred to as disk extents). The file/volume mapping information (list
of disk extents) comes from the snapshot of the client data.
7. On the media server, bptm creates a child process, which reads the mapping
information (disk extent list) from bpbkar.
8. Based on the extent list received from bpbkar, bptm reads the client data (backup
image) from the snapshot on the mirror (secondary) disk.
9. The bptm child stores the client data block-by-block in shared memory.
10. The parent bptm process then takes the backup image from shared memory and
sends it to the storage device. For information on how the tape request is issued, refer
to Appendix A of the NetBackup Troubleshooting Guide.
[Figure: backup process flow for a copy-on-write snapshot with the Third-Party Copy
Device. bptm sends third-party copy commands with the extent list over the SAN; the
third-party copy device reads from the client's active data and the snapshot cache,
and writes the backup image to the tape or optical device.]
1. The NetBackup server or client initiates the backup, causing the NetBackup request
daemon bprd to start the scheduler, bpsched. bpsched processes the policy
configurations depending on the initiator of the backup (scheduled, immediate
manual, or user directed). Refer to Appendix A of the NetBackup Troubleshooting Guide for
more information on this stage of the backup operation.
2. bpsched uses bpcd (client daemon) to start the backup/restore manager (bpbrm) on
the media server.
3. bpbrm starts the Media Manager process bptm and also starts the actual backup by
using bpcd on the client to start the client's backup and archive program bpbkar.
4. bpbkar sends information about files within the image to the backup/restore
manager bpbrm, which directs the file information to the NetBackup file database on
the master server.
6. bpbkar requests file/volume mapping information about the client data. bpbkar
uses one or more mapping services to decompose the client's data into physical disk
addresses (also referred to as disk extents). This file/volume mapping information (list
of extents) comes from one of two sources: the client's active (primary) data, or from
the snapshot of the client data (cached).
8. bptm sends the third-party copy command with the extent list to the third-party copy
device. For information on how the tape request is issued, refer to Appendix A of the
NetBackup Troubleshooting Guide.
9. The third-party copy device reads the backup image (client data) from two places:
from the client's active data (for those blocks that have not changed since the backup
was initiated), and from the snapshot cache (for the original contents of the blocks that
have changed since the backup was initiated).
10. The third-party copy device sends the backup image to the storage device.
[Figure: backup process flow for a mirror snapshot with the Third-Party Copy Device.
The third-party copy device reads the backup image from the snapshot on the mirror
(secondary) disk and writes it to the tape or optical device.]
1. The NetBackup server or client initiates the backup, causing the NetBackup request
daemon bprd to start the scheduler, bpsched. bpsched processes the policy
configurations depending on the initiator of the backup (scheduled, immediate
manual, or user directed). Refer to Appendix A of the NetBackup Troubleshooting Guide for
more information on this stage of the backup operation.
2. bpsched uses bpcd (client daemon) to start the backup/restore manager (bpbrm) on
the media server.
3. bpbrm starts the Media Manager process bptm and also starts the actual backup by
using bpcd on the client to start the client's backup and archive program bpbkar.
4. bpbkar sends information about files within the image to the backup/restore
manager bpbrm, which directs the file information to the NetBackup file database on
the master server.
5. bpbkar requests creation of a snapshot of the client's active data. bpbkar uses the
snapshot method that was configured for the snapshot source.
6. bpbkar requests file/volume mapping information about the client data. bpbkar
uses one or more mapping services to decompose the client's data into physical disk
addresses (also referred to as disk extents). This file/volume mapping information (list
of extents) comes from the snapshot of the client data on the mirror (secondary) disk.
7. On the media server, bptm creates a child process, which reads the mapping
information (disk extent list) from bpbkar.
8. bptm sends the third-party copy command with the extent list to the third-party copy
device. For information on how the tape request is issued, refer to Appendix A of the
NetBackup Troubleshooting Guide.
9. The third-party copy device reads the backup image (client data) from the snapshot
on the mirror (secondary) disk.
10. The third-party copy device sends the backup image to the storage device.
[Process diagram: Instant Recovery backup. A backup request from the Administration
Console or user interface reaches bprd, which starts bpsched (1); bpsched starts the
backup/restore manager bpbrm through bpcd (2); bpbrm starts processes on the client
through bpcd (3); file information flows to bpbrm and bpdbm (4, 6); snapshot services
make the snapshot of client data, which bpfis captures (5).]
1. The NetBackup master server or client initiates the backup, causing the NetBackup
request daemon bprd to start the scheduler, bpsched. bpsched processes the policy
configurations depending on the initiator of the backup (scheduled, immediate
manual, or user directed). Refer to Appendix A of the NetBackup Troubleshooting Guide for
more information on this stage of the backup operation.
2. bpsched uses bpcd (client daemon) to start the backup/restore manager (bpbrm) on
the master server.
3. bpbrm starts the backup by using bpcd on the client to start the client's backup and
archive program bpbkar.
4. bpfis requests creation of a snapshot of the client's active data. bpfis uses the
snapshot method that was configured for the snapshot source.
5. bpfis captures the snapshot, mounts it, sends catalog information, then unmounts
the snapshot.
6. bpfis sends information about files within the snapshot to the backup/restore
manager bpbrm, which directs the file information to bpdbm for the NetBackup file
database on the master server.
[Process diagram: Instant Recovery backup to disk only. bpsched calls bpbrm (1), which
invokes bpfis (2); bpfis creates the snapshot of client data and the catalog entries
through bpdbm.]
1. bpsched receives and validates the policy attribute Retain snapshots for instant
recovery and schedule attribute Instant recovery backups to disk only to
determine what type of backup should be started.
2. In either case, bpsched calls bpbrm which in turn invokes bpfis to create the necessary
snapshots.
3. For Instant Recovery backups to disk only, bpfis creates the catalog entries.
[Process diagram: bpsched starts bpbrm, which invokes bpfis to create the snapshot of
client data; bptm and bpbkar then back up the data from the snapshot.]
2. bpfis creates the snapshot and returns the modified backup selections list to bpbrm
and bpsched.
[Process diagram: restore from a persistent snapshot. A restore request from the
Backup, Archive, and Restore interface (1) reaches bprd (2); bpcd starts bpbrm (3),
which starts tar and bppfi on the client (4); bppfi collects the file list from bprd (5),
starts bpbkar (6), and the data is restored from the persistent snapshot of client data
to the client's file system (7, 8).]
2. A handler for the restore request is started by the request service (bprd) on the master
server. The restore request handler uses catalog information to produce a detailed file
selection list, identifying which files are to be restored from which image. For each
image, a restore request is sent to the server.
3. The client request service (bpcd) on the server starts a backup and restore manager
(bpbrm) to process the restore.
4. bpbrm starts two agents on the client: tar and bppfi (the Instant Recovery agent).
5. bppfi connects back to the restore request handler (bprd) on the master server to
collect the list of files to be restored from this image. It also connects to tar to pass the
connection to the backup agent (bpbkar).
6. bppfi then starts the backup agent bpbkar and sends it the Backup Selections list.
7. tar consumes the data stream generated by bpbkar and restores the contents to the
client's original file system.
8. When tar completes the restore, it notifies the backup and restore manager (bpbrm) of
the completion. bpbrm notifies the restore request handler, and all processes
associated with the restore end.
[Process diagram: browsing for restore. The Backup, Archive, and Restore interface
invokes bplist (1), which sends the request to bprd (2); bprd passes it to bpdbm (3),
which searches the NetBackup catalog (4); for Instant Recovery images, bpdbm sends the
snapshot fragments to bpcd (5), which mounts the snapshot of client data (6).]
1. The user interface invokes bplist to display files during a browse for restore.
2. bplist sends the request to bprd with browsing criteria, such as time, backup ID, and
file path.
3. bprd passes the request to the NetBackup database manager, bpdbm.
4. bpdbm searches the catalog. If it is a backup for Instant Recovery with a partial .f file
entry created by a non-database Instant Recovery backup, bpdbm starts bpcd and sends it
the fragments that contain the snapshot names, mount point, and mount device.
5. bpcd decodes the fragments and mounts the snapshot(s) specified in the fragments.
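For reference, the browse can also be driven from the command line with bplist; a
minimal sketch, assuming a client named mars and a backed-up directory /home (client
name and path are placeholders):

/usr/openv/netbackup/bin/bplist -C mars -R -l /home

bplist sends the request to bprd as in step 2 above.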
[Process diagram: file promotion restore. A restore request (1) reaches bprd, which
starts bpbrm and sends it the Backup Selections list (2); bpbrm starts bppfi on the
client (3); bppfi mounts the snapshot and starts bpbkar (4, 5); bpbkar promotes files
from the snapshot and sends the backup image to tar (7).]
2. bprd starts bpbrm and sends Backup Selections list and associated snapshot
information.
3. bpbrm starts bppfi on the client and passes the Backup Selections and snapshot
information to it.
4. bppfi mounts the snapshot, starts bpbkar, and sends Backup Selections to bpbkar.
5. bpbkar goes through each of the files, promoting each file (if possible) from the
snapshot to the client's primary file system.
7. bpbkar sends a backup image of the file to tar, and tar performs a regular restore.
[Process diagram: snapshot rollback restore. A restore request (1) reaches bprd, which
starts bpbrm (2); bpbrm starts bppfi on the client (3), and the snapshot is rolled back
to the primary file system (4, 5).]
2. bprd starts bpbrm and sends the backup ID and associated snapshot information.
3. bpbrm starts bppfi on the client and passes the backup ID and snapshot information to
bppfi.
5. bpfis unmounts the primary file system, rolls back the snapshot, and remounts the file
system.
[Process diagram: FlashBackup backup. A backup request from the Administration Console
reaches bprd on the master server; bpsched obtains the policy information and sends
backup information to bpdbm; bpcd starts bpbrm on the media server, which starts the
parent bptm process and, through bpcd on the client, the backup and archive program
bpbkar; bpbkar sends the backup image to the child bptm process, which writes it to the
storage device.]
[Process diagram: FlashBackup restore. A restore request from the Backup, Archive, and
Restore interface reaches bprd, which invokes the single file restore (sfr) program;
sfr obtains backup information from bpdbm and file system map information through the
parent bptm on the media server; the child bptm reads the data blocks from the tape or
optical device and sends them to tar on the client, which writes the files to the
client disk.]
During a restore, the user browses for and selects files from FlashBackup images in the
same manner as standard backup images. The difference is that bprd calls the single file
restore (sfr) program when processing FlashBackup images. During a restore, the sfr
program performs the following tasks:
- Retrieves information about the backup from bpdbm. The backup information is
  composed of:
  - File system map name and its media location.
  - Bit map name and its media location.
  - Raw-partition name and its media location.
- Using the backup information, sfr retrieves the file system map data by directing
  bptm to the location of the file system map on the tape and then reading from the
  tape. A similar procedure is followed to position bptm to obtain the bit map data
  when restoring from incremental backup images, and to obtain the raw-partition data
  when restoring the raw partition.
- Then, using the information contained in the file system map, sfr directs bptm to the
  location of the individual file data blocks on the tape.
- bptm then reads the file data blocks from the tape and writes them to the client's tar
  program. Finally, tar writes the file data to the client disk.
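For reference, individual files can be restored from a FlashBackup image with the
standard restore interfaces or with bprestore; a minimal sketch, assuming a client named
mars and a file /home/user1/file1 present in the image (both are placeholders):

/usr/openv/netbackup/bin/bprestore -C mars /home/user1/file1

bprd then invokes sfr to locate and restore the file's data blocks as described above.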
Copy-on-write process
[Diagram: copy-on-write process on source data blocks s0 through s10. (1) The image of
the source data is frozen and copy-on-write is activated. (2) New write requests to s4,
s7, and s8 are held by the copy-on-write process. (3) The original contents of those
blocks are copied to cache, and (4) a record is kept of where the cached blocks are
stored; the delayed writes then proceed.]
The immediate results of the copy-on-write are the following: a cached copy of those
portions of the source that were about to change at a certain moment (see step 3 above),
and a record of where those cached portions (blocks) are stored (4).
The copy-on-write does not produce a copy of the source; it creates cached copies of the
blocks that have changed and a record of their location. The backup process refers to the
source data or cached data as directed by the copy-on-write process (see next diagram).
[Diagram: backup of a copy-on-write snapshot. (1) The backup reads the source data
until it reaches a changed block. (2) At s4, copy-on-write tells the backup to read
cached block c0 instead of s4. (3) The backup continues reading source data. (4) At s7
and s8, copy-on-write tells the backup to read c1 and c2 instead of s7 and s8. (5) The
backup continues reading source or cache, as directed by copy-on-write. (6) When the
backup completes, the backup image (s0 s1 s2 s3 c0 s5 s6 c1 c2 s9 s10) is identical to
the original source.]
As shown in the above diagram, an accurate backup image is obtained by combining the
unchanged portions of the data with the cache. When a backup of the snapshot begins, the
backup application copies the source data (1) until it comes to a block that changed
after the copy-on-write process was activated. The copy-on-write tells the backup to
skip that changed block and read in its place the cached (original) copy (2). The backup
application continues copying source data (3) until it comes to another changed block.
Cache is read again (4) as the copy-on-write process dictates. The backup, when
finished, is an exact copy of the source as it existed the moment the copy-on-write was
activated.
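To make the read-redirection concrete, the following is a minimal, self-contained bash
sketch of the copy-on-write bookkeeping described above. It is purely illustrative (it
is not a NetBackup tool, and the block names are placeholders):

#!/bin/bash
# Simulated source "disk" of eleven blocks, as in the diagram.
src=(s0 s1 s2 s3 s4 s5 s6 s7 s8 s9 s10)
cache=()   # holds the original contents of blocks changed after the freeze

write_block() {            # write_block <index> <new_contents>
    # Copy-on-write: cache the original contents before overwriting.
    [ -z "${cache[$1]}" ] && cache[$1]=${src[$1]}
    src[$1]=$2
}

backup_read() {            # read a block <index> as of snapshot time
    # Read from cache if the block changed, otherwise from the source.
    echo "${cache[$1]:-${src[$1]}}"
}

# New writes to s4, s7, and s8 arrive while the backup is running.
write_block 4 s4-new; write_block 7 s7-new; write_block 8 s8-new

# The backup image matches the source as it was when the snapshot froze.
for i in 0 1 2 3 4 5 6 7 8 9 10; do
    printf '%s ' "$(backup_read "$i")"
done; echo    # prints: s0 s1 s2 s3 s4 s5 s6 s7 s8 s9 s10

Here the cached entries play the role of c0, c1, and c2 in the diagram: they preserve
the original contents of s4, s7, and s8, so the backup is an exact copy of the source at
the moment the copy-on-write was activated.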
This appendix describes commands for managing backups that use the nbu_snap method.
Terminating nbu_snap
NetBackup ordinarily removes snapshots after the Advanced Client backup completes.
However, as a result of certain kinds of system failures, such as a system crash or
abnormal termination of the backup, the snapshot may not have been removed.
Use the snapoff command to terminate an nbu_snap snapshot that was not terminated
by the backup job (see snapoff on page 258). For more information on terminating
snapshots, see Removing a Snapshot on page 206.
nbu_snap Commands
The following commands relate to the nbu_snap snapshot method.
snapon
snapon starts an nbu_snap snapshot (copy-on-write).
Execute this command as root:
/usr/openv/netbackup/bin/driver/snapon snapshot_source cache
where snapshot_source is the partition on which the client's file system (the file system
to be snapped) is mounted, and cache is the raw partition to be used as the
copy-on-write cache.
Example 1:
/usr/openv/netbackup/bin/driver/snapon /var /dev/rdsk/c2t0d0s3
Example 2:
/usr/openv/netbackup/bin/driver/snapon /dev/vx/rdsk/omo/tcp1
/dev/vx/rdsk/omo/sncache
Note The snapshot is created on disk, and remains active until it is removed with the
snapoff command or the system is rebooted.
snaplist
snaplist shows the amount of client write activity that occurred during an nbu_snap
snapshot. Information is displayed for all snapshots that are currently active.
Execute this command as root:
/usr/openv/netbackup/bin/driver/snaplist
Note To see the total space used in a particular cache partition, use the snapcachelist
command.
minblk
In the partition on which the file system is mounted, minblk shows the
lowest-numbered block that is currently being monitored for write activity while the
snapshot is active. minblk is used by FlashBackup policies only.
err
An error code; 0 indicates no error.
If a snapshot has encountered an error, the err value will be non-zero and the snapshot
will be inaccessible. It can be terminated using snapoff and the snapshot ID. Error codes
are identified in /usr/include/sys/errno.h. Also, error messages may be found
in /var/adm/messages.
time
The time at which the snapshot was started.
device
The raw partition containing the client's file system data to back up (snapshot source).
cache
The raw partition used as cache by the copy-on-write snapshot process.
Note Make sure this partition is large enough to store all the blocks likely to be changed
by user activity during the backup. To determine the total space used in a particular
cache partition by all active snapshots, use the snapcachelist command.
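For illustration only, a snaplist report might look something like the following; the
exact columns and layout vary, and the values shown here are hypothetical:

# /usr/openv/netbackup/bin/driver/snaplist
# id  ...  minblk  err  time                  device               cache
#  1  ...       0    0  Sat Oct 16 09:56:11   /dev/rdsk/c1t3d0s3   /dev/rdsk/c2t0d0s3

The id column gives the numeric snapshot ID used by the snapoff command.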
snapcachelist
snapcachelist displays information about all partitions currently in use as nbu_snap
caches. This command shows the extent to which the caches are full.
Execute this command as root:
/usr/openv/netbackup/bin/driver/snapcachelist
Description:
device
The raw partition being used as cache.
free
The number of 512-byte blocks unused in the cache partition.
busy
The number of 512-byte blocks in the client data that changed while the snapshot was
active. Prior to being changed, these blocks were copied to the cache partition by the
nbu_snap copy-on-write process. For each cache device listed, busy shows the total
space that was used in the cache. You can use this value as a sizing guide when setting
up raw partitions for nbu_snap cache.
When a cache is full, any additional change to the client data will cause the
copy-on-write to fail and the snapshot will no longer be readable or writable. Reads or
writes to the client data will continue (that is, user activity will be unaffected). The
failed snapshot, however, will not be terminated automatically and must be
terminated using snapoff.
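As a sizing illustration (with hypothetical numbers): if snapcachelist reports a busy
value of 204800 for a cache device, the active snapshots have consumed
204800 x 512 bytes = 100 MB of that partition, and the free column shows how many
512-byte blocks remain:

# /usr/openv/netbackup/bin/driver/snapcachelist
# device               free      busy
# /dev/rdsk/c2t0d0s3   1843200   204800

A busy value that approaches the size of the partition is a sign that a larger cache
partition should be configured before the next backup.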
snapstat
snapstat displays diagnostic information about the snap driver.
Execute this command as root:
/usr/openv/netbackup/bin/driver/snapstat
snapoff
snapoff terminates an nbu_snap snapshot.
Execute this command as root:
/usr/openv/netbackup/bin/driver/snapoff snap1 ... snapn
where snap1 ... snapn are the numeric IDs of the copy-on-write processes to be
terminated. (Use snaplist to display the IDs of active snapshots.)
If snapoff is successful, a message of the following form will be displayed:
snapshot 1 disabled
snapshot 2 disabled
...
snapshot n disabled
Caution Do not terminate a snapshot while the backup is active, because corruption of
the backup image may result.
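For example, a leftover snapshot can be cleaned up as follows; a minimal sketch,
assuming snaplist shows a stale snapshot with ID 1 and that no backup is currently using
it (the ID is a placeholder):

# First confirm that the snapshot is no longer in use, then terminate it.
/usr/openv/netbackup/bin/driver/snaplist
/usr/openv/netbackup/bin/driver/snapoff 1

On success, snapoff reports "snapshot 1 disabled".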