Enterprise Backup Solution Design Guide
1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Supported components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Supported topologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Serial-attach SCSI (SAS) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Direct-attach SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Direct-attach SAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Point-to-point . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Switched fabric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Platform and operating system support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
Use of native backup programs and commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2 Hardware setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
Components. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
HP StorageWorks Secure Key Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
The Secure Key Manager features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Configuration preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
Getting help . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
HP StorageWorks ESL E-Series tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
SCSI or FC connections . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
Setting the bar code length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
Creating multi-unit ESL E-Series tape libraries using the Cross Link Kit . . . . . . . . . . . . . . . . . . . . . . . . . 23
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
HP StorageWorks EML E-series tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
Cabling and configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
Setting the bar code front panel display and host reporting configuration . . . . . . . . . . . . . . . . . . . . . . 31
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
HP StorageWorks MSL5000 and MSL6000 Series tape libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Connecting the MSL5000/6000 tape library . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
MSL5000/6000 Series library with Fibre Channel routers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
MSL5000/6000 Series library with direct-attached SCSI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
Creating a multi-stack unit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
Setting the bar code length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
HP StorageWorks MSL2024, MSL4048, and MSL8096 Series Fibre Channel (FC) tape libraries . . . . . . . . 38
Back panel overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
Configuring drive information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Viewing drive status information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
SAN connectivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Native Fibre Channel drives (NFC). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Network Storage Routers (NSR) N1200-320 4Gb . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
Summary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Setting the bar code length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
Check connectivity and performance with L&TT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
The HP StorageWorks 6000 Virtual Library System (VLS). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
Secure Manager mapping rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Single Fibre Channel port example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
Basic Secure Manager and manual mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Interface Manager card problems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Fibre Channel interface controller and Network Storage Router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Common configuration settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
Indexed maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Port 0 device maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Auto assigned maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
SCC maps . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
Network Storage Routers have limited initiators for single- and dual-port routers . . . . . . . . . . . . . . . . . 95
Fibre Channel switches . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
HP StorageWorks 4/8 SAN Switch and HP StorageWorks 4/16 SAN Switch—file system full resolution . . . 97
EBS and the multi-protocol router . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
Fibre Channel host bus adapters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
HBAs and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
Third-party Fibre Channel HBAs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
HP StorageWorks 3Gb SAS BL Switch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Important tips . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
RAID array storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Raid arrays and performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Third-party RAID array storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
EBS power on sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
Using HP StorageWorks Library and Tape Tools (L&TT) to verify disk system data performance . . . . . . . . 101
3 Zoning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Emulated private loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Increased security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Optimized resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
Customized environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
Zoning components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
EBS zoning recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
6 Management tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
HP Storage Essentials Storage Resource Management Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Features and benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
HP Systems Insight Manager (HP SIM). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Features:. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Key benefits: . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
Management agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Known issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Recommendations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
About this guide
This guide provides information to design an HP StorageWorks Enterprise Backup Solution (EBS).
Intended audience
This guide is intended for system administrators implementing an EBS who have experience with the
following:
• Tape backup technologies, tape libraries, and backup software
• SAN environments
• Fibre Channel
Prerequisites
Before installing the EBS hardware:
• Review the HP Enterprise Backup Solutions Compatibility Matrix located at http://www.hp.com/go/ebs
to ensure that the selected components are listed.
• Be familiar with the operating system(s).
• Be familiar with the EBS hardware components listed in Chapter 1.
• Be familiar with switch zoning and selective storage presentation.
Related documentation
In addition to this guide, HP provides the following related documentation:
• Implementation matrix for supported backup applications
• Installation guides for EBS hardware components
Convention                    Element
Medium blue text: Figure 1    Cross-reference links and e-mail addresses
Bold font                     GUI elements that are clicked or selected, such as menu and list
                              items, buttons, and check boxes
Monospace font                System output, code, and command-line variables
Monospace, bold font          Emphasis of file and directory names, system output, code, and text
                              typed at the command line
CAUTION: Indicates that failure to follow directions could result in damage to equipment or data.
Rack stability
HP technical support
Telephone numbers for worldwide technical support are listed on the HP support website:
http://www.hp.com/support/.
Collect the following information before calling:
• Technical support registration number (if applicable)
• Product serial numbers
• Product model names and numbers
• Applicable error messages
• Operating system type and revision level
• Detailed, specific questions
For continuous quality improvement, calls may be recorded or monitored.
HP strongly recommends that customers sign up online using the Subscriber's Choice website:
http://www.hp.com/go/e-updates.
• Subscribing to this service provides e-mail updates on the latest product enhancements, newest driver
versions, and firmware documentation updates.
• Signing up allows easy access to your products by selecting Business support and then Storage under
Product Category.
HP-authorized reseller
For the name of the nearest HP-authorized reseller:
• In the United States, call 1-800-345-1518.
• Elsewhere, see the HP website: http://www.hp.com. Click Contact HP to find locations and telephone
numbers.
Helpful websites
For support and product information, see the following HP websites:
• http://www.hp.com
• http://www.hp.com/go/storage
• http://www.hp.com/support/
• http://www.docs.hp.com
• http://www.hp.com/support/tapetools
• http://www.hp.com/support/manuals
Supported components
For complete EBS configuration support information, refer to the HP Enterprise Backup Solutions
Compatibility Matrix located at: http://www.hp.com/go/ebs.
Supported topologies
A Fibre Channel SAN supports several network topologies, including point-to-point and switched fabric.
These configurations are constructed using switches and routers.
Serial-attach SCSI (SAS)
The Serial-attach SCSI (SAS) interface is the successor to the parallel SCSI interface, designed to bridge
the gap between performance, scalability, and affordability. SAS combines high-end features from Fibre
Channel (such as multi-initiator support and full-duplex communication) with a physical interface
leveraged from SATA (for better compatibility and investment protection), and the performance, reliability,
and ease of use of traditional SCSI technology.
Direct-attach SCSI
Direct-attach SCSI (DAS) is the most common form of attachment to both disk and tape drives. Direct-attach
SCSI allows a single server to communicate directly to the given target device over a SCSI cable. These
configurations do not allow for multi-hosting a single target device, because the target device is dedicated
to the server. These configurations are not covered in this document.
NOTE: See Figure 49, Figure 50, and Figure 51 for an example of basic switched fabric, point-to-point,
and direct-attached SCSI configurations.
The following table shows the native utilities tested on each operating system:

Utility      HP-UX   Linux   Windows
mt           Yes     Yes     No
ntbackup     No      No      Yes
mc           Yes     No      No
mtx          No      Yes     No
RSM          No      No      Yes
HP-UX 11.11 or higher, Linux Red Hat EL 2.1 or higher, Linux SUSE SLES 8 or higher, and Windows Server
2000 or higher are tested. Current tape drive and autoloader (library) drivers are located at
http://www.hp.com.
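Where a quick host-level sanity check is useful, the native utilities above can be scripted before any backup software is involved. The following is a minimal sketch, assuming a Linux host with a tape drive at /dev/st0 (the device path and wrapper function are illustrative, not part of EBS):

    import subprocess

    def tape_drive_status(device="/dev/st0"):
        """Query a SCSI tape drive using the native 'mt' utility.

        Returns the raw status text; raises CalledProcessError if the
        drive is not reachable. The device path is an assumption and
        should be adjusted to the host's actual configuration.
        """
        result = subprocess.run(
            ["mt", "-f", device, "status"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(tape_drive_status())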
NOTE: For a complete listing of supported servers and hardware, refer to the HP Enterprise Backup
Solutions Compatibility Matrix at http://www.hp.com/go/ebs.
Host bus adapter
Host bus adapters (HBAs) are used to connect servers to Fibre Channel topologies. They provide a similar
function to SCSI host bus adapters or network interface cards (NICs). The device driver for an HBA is
typically responsible for providing support for any of the Fibre Channel topologies—point-to-point, loop,
or fabric. In most cases, the device driver also presents Fibre Channel targets as SCSI devices to the
operating system. This provides compatibility with existing storage applications and file systems that were
developed for SCSI devices.

Switch
Switches are the Fibre Channel infrastructure component used to construct fabrics. Switches may be
cascaded together to configure larger fabrics. Switches typically have an Ethernet port for managing them
over the network. This port provides status and configuration for the switch and individual ports.

Tape library/VLS
The tape library or VLS provides the nearline storage for backup on the SAN. The tape library provides
automated tape handling, which becomes a key requirement when consolidating backup across multiple
servers.

Fibre Channel interface controller
The controller (also referred to as a bridge or router) provides the connection between Fibre Channel
networks and SCSI tape and robotic devices. This device is similar to a Fibre Channel disk controller for
RAID subsystems. The controller acts as an interface to the SCSI device, and can send or receive SCSI
commands through encapsulated Fibre Channel frames.

Fibre Channel interface manager
The Interface Manager card, in conjunction with HP StorageWorks Command View TL software, provides
remote management of the library via a serial, telnet, or web-based GUI interface.

Cables and SFPs
Three types of cables exist to connect Fibre Channel devices together—copper cables, short-wave or
multi-mode optical cables, and long-wave or single-mode optical cables. Each type of cable provides
different maximum lengths, as well as cost. Fibre Channel devices have ports which either require a
specific type of cable, or require a separate module referred to as an SFP (Small Form-factor Pluggable).
An SFP-based port allows the customer to use any type of cable by using the appropriate type of SFP with
it. For example, Fibre Channel ports use fibre-optic SFP modules with LC connectors.

Data protection software
Data protection software is deployed on each of the hosts on a SAN that will perform backup. This
typically requires installing server-type licenses and software on each of these hosts. Many of these backup
applications also provide a separate module or option, which enables the software to manage shared
access to the tape drives on a SAN. This may need to be purchased in addition to the typical software
licenses.

SAN management software
SAN management software is used to manage resources, security, and functionality on a SAN. This can
be integrated with host-based device management utilities or embedded management functionality such
as switch Ethernet ports.

HP StorageWorks Library and Tape Tools (L&TT)
L&TT is a robust diagnostic tool for tape storage and magneto-optical storage products. L&TT provides
functionality for firmware downloads, verification of device operation, maintenance procedures, failure
analysis, corrective service actions, and a range of utility functions. Its performance tools assist in
troubleshooting backup and restore issues in the overall system. L&TT also provides seamless integration
with HP's hardware support organization by generating and e-mailing support tickets, and is ideal for
customers who want to verify their installation, ensure product reliability, and obtain self-diagnostics
and faster resolution of tape device issues. Ensure that L&TT is installed on the backup servers and is
ready to use, should there be a need to contact HP support.
HP StorageWorks Secure Key Manager
The HP StorageWorks Secure Key Manager reduces your risk of a costly data breach and reputation
damage while improving regulatory compliance with a secure centralized encryption key management
solution for HP LTO4 enterprise tape libraries. The Secure Key Manager automates key generation and
management based on security policies for multiple libraries. This occurs transparently to ISV backup
applications. The Secure Key Manager is a hardened server appliance delivering secure identity-based
access, administration, and logging with strong auditable security designed to meet the rigorous Federal
Information Processing Standard (FIPS) 140-2 security standards. Additionally, the Secure Key Manager
provides reliable lifetime key archival with automatic multi-site key replication, high availability clustering
and failover capabilities.
The HP StorageWorks Secure Key Manager provides centralized key management for HP StorageWorks
Enterprise Storage Libraries (ESL) E-Series Tape Libraries and HP StorageWorks Enterprise Modular Library
(EML) E-Series Tape Libraries. In addition to the clustering capability, the Secure Key Manager provides
comprehensive backup and restore functionality for keys, as well as redundant device components and
active alerts. The Secure Key Manager supports policy granularity ranging from a key per library partition
to a key per tape cartridge, and features an open, extensible architecture that accommodates emerging
standards and allows additional client types needing key management services in the future. These clients
may include other storage devices, switches, operating systems, and applications. Keep your confidential data secure
yet highly available with automated single point of management for your encryption keys using the HP
Secure Key Manager, a member of the "HP Secure Advantage" portfolio.
Configuration preparation
To prepare to configure the system, have ready all information listed on the pre-install survey. This
information was gathered by your site Security Officer and the HP installation team before the system was
shipped; if it has been lost, obtain the form from the appendix of the HP StorageWorks Secure Key
Manager user guide and complete it. If portions of this information are inaccurate or unknown, the
installation will be incomplete and data encryption cannot occur.
See the "HP StorageWorks Secure Key Manager Installation and replacement" for completed details on
the configuration and installation of the SKM. Also, check the EBS Compatibility Matrix at
http://www.hp.com/go/ebs for compatibility and any additional notes when using the SKM with data
protection application software.
Getting help
If you cannot find the information that you need in this overview, there are several other resources that you
can use to get more detailed information.
• The HP StorageWorks Secure Key Manager user guide on the documentation CD
• The HP website, http://www.hp.com
• Your nearest HP authorized reseller (locations and telephone numbers of these resellers are given on the
HP website)
• HP technical support telephone numbers:
• In North America, 1-800-633-3600
• For other regions, telephone numbers are given on the HP website.
HP StorageWorks ESL E-Series tape libraries
The HP StorageWorks ESL E-Series enterprise tape library scales up to 712 LTO or 630 SDLT cartridge slots
in a single library frame, and up to 3546 LTO slots or 3138 SDLT slots in a multi-frame library. Available with
Ultrium 1840, Ultrium 960, Ultrium 460, SDLT 600, and SDLT 320 tape technologies, the ESL E-Series
offers storage density of up to 56.8 terabytes per square foot of floor space.
Each single library frame may contain up to 24 tape drives (four drives per drive cluster) as shown in
Figure 1. Each library frame must contain at least one drive cluster. In a dual frame library, where two
frames are joined into a single library using the Cross-Link Mechanism, the first frame may contain up to
20 drives, and the second frame may contain up to 24 drives. In libraries with more than two frames, see
Figure 4 for details on numbers of drives and slots supported.
[Figure 1: Drive numbering. Drives are installed in six clusters of four (positions A–D), numbered 1 through 24.]
SCSI or FC connections
NOTE: Many of the figures in this chapter show a library with SCSI drives and e2400-160 interface
controllers. Your library may look different and could exclude the e1200-160 robotics controller card,
include Ultrium 460-FC and 960-FC drives, and include e2400-FC 2G and 4G interface controllers.
Setting the bar code length
To change the bar code length:
1. Access the front panel display and select Menu.
2. Select Setup.
3. Enter the password.
4. Scroll down to bar code length and enter the desired length. The default length is 6.
Creating multi-unit ESL E-Series tape libraries using the Cross Link Kit
The HP StorageWorks ESL Cross Link Kit connects up to five 712e or 630e tape libraries together as a single
tape library to scale up to 3546 LTO slots (1418 TB) or 3138 SDLT slots (941 TB) and up to 44 tape drives.
The ESL E-Series Cross Link Kit requires specific software and firmware supplied on CD with the Cross Link
Kit. Software licenses are required only on the first or primary library frame.
Lower-capacity tape libraries (322e, 286e) are supported only if upgraded to a fully populated tape
library, and only if they are the first or primary tape library.
NOTE: The Cross Link Kit requires removal of one cluster of either tape drives or slots along the back wall
of the first ESL cabinet. In an LTO library, the back wall clusters hold 14 slots. In an SDLT library, the back
wall clusters hold 12 slots.
Figure 3 ESL E-Series tape libraries using the Cross Link Kit
NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.
HP StorageWorks EML E-series tape libraries
The HP StorageWorks EML E-series library is available in two models—the 103e, which is a 12U
rack-mounted design containing up to four tape drives and 103 LTO tape slots, and the 245e, which is a
24U rack-mounted design holding up to eight drives and 245 LTO slots. Upgrade kits are available to fill
out a 42U rack with a configuration that can contain up to 442 LTO slots and 16 tape drives.
The Fibre Channel LTO drives are connected to a SAN fabric via e2400-FC 2G/4G interface controllers,
or directly to the SAN in the case of Ultrium 1840 LTO4 drives. The interface controllers are in turn
managed by the HP StorageWorks Interface Manager card. HP StorageWorks Command View TL is used
to communicate with the Interface Manager via web browser for configuration.
[Cabling diagram: EML library LAN (Ethernet) and SAN (Fibre Channel) connections.]
Setting the bar code front panel display and host reporting configuration
Bar code reporting on HP StorageWorks EML E-Series tape libraries can be configured as six to eight
characters, left or right aligned. If six characters with left alignment is chosen, any characters after the
sixth are truncated. With six characters and right alignment, only the last six characters are shown with the
beginning characters truncated.
The LTO labels have L1, L2, L3, and L4 as the media identifiers for the respective LTO1, LTO2, LTO3, and
LTO4 cartridges. All cleaning cartridges should use the format CLNxxxL1 type of label, where xxx is a
number between 000 and 999 for all types of LTO drives. WORM tape cartridges for LTO3 and LTO4
have media identifiers of LT and LU, respectively. The length and justification of the bar code reporting format, as sent to the
host and as viewed on the front panel, can be configured through the front panel configuration section.
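The truncation and alignment rules above determine the bar code string that the host and the front panel actually see. The following is a minimal sketch of that logic (the function and its defaults are illustrative, not library firmware):

    def reported_bar_code(label, length=6, align="left"):
        """Return the bar code as reported for a configured length/alignment.

        Left alignment keeps the first 'length' characters (the tail is
        truncated); right alignment keeps the last 'length' characters.
        """
        if len(label) <= length:
            return label
        return label[:length] if align == "left" else label[-length:]

    # An LTO4 cartridge labeled ABC123L4, reported at six characters:
    print(reported_bar_code("ABC123L4", 6, "left"))   # ABC123
    print(reported_bar_code("ABC123L4", 6, "right"))  # C123L4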
To set the bar code front panel display and host reporting configuration on the EML library:
1. Access the EML front panel top display and select the Configuration tab.
2. Select the Library Configuration tab and enter the password (the default password for EML is 112233).
3. Select the Configure bar code reporting format tab.
4. Select Format for Front Panel and configure according to the requirements listed above.
5. Select Format for Host Reporting and configure according to the requirements listed above.
NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.
WARNING! Make sure the power to each component is off and the power cords are unplugged before
making any connections.
NOTE: Read the documentation included with each component for additional operating instructions
before installing.
MSL5000/6000 Series library with direct-attached SCSI
Figure 12 shows a typical SCSI cable configuration for a library with two tape drives installed using
multi-host systems or multiple SCSI HBAs.
Serial terminal settings: Data bits: 8; Stop bits: 1; Parity: None.
NOTE: Before initially applying power to the library, make sure all the FC devices are powered on first,
and that they have finished performing individual self tests. This helps to ensure that device discovery works
correctly.
5. Apply power to the tape library. The power-on process can take up to 90 seconds. Once complete, the
main menu should be accessible.
SCSI bus configuration
The interface card provides the capability to reset SCSI buses during the interface card boot cycle. This
allows the devices on a SCSI bus to be set to a known state. Configuration provides for the SCSI bus reset
feature to be enabled or disabled.
The interface card negotiates for the maximum values for transfer rates and bandwidth on a SCSI bus. If an
attached SCSI device does not allow the full rates, the interface card uses the best rate it can negotiate for
that device. Negotiation is on a device-specific basis, so the unit can support a mix of SCSI device types
on the same SCSI bus.
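As a sketch of that per-device negotiation, the card settles on the minimum of its own maximum and each device's maximum, independently per device (the numbers and device names below are illustrative only):

    # Interface-card maximums for the bus (illustrative values)
    CARD_MAX_RATE_MBPS = 160
    CARD_MAX_WIDTH_BITS = 16

    # Each attached device advertises its own capabilities (hypothetical devices)
    devices = {
        "tape_drive_0": {"rate": 160, "width": 16},
        "tape_drive_1": {"rate": 80, "width": 16},
        "robot": {"rate": 40, "width": 8},
    }

    # Negotiation is per device, so a slow robot does not cap the tape drives.
    negotiated = {
        name: {"rate": min(CARD_MAX_RATE_MBPS, caps["rate"]),
               "width": min(CARD_MAX_WIDTH_BITS, caps["width"])}
        for name, caps in devices.items()
    }
    print(negotiated)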
FC port configuration
By default, the configuration of the FC port on the interface card is set to N_Port mode.
FC arbitrated loop addressing
On a Fibre Channel Arbitrated Loop, each device appears as an Arbitrated Loop Physical Address
(AL_PA). To obtain an AL_PA, two addressing methods, called soft and hard addressing, can be used by
the interface card. Soft addressing is the default setting. For hard addressing, the user specifies the AL_PA
of the interface card.
Soft addressing
When acquiring a soft address, the interface card acquires the first available loop address, starting from
address 01 and moving up the list of valid AL_PAs (01 through EF). In this mode, the
interface card obtains an available address automatically and then participates on the FC loop, as long as
there is at least one address available on the loop connected to the interface card. Fibre Channel supports
up to 126 devices on an Arbitrated Loop.
Hard addressing
When acquiring a hard address, the interface card attempts to acquire the AL_PA value specified by the
user in the configuration settings. If the desired address is not available at loop initialization time, the
interface card comes up on the FC loop using an available soft address. This allows both the loop and the
unit to continue to operate. An example of this scenario would be when another device on the Arbitrated
Loop has acquired the same address as that configured on the interface card.
Hard addressing is recommended for FC Arbitrated Loop environments where it is important that the FC
device addresses do not change. Device address changes can affect the mapping represented by the host
operating system to the application, and have adverse effects. An example of this would be a tape library
installation, where the application configuration requires fixed device identification for proper operation.
Hard addressing ensures that the device identification to the application remains constant.
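A minimal sketch of the two acquisition behaviors described above. The FC-AL standard defines an ordered table of 126 valid AL_PA values; the contiguous range used here is a simplification for illustration:

    def acquire_al_pa(taken, preferred=None):
        """Return the AL_PA the interface card ends up with at loop init.

        taken: AL_PAs already claimed by other devices on the loop.
        preferred: the hard-address setting, or None for soft addressing.
        If the hard address is unavailable, the card falls back to a soft
        address, mirroring the behavior described above.
        """
        valid = [f"{v:02X}" for v in range(0x01, 0xF0)]  # simplified 01..EF
        if preferred is not None and preferred not in taken:
            return preferred
        for al_pa in valid:  # soft addressing: first available, from 01 upward
            if al_pa not in taken:
                return al_pa
        raise RuntimeError("no AL_PA available on this loop")

    print(acquire_al_pa(taken={"01", "02"}))            # soft address: 03
    print(acquire_al_pa(taken={"01"}, preferred="01"))  # hard address taken, falls back to 02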
FC switched fabric addressing
When connected to a Fibre Channel switch, the interface card is identified to the switch as a unique device
by the factory programmed World Wide Name (WWN) and the World Wide Port Names (WWPN),
which are derived from the WWN.
Creating a multi-stack unit
The MSL5000 and MSL6000 series libraries can be stacked in a scalable combination with additional
MSL5000 or MSL6000 series libraries to form a multi-unit, rack-mounted configuration. Through use of a
pass-thru mechanism (PTM), all multi-unit libraries in the stack can operate together as a single virtual
library system. Stacked units are interconnected through their rear panel Ethernet connections and an
external Ethernet router mounted to the rack rails, or through an internal Ethernet router installed in a
library expansion slot.
The external Ethernet hub also provides an additional connector when libraries are combined in their
maximum stacked height.
• A maximum of eight libraries can be connected in this manner.
• Any combination of libraries, not exceeding 40U in total stacked height, can also be used.
• A multi-unit library system appears to the host computer system and library control software as a single
library.
• For multi-unit applications, the top library becomes the primary master unit and all other lower libraries
are slave units.
NOTE: The PTM continues to function even if a slave library is physically removed from the rack
configuration during normal library operation.
Figure 15 shows how to connect a multi-unit library configuration using an embedded router card.
Setting the bar code length
1. Access the front panel and select Menu.
2. Select Edit Options.
3. Select Library.
4. Enter password.
5. Scroll down to bar code options and enter the desired values. The default values are as follows:
• bar code label size = 8
• bar code alignment = left Align
• bar code label check digit = Disabled
• bar code reader = Retries enabled
NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.
Figure 18 Back panel view of the MSL8096 tape library
Number Description
1 Fibre Channel ports A and B, left to right
2 Fan vent
3 Power connector
4 Tape drive(s)
5 Ethernet port
6 Serial port (For HP Service use only)
7 USB port
8 Pull-out tab containing the serial number and other product information
IMPORTANT: Configure the FC ports correctly before using the tape library.
NOTE: HP recommends cabling Port A only and configuring Port B for Auto Detect on Fibre Speed and
Port Type.
SAN connectivity
HP’s new StorageWorks MSL2024, MSL4048, or MSL8096 tape libraries may be connected to the SAN in
two ways:
1. Native Fibre Channel tape drives integrated into the library for direct SAN connectivity.
2. SCSI tape drives in the library, with a Network Storage Router for SAN connectivity.
This section helps you decide which option to use when attaching an MSL2024, MSL4048, or MSL8096
tape library to a SAN.
One of the most important considerations in data protection is the reliability of the backup and restore
jobs. How the backup SAN is implemented can affect the completion rate and performance of the backup
and restore jobs.
Native Fibre Channel drives (NFC)
Native Fibre Channel drives allow for direct connection into a SAN without an intermediate Network
Storage Router. An NFC drive-based library offers the best price-to-performance ratio for integration into a
SAN. The NFC drives can be configured directly using the library Remote Management Utility (RMU). The
RMU allows administrators to set Port Speed, Port Topology, Addressing Mode, and Arbitrated Loop
Physical Address (AL_PA).
NOTE: HP supports switched fabric and point-to-point (P2P) topologies, but does not support arbitrated
loop configurations.
The NFC tape drive library is configured into the backup software in the same way as any other HP tape
library by performing a device scan from the backup software. NFC drives and SCSI drives with an
attached NSR can be used together within the same library.
Since Native Fibre Channel drives allow direct connection into a SAN, it is important to be mindful of the
size and/or scope of the SAN in which the library and drives are being attached.
IMPORTANT: HP strongly recommends connecting HP Native Fibre Channel tape libraries only into smaller
private SANs where the hosts are relatively homogeneous. Information
regarding component compatibility can be found on the HP Enterprise Backup Solutions Compatibility
Matrix available on the EBS website at: www.hp.com/go/ebs. The matrix is updated monthly.
With this topology, the NFC tape drives are connected directly into the SAN but are not isolated from SAN
traffic such as Target Resets, or from rogue applications or servers. In a complex or poorly implemented SAN,
those items can cause backup and restore jobs to abort, requiring manual intervention to restart the job
and ensure it completes.
Summary
• Use native FC tape libraries for best price performance in connecting tape to a SAN.
• Use SCSI-based tape libraries with a Network Storage Router when more flexibility and isolation of
tape devices in the SAN is required.
Setting the bar code length
1. Access the front panel.
2. Scroll over to Configuration.
3. Scroll down to bar code Reporting.
4. Enter password (if requested).
5. Scroll down to bar code options and enter the desired values. The default values are as follows:
• bar code label size = 8
• bar code alignment = left align
NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.
Features
• Emulates popular tape drives and libraries
• Up to 16 libraries and 64 drives emulated simultaneously within a single virtual library system
• Certified with popular backup software packages through HP StorageWorks Enterprise Backup Solution
• Over 575 MB/s throughput
• Scales capacity and performance
• Data compression
• Hot Swap array drives
• Redundant array power supplies and cooling
• RAID 5
• Mounts in a standard 19-inch rack
The following diagrams show the rack order and cabling configuration of the various VLS systems.
[Rack order diagrams (top to bottom). Smallest configuration: node, disk array 1, disk array 2. Mid-size configuration: disk array 3, disk array 2, node, disk array 0, disk array 1. Largest configuration: disk arrays 7 through 4, node, disk arrays 0 through 3.]
VLS6840 and VLS6870 rack order
[Rack order diagram for the VLS6840 and VLS6870.]
VLS6600 cabling
[Cabling diagram for the VLS6600.]
Check connectivity and performance with L&TT
• Install L&TT on the server and run it to scan and find the tape devices. See ”Library and Tape Tools” on
page 173 for more information.
• Check to ensure that the topology is as expected. L&TT will show the connectivity and topology for each
host HBA and tape device connected.
• Run the L&TT Device Performance test on each device to verify that the host to device data path is
configured for full speed operation. This step requires a write-enabled tape cartridge.
NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support at a later date. It has no background services and no impact on the
host or other applications.
Benefits
Integrating a VLS into your existing storage and backup infrastructure delivers the following benefits:
• Faster backups—Backup speeds are limited by the number of tape drives available to the SAN
hosts. The VLS emulates many more tape drives than are available in physical tape libraries, allowing
more hosts to run backups concurrently. The VLS is optimized for backups and delivers faster
performance than a simple disk-to-disk solution.
• Faster single file restores—A single file can be restored much faster from disk than tape.
• Lower operating costs—Fewer physical tape drives and cartridges are required as full backups to
tape are eliminated. Also, fewer cartridges are required as small backups stored on multiple virtual
cartridges can be copied to one physical cartridge.
• More efficient use of storage space—Physical tape libraries cannot share storage space with
other physical tape libraries, and physical cartridges cannot share storage space with other physical
cartridges. This unused storage space is wasted.
Storage space is not wasted in a VLS, because VLS storage space is dynamically assigned as it is used.
Storage space is shared by all the libraries and cartridges configured on a VLS.
• Reduced risk of data loss and aborted backups—RAID-based storage is more reliable than
tape storage.
Aborted backups caused by tape drive mechanical failures are eliminated.
VLS9000-series components
A VLS9000-series system consists of at least one VLS9000 node, at least one VLS9000 array (one base
disk array enclosure and three expansion disk array enclosures), and one VLS9000 connectivity kit (two
Ethernet switches for internal inter-node connections and two Fibre Channel (FC) switches for disk storage
connections). See the drawing of the racked system below.
Each VLS9000 node contains hardware data compression, dual processors, one 4 Gb quad-port FC HBA,
two 2048 MB memory modules, and two 60 GB SATA hard drives.
[Drawing: racked VLS9000-series system, showing the node's FC ports 0 and 1 and the disk array enclosures.]
You can install either the VLS9000 20-port or 32-port connectivity kit in your VLS9000-series system. The
20-port connectivity kit includes two 10-port FC switches and two Ethernet switches. The 32-port
connectivity kit includes two 16-port FC switches and two Ethernet switches.
The 32-port connectivity kit allows you to install more VLS9000 nodes and VLS9000 arrays in your
VLS9000-series system than the 20-port connectivity kit. Two FC ports (one FC port on each FC switch) are
required for each VLS9000 node or VLS9000 array installed in a VLS9000-series system. See VLS9030
capacity for configuration options.
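Because each node or array consumes one port on each FC switch, choosing between the 20-port and 32-port kits reduces to simple port arithmetic. A minimal sketch (the two-ports-per-device rule comes from the text above; the helper functions are illustrative):

    def ports_needed_per_switch(nodes, arrays):
        """Each node and each array uses one port on each of the two FC
        switches, so the per-switch requirement is nodes + arrays."""
        return nodes + arrays

    def kit_fits(nodes, arrays, ports_per_switch):
        # 20-port kit: two 10-port FC switches; 32-port kit: two 16-port switches
        return ports_needed_per_switch(nodes, arrays) <= ports_per_switch

    print(kit_fits(nodes=4, arrays=4, ports_per_switch=10))  # True: 8 <= 10
    print(kit_fits(nodes=8, arrays=8, ports_per_switch=10))  # False: needs the 32-port kit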
NOTE: For maximum performance, install one VLS9000 array for every VLS9000 node installed.
For maximum capacity, install two VLS9000 arrays for every VLS9000 node installed.
[Figure: primary node back panel; callouts 11 and 12 are power supply 1 and power supply 2.]
a. Connect one end of a USB connector to the USB port. Connect the other end to the USB/Ethernet
adapter. Connect a 1 meter Ethernet cable to the adapter, then connect the Ethernet cable to port 1
of Ethernet Switch 2524 (see Figure 32).
b. Secure the USB/Ethernet adapter to the upper left inside rack brace.
c. Connect a 1 meter Ethernet cable to NIC2. Connect the other end of the cable to port 1 of Ethernet
Switch 2824 (see Figure 33).
d. Connect one end of an Ethernet cable (not included) to NIC1. Connect the other end of the cable to
the existing external network.
e. Connect one end of an FC cable (not provided) to host port 0. Connect the other end to an external
FC switch/fabric that connects to your tape backup hosts.
f. If desired, connect one end of an FC cable (not provided) to host port 1. Connect the other end to
an external FC switch/fabric that connects to your tape backup hosts. Otherwise, connect a
loopback plug to host port 1.
g. Connect one end of an FC cable to storage port 3. Connect the other end to port 0 of Fibre
Channel switch #1 after inserting a transceiver in the port (see Figure 34 or Figure 35).
h. Connect one end of an FC cable to storage port 2. Connect the other end to port 0 of Fibre
Channel switch #2 after inserting a transceiver in the port (see Figure 36 or Figure 37).
i. Connect to the serial port (cable provided) to access the command-line user interface at initial
configuration and during debug activities. Disconnect from this port during normal operations.
NOTE: You must connect to the keyboard and video connectors when performing Quick Restore
(keyboard and monitor not included).
NOTE: Use this procedure to install any secondary nodes — node 1, 2, 3, and so on.
a. Connect one end of a USB cable to the USB port. Connect the other end of the cable to the
USB/Ethernet adapter. Connect a 1 meter Ethernet cable to the adapter, then connect the Ethernet
cable to the next available port of Switch 2524 (see Figure 32).
b. Secure the USB/Ethernet adapter to the upper left inside rack brace.
c. Connect a 1 meter Ethernet cable to NIC2. Connect the other end of the cable to the next available
port of Switch 2824 (see Figure 33).
d. Connect one end of an FC cable (not provided) to host port 0. Connect the other end to an external
FC switch/fabric that connects to your tape backup hosts.
e. If desired, connect one end of an FC cable (not provided) to host port 1. Connect the other end to
an external FC switch/fabric that connects to your tape backup hosts. Otherwise, connect a
loopback plug to host port 1.
f. Connect one end of an FC cable to storage port 3. Connect the other end to the next available port
of Fibre Channel switch #1 after inserting a transceiver in the port (see Figure 34 or Figure 35).
g. Connect one end of an FC cable to storage port 2. Connect the other end to the next available port
of Fibre Channel switch #2 after inserting a transceiver in the port (see Figure 36 or Figure 37).
1 SAS cable, SAS port of RAID controller 0 of base disk array enclosure connects to SAS input port of
expansion controller 0 of expansion disk array enclosure 0
2 SAS cable, SAS port of RAID controller 1 of base disk array enclosure connects to SAS input port of
expansion controller 1 of expansion disk array enclosure 0
3 SAS cable, SAS output port of expansion controller 0 of expansion disk array enclosure 0 connects to
SAS input port of expansion controller 0 of expansion disk array enclosure 1
4 SAS cable, SAS output port of expansion controller 1 of expansion disk array enclosure 0 connects to
SAS input port of expansion controller 1 of expansion disk array enclosure 1
5 SAS cable, SAS output port of expansion controller 0 of expansion disk array enclosure 1 connects to
SAS input port of expansion controller 0 of expansion disk array enclosure 2
6 SAS cable, SAS output port of expansion controller 1 of expansion disk array enclosure 1 connects to
SAS input port of expansion controller 1 of expansion disk array enclosure 2
Remove the tape and end caps from the SAS cables before installing.
a. Verify that both power switches are off for each disk array enclosure in the rack.
b. Connect one end of a SAS cable to RAID controller 0 of the base disk array enclosure. Connect the
other end to SAS input port of expansion controller 0 of expansion disk array enclosure 0.
c. Connect one end of a SAS cable to RAID controller 1 of the base disk array enclosure. Connect the
other end to SAS input port of expansion controller 1 of expansion disk array enclosure 0.
d. Connect one end of a SAS cable to SAS output port of expansion controller 0 of the expansion disk
array enclosure 0. Connect the other end to SAS input port of expansion controller 0 of expansion
disk array enclosure 1.
e. Connect one end of a SAS cable to SAS output port of expansion controller 1 of the expansion disk
array enclosure 0. Connect the other end to SAS input port of expansion controller 1 of expansion
disk array enclosure 1.
f. Connect one end of a SAS cable to SAS output port of expansion controller 0 of the expansion disk
array enclosure 1. Connect the other end to SAS input port of expansion controller 0 of expansion
disk array enclosure 2.
g. Connect one end of a SAS cable to SAS output port of expansion controller 1 of the expansion disk
array enclosure 1. Connect the other end to SAS input port of expansion controller 1 of expansion
disk array enclosure 2.
h. Connect black power cables to power modules on the left.
i. Route the black power cables through the left side of the rack and plug them into a PDM.
a. Ensure that the power cable is connected to the switch, as described in the racking instructions.
b. Connect one end of a 2 meter Ethernet cable to the Ethernet port of Fibre Channel switch #1.
Connect the other end of the Ethernet cable to port 23 of Ethernet Switch 2524.
c. Connect one end of an Ethernet cable to port 24 of Ethernet Switch 2824. Connect the other end of
the Ethernet cable to port 24 of Ethernet Switch 2524.
d. Ensure that the Ethernet cables from the NIC2 ports of each node are firmly set in the appropriate
ports on the switch.
e. Connect one end of an Ethernet cable to port 16 of Ethernet Switch 2524. Connect the other end of
the Ethernet cable to RAID controller 0 on the base disk array enclosure, of first array (array 0).
f. Working backwards from port 16 on Ethernet Switch 2524, connect one end of an Ethernet cable to
the next available Ethernet port on Ethernet Switch 2524. Connect the other end of the Ethernet
cable to RAID controller 0 on the base disk array enclosure, of the second array (array 1).
g. Repeat step f for the next array.
h. Secure Ethernet cables with a Velcro tie to the right side of the rack.
4. On Ethernet Switch 2824:
a. Ensure that the power cable is connected to the switch, as described in the racking instructions.
b. Ensure that the Ethernet cables from the NIC2 ports of each node are firmly set in the appropriate
ports.
c. Connect one end of a 2 meter Ethernet cable to the Ethernet port of Fibre Channel switch #2.
Connect the other end of the Ethernet cable to port 23 of Switch 2824.
d. Connect one end of an Ethernet cable to port 16 of Ethernet Switch 2824. Connect the other end of
the Ethernet cable to RAID controller 1 on the base disk array enclosure, of first array (array 0).
e. Working backwards from port 16 on Ethernet Switch 2824, connect one end of an Ethernet cable to
the next available Ethernet port on Ethernet Switch 2824. Connect the other end of the Ethernet
cable to RAID controller 1 on the base disk array enclosure, of the second array (array 1).
f. Repeat step e for the next array.
g. Secure Ethernet cables with a Velcro tie to the right side of the rack.
5. On Fibre Channel switch #1:
Figure 34 Fibre Channel switch #1 port cabling (20-port connectivity kit shown)
Figure 35 Fibre Channel switch #1 port cabling (32-port connectivity kit shown)
11 FC cable from FC port 0, of RAID controller 0 of 7th array (if present)
12 FC cable from FC port 0, of RAID controller 0 of 6th array (if present)
13 FC cable from FC port 0, of RAID controller 0 of 5th array (if present)
14 FC cable from FC port 0, of RAID controller 0 of 4th array (if present)
15 FC cable from FC port 0, of RAID controller 0 of 3rd array (if present)
16 FC cable from FC port 0, of RAID controller 0 of 2nd array (if present)
17 FC cable from FC port 0, of RAID controller 0 of 1st array
a. Ensure that the Fibre Channel cables from FC port 3, of each node are firmly set in the appropriate
ports.
b. Connect one end of a Fibre Channel cable to port 9 (if 10-port switch) or port 15 (if 16-port switch)
of Fibre Channel switch #1 after inserting a transceiver in the port. Connect the other end of the
Fibre Channel cable to Fibre Channel port 0, of RAID controller 0, of first array (array 0).
c. Working backwards from the last Fibre Channel port on Fibre Channel switch #1, connect one end
of a Fibre Channel cable to the next available Fibre Channel port on Fibre Channel switch #1 after
inserting a transceiver in the port. Connect the other end of the Fibre Channel cable to Fibre
Channel port 0, of RAID controller 0, of the second array (array 1).
d. Repeat step c for the next array.
e. Remove transceivers from any Fibre Channel ports not connected to another device or insert a plug
in the transceiver.
Each unconnected transceiver will generate connection failure notifications.
f. Connect a power cable to the switch. Then, route the AC power cable through the holes in the rack
to the back of the rack and connect the cable to a PDM.
g. Secure the FC cables and the Ethernet cables installed in the previous steps together with Velcro ties.
Route them to the right side of the rack.
6. On Fibre Channel switch #2:
Figure 36 Fibre Channel switch #2 port cabling (20-port connectivity kit shown)
Figure 37 Fibre Channel switch #2 port cabling (32-port connectivity kit shown)
a. Ensure that the Fibre Channel cables from FC port 2, of each node are firmly set in the appropriate
ports.
b. Connect one end of a Fibre Channel cable to port 9 (if 10-port switch) or port 15 (if 16-port switch)
of Fibre Channel switch #2 after inserting a transceiver in the port. Connect the other end of the
Fibre Channel cable to Fibre Channel port 0, of RAID controller 1, of first array (array 0).
c. Working backwards from the last Fibre Channel port on Fibre Channel switch #2, connect one end
of a Fibre Channel cable to the next available Fibre Channel port on Fibre Channel switch #2 after
inserting a transceiver in the port. Connect the other end of the Fibre Channel cable to Fibre
Channel port 0, of RAID controller 1, of the second array (array 1).
d. Repeat step c for the next array.
e. Remove transceivers from any Fibre Channel ports not connected to another device or insert a plug
in the transceiver.
Each unconnected transceiver will generate connection failure notifications.
f. Connect a power cable to the switch. Then, route the AC power cable through the holes in the rack
to the back of the rack and connect the cable to a PDM.
g. Secure the Fibre Channel cables and the Ethernet cables installed in the previous steps together with
Velcro ties. Route them to the right side of the rack.
The VLS9000 system hardware installation is complete. Continue installation by configuring the identity
of each node.
NOTE: HP recommends leaving L&TT installed on the host so it is readily available if there is a need to run
diagnostics if directed by HP support.
Benefits
Integrating a VLS12000/300 into your existing storage and backup infrastructure delivers the following
benefits:
• Fast data restoration and backup performance: The HP StorageWorks 12000 Virtual Library
System EVA Gateway is a multi-node gateway solution for the EVA that easily scales in performance
and capacity to 4800 MB/sec and 1080 TB of usable storage, with hardware compression.
Accelerated deduplication retains up to 50 times more data readily available on disk.
• Cost-effective data protection and on-going management: The VLS12000 EVA Gateway
is deployed, managed, and operated just like a tape library, minimizing disruptions to your environment.
Emulations include the HP StorageWorks ESL and MSL series tape libraries, the HP 1/8 G2
Autoloader, all HP Ultrium tape drives, and the DLT7000, DLT8000, and SDLT 320 tape drives.
• Reliability: While the VLS12000 EVA Gateway contains reliable hardware featuring hot plug disk
drives, standard redundant power supplies and fans, the real reliability is for your data protection
process. Simplifying the process by which storage is shared means fewer errors occur.
• Easy operation: The VLS12000 EVA Gateway is deployed, managed, and operated just like a tape
library, minimizing disruptions to your environment. Emulations include the HP StorageWorks ESL and
MSL series tape libraries, the HP 1/8 G2 Autoloader, all HP Ultrium tape drives, and the
DLT7000, DLT8000, and SDLT 320 tape drives.
• Automigration: The HP Virtual Library Systems support automigration, which allows the VLS to move
data to a physical library or another VLS. Smart Copy will be further enhanced when HP delivers
low-bandwidth replication.
• Accelerated deduplication: The VLS12000 EVA Gateway now supports capacity licensing for
Accelerated deduplication. The data deduplication capacity LTU is licensed by the number of EVA
LUNs presented to the VLS; one license per LUN (T9709A) is required. Accelerated deduplication
retains up to 50 times more data readily available on disk.
NOTE: For more information on deduplication with the VLS12000, refer to the Data Deduplication page
found at the HP Data Storage web site: http://welcome.hp.com/country/us/en/prodserv/storage.html.
The VLS software generates a notification alert when a hardware or environmental failure is detected
or predicted. VLS notification alerts are displayed in Command View VLS, and can also be sent as email
to the addresses you specify and/or as SNMP traps to the management consoles you specify.
For more information about viewing VLS hardware status, and/or receiving VLS notification alerts by mail
or as SNMP traps, see the HP StorageWorks 12000/300 Virtual Library System user guide, Monitoring
chapter.
Redundancy
The VLS12000/300 includes some important redundancy features:
• Redundant fans
Each node includes redundant fans. If a fan fails in a node (head unit), the remaining fans run at a
faster speed, temporarily providing enough cooling.
• Redundant power supply
Each node includes a redundant power supply. With redundant power supplies, if one power supply
fails in a node, the remaining functional power supply provides enough power for the node to function.
HP recommends connecting the primary power supplies to a power source at the site separate from the
one used by the redundant power supplies.
CAUTION: Replace a failed fan or power supply as soon as possible to maximize the life expectancy of
the remaining fan(s) or power supply and to maintain redundancy.
VLS12000/300 components
A VLS12000/300 consists of at least two nodes (one primary node and between one and seven
secondary nodes) and dual LAN switches for internal inter-node connections. See the drawing of racked
nodes below. Each node contains dual processors, two dual-port FC HBAs, four 512 MB memory modules,
and two 80 GB hard drives. No external storage is included with the VLS12000/300; instead, the
gateway uses external storage in existing arrays.
Adding nodes and licenses increases the VLS12000/300 storage capacity as shown in Table 4. Adding
nodes also increases the performance. See the HP StorageWorks VLS12000/300 Virtual Library System
Quickspec on the HP website (http://h18006.www1.hp.com/products/storageworks/6000vls) for
performance data.
Table 4 VLS12000/300 capacity
NOTE: Minimum capacity for EVA LUNs is 100 GB. Ensure that all EVA LUNs attached to the Gateway
meet this requirement.
• Before installing the VLS12000 or VLS300, ensure that the external Fibre Channel infrastructure
provides either two FC switches/fabrics or two zones. This is required so that there are two data
pathways from the VLS to the EVA. Dual pathing provides path balancing and transparent path
failover on the VLS12000 and VLS300.
(Figure: dual-fabric cabling. Node 0 and Node 1 connect through Fabric 1 and Fabric 2 to EVA
controllers A and B; the controllers' cache mirror ports and FC loop switches sit behind them, and the
Command View EVA management server attaches to the network interconnection.)
NOTE: Path failure can reduce throughput for backup and restore operations.
• Balance data traffic across both controllers of the array (called A and B in Command View VLS). To do
this, ensure that the preferred path for half of the VRaids in the array is set to Path A-Failover only, and
the preferred path for the other half is set to Path B-Failover only, as in the sketch below.
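As a minimal illustration of that alternating assignment (a sketch only; the VRaid names are hypothetical):

# Alternate preferred paths so each EVA controller carries half the VRaids.
vraids = [f"VRaid{n}" for n in range(1, 9)]
for i, vraid in enumerate(vraids):
    path = "Path A-Failover only" if i % 2 == 0 else "Path B-Failover only"
    print(f"{vraid}: set preferred path to {path}")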
NOTE: HP recommends leaving L&TT installed on the host so it is readily available to run diagnostics if
directed by HP support.
HP StorageWorks 1000i Virtual Library System
The HP StorageWorks 1000i Virtual Library System (VLS1000i) is a RAID 5, serial ATA disk-based LAN
backup device that emulates standalone HP LTO2 tape drives and the HP Autoloader 1/8 with LTO2 tape
drives, allowing you to perform disk-to-virtual-tape (disk-to-disk) backups using your existing backup
applications.
The VLS1000i emulates the HP Autoloader 1/8 with LTO2 tape drives, including the tape drives
and cartridges inside the libraries. You determine the number of tape libraries a VLS1000i emulates, and
the number of tape drives and cartridges included in each tape library, to meet the needs of your
environment. You also configure the size of the virtual cartridges in your VLS1000i, which provides even
more flexibility. The VLS1000i emulates up to 6 tape libraries, 12 tape drives, and 180 cartridges.
The VLS1000i accommodates mixed IT platform and backup application environments, allowing all your
servers and backup applications to access the virtual media simultaneously. You specify which servers are
allowed to access each virtual library and tape drive you configure.
Benefits
Integrating a VLS1000i into your existing storage and backup infrastructure delivers the following benefits:
• Faster backups
The VLS1000i is optimized for backups and delivers faster performance than a simple disk-to-disk
solution. The VLS1000i emulates many more tape drives than are available in physical tape libraries,
allowing more hosts to run backups concurrently.
• Faster single file restores
A single file can be restored much faster from disk than tape.
• Lower operating costs
Fewer physical tape drives and cartridges are required because full backups to tape are eliminated.
Fewer cartridges are also needed because small backups stored on multiple virtual cartridges can be
consolidated onto one physical cartridge.
• More efficient use of storage space
Physical tape libraries cannot share storage space with other physical tape libraries, and physical
cartridges cannot share storage space with other physical cartridges. This unused storage space is
wasted.
Storage space is not wasted in a VLS, because VLS storage space is dynamically assigned as it is used.
Storage space is shared by all the libraries and cartridges configured on a VLS1000i.
• Reduced risk of data loss and aborted backups
RAID 5-based storage is more reliable than tape storage.
Aborted backups caused by tape drive mechanical failures are eliminated.
Important concepts
To understand the configuration of the backup network and how it fits into the local-area network (LAN),
review the following sections.
Internet SCSI (iSCSI) protocol
Internet SCSI (iSCSI) is a standard protocol for universal access to shared storage devices over standard,
Ethernet-based transmission control protocol/Internet protocol (TCP/IP) networks. The connection-oriented
protocol transports SCSI commands, data, and status across an IP network.
The iSCSI architecture is based on a client-server model. The client is a host system that issues requests to
read or write data. iSCSI refers to a client as an initiator. The server is a resource that receives and
executes client requests. iSCSI refers to a server as a target.
File servers, which store the programs and data files shared by users, normally play the role of server. With
the VLS1000i, the application and backup servers within your network act as clients (initiators) and the
VLS1000i acts as a server (target). The initiators can be either iSCSI software initiators or host bus
adapters (HBAs) on the server being backed up.
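Purely as a conceptual sketch of the initiator/target roles (this is not a real iSCSI implementation; the JSON command format and field names are invented for illustration, and 3260 is simply the TCP port registered for iSCSI):

import json
import socket
import threading
import time

def run_target():
    # The "target" accepts a connection, reads one command, and returns status.
    with socket.create_server(("127.0.0.1", 3260)) as srv:
        conn, _ = srv.accept()
        with conn:
            cmd = json.loads(conn.recv(4096).decode())
            conn.sendall(json.dumps({"cmd": cmd["op"], "status": "GOOD"}).encode())

threading.Thread(target=run_target, daemon=True).start()
time.sleep(0.2)  # give the target a moment to start listening

# The "initiator" issues a request and waits for the target's response.
with socket.create_connection(("127.0.0.1", 3260)) as sock:
    sock.sendall(json.dumps({"op": "READ", "lun": 0, "blocks": 8}).encode())
    print(json.loads(sock.recv(4096).decode()))  # {'cmd': 'READ', 'status': 'GOOD'}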
NOTE: The connection from the client to the tape device can be either FC or direct-attached SCSI.
NOTE: Data compression can be used, but it reduces the data transfer speed significantly.
Retention planning
Retention planning and sizing go hand in hand. How long do you need to keep data on disk? How many
full backups do you want to keep on disk? How many incremental backups? How do you want to optimize
retention times on the VLS1000i? Retention policies help you recycle virtual media. Bear the following
considerations in mind as you plan retention policies (a small sizing sketch follows the list):
• If the data’s useful life is too short to warrant backup to tape, you might choose to keep it on disk.
• Once the retention period expires, the virtual media is automatically recycled (remember that you never
remove tapes from a virtual library so you want the backup application to keep re-using the same
virtual tapes based on their retention recycling periods).
• In your backup application you should set the tape expiration dates (that is, when the tape is marked as
worn out) high because virtual media does not wear out.
• Backup-job retention time is for virtual media.
• Copy-job retention time is for physical media.
• When copying through the backup application, the virtual and physical pieces of media are tracked
separately and the retention times should be considered and set individually.
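To make the sizing side of retention planning concrete, here is a minimal back-of-the-envelope sketch; all figures are hypothetical and should be replaced with values from your own environment:

# Hypothetical sizing inputs: replace with your own measurements.
full_backup_tb = 2.0         # size of one full backup
incremental_tb = 0.2         # size of one incremental backup
fulls_on_disk = 4            # weekly fulls retained on virtual media
incrementals_on_disk = 24    # daily incrementals retained on virtual media

virtual_capacity_tb = (full_backup_tb * fulls_on_disk
                       + incremental_tb * incrementals_on_disk)
print(f"Virtual media capacity needed: {virtual_capacity_tb:.1f} TB")
# Virtual media capacity needed: 12.8 TB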
IMPORTANT: If there is more than one VLS1000i on the same subnet, HP strongly recommends setting the
bar code seed manually for each appliance to avoid bar code conflicts.
NOTE: HP recommends leaving L&TT installed on the host so it is readily available to run diagnostics if
directed by HP support.
HP dynamic deduplication
Data deduplication is a method of reducing storage needs by eliminating redundant data so that over time
only one unique instance of the data is actually retained on disk. As a result, up to 50x more backup data
can be retained in the same disk footprint.
Adding data deduplication to disk-based backup delivers a number of benefits:
• A cost effective way of keeping your backup data on disk for a number of weeks or even months. More
efficient use of disk space effectively reduces the cost-per-gigabyte of storage and the need to purchase
more disk capacity.
• Making file restores fast and easy from multiple available recovery points. By extending data retention
periods on disk, your backup data is more accessible for longer periods of time, before archiving to
tape. In this way lost or corrupt files can be quickly and easily restored from backups taken over a
longer time span.
• Ultimately, data deduplication makes the replication of backup data over low bandwidth WAN links
viable (providing offsite protection for backup data) as only changed data is sent across the connection
to a second device (either a second identical device or one that comes from this product family).
How it works
Deduplication works by examining the data stream as it arrives at the storage appliance, checking for
blocks of data that are identical, and eliminating redundant copies. If duplicate data is found, a pointer is
established to the original set of data instead of storing the duplicate blocks, thereby removing or
"deduplicating" the redundant blocks from the volume. The key is that the deduplication is done at the
block level, which removes far more redundant data than deduplication done at the file level, where
only duplicate files are removed.
Data deduplication is especially powerful when it is applied to backup, since most backup data sets have
a great deal of redundancy. The amount of redundancy will depend on the type of data being backed up,
the backup methodology and the length of time the data is retained.
Example: Backing up a large customer database that gets updated with new orders throughout the day.
With a typical backup application you would normally have to back up, and more importantly store, the
entire database with each backup (even incremental backups will store the full database again). With
block-level deduplication, you can back up the same database to the device on two successive nights
and, because the device identifies redundant blocks, only the blocks that have changed are stored. All
the redundant data is represented by pointers.
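A minimal sketch of the block-level idea described above (hash-based chunking with fixed blocks is used here purely for illustration; the HP products implement their own algorithms):

import hashlib
import os

BLOCK_SIZE = 4096
store = {}     # hash -> block data; unique blocks are stored once
backups = []   # each backup is a list of block hashes (pointers)

def backup(data):
    pointers = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:   # store only the first copy of a block
            store[digest] = block
        pointers.append(digest)   # duplicates become pointers
    backups.append(pointers)

night1 = os.urandom(BLOCK_SIZE * 100)                   # a 100-block "database"
night2 = night1[:-BLOCK_SIZE] + os.urandom(BLOCK_SIZE)  # one block changed
backup(night1)
backup(night2)
print(sum(len(p) for p in backups), "blocks referenced;", len(store), "blocks stored")
# 200 blocks referenced; 101 blocks stored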
The HP approach to deduplication - D2D and VLS
Recognizing the differing needs of the small and medium businesses versus large and enterprise data
centers, HP has selected two different deduplication technologies to match each requirement.
HP Dynamic deduplication for HP StorageWorks D2D Backup Systems - meeting the needs of smaller IT
environments with requirements for low cost solutions, smaller storage capacities, ease of use and broad
compatibility.
HP Accelerated deduplication for HP StorageWorks Virtual Library Systems (VLS) - delivering maximum
benefit for data center environments by being optimized for performance and scalability. Accelerated
deduplication takes place outside of the backup window, allowing all system resources to be focused on
completing the backup before deduplication starts. Accelerated deduplication also retains a full copy of
the latest backup so that restore times are exceptionally fast.
HP Dynamic deduplication
The HP-patented Dynamic deduplication algorithm has been designed specifically for smaller IT
environments, such as remote and branch offices and small data centers, to provide low-cost solutions
with a small footprint. It uses inline deduplication based on hash algorithms, with additional levels of error
prevention and correction to verify the integrity of data backup and restore. Importantly, and unlike some
other forms of data deduplication technology, HP Dynamic deduplication is independent of the recorded
data format and works with most of the leading backup applications.
What deduplication ratio can I expect?
The actual data deduplication ratio you can expect depends on a number of factors, including the type
of data, the backup methodology used, and the length of time you retain your data. However, assuming
a standard business data mix and extended on-disk retention (periods of more than 12 weeks), you could
expect to see:
• 20:1 capacity ratio, assuming a weekly full and daily incremental backup model
• 50:1 capacity ratio, assuming daily full backups
For more information on achieving deduplication ratios, go to: http://www.hp.com/go/deduplication
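As a quick worked example of what those ratios mean for sizing (the disk capacity here is hypothetical):

physical_tb = 9.0   # usable disk capacity in the appliance (hypothetical)
for model, ratio in [("weekly full + daily incremental", 20),
                     ("daily fulls", 50)]:
    print(f"{model}: ~{physical_tb * ratio:.0f} TB of backup data retained")
# weekly full + daily incremental: ~180 TB of backup data retained
# daily fulls: ~450 TB of backup data retained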
Compatibility
The HP D2D Backup Systems are supported on servers that use Microsoft Windows or Linux operating
systems, including HP ProLiant, HP Integrity Servers, and a variety of third-party servers.
Figure 44 Back Panel of HP StorageWorks D2D4000 Backup System
1 Power sockets
2 Network port 1 - always used for data connection
3 Network port 2 - used for data connection only if network is configured for
dual-port IP addresses
4 Management LAN port - do NOT use for data connection
5 PCI slots
Drive type     Native transfer rate (MB/sec)
Ultrium 1760   80
Ultrium 960    80
Ultrium 920    60
Ultrium 448    25
Ultrium 232    16
SDLT 600       36
WORM technology
WORM (Write Once, Read Many) storage is a data storage technology that allows information to be
written to storage media a single time and read many times, preventing accidental or intentional
alteration or erasure of the data. Driven by the recent growth of legislation in many countries, the use of
WORM storage for archiving corporate data such as financial documents, e-mails, and health records is
increasing. To meet the demands of this data growth, HP now supports WORM technology in the Ultrium
1840, Ultrium 1760, Ultrium 960, Ultrium 920, and SDLT600 tape drives. WORM tape drive technology is
supported in a variety of HP StorageWorks Tape Libraries, and is supported by many backup applications.
In addition to WORM media, the Ultrium 1840, Ultrium 1760, Ultrium 960, Ultrium 920, and SDLT600
tape drives are capable of reading and writing to standard Ultrium and SDLT media respectively. WORM
media can be mixed with other traditional media by using mixed media solutions as documented by HP.
For more information about support and compatibility visit http://www.hp.com/go/tape or
http://www.hp.com/go/ebs.
Ultrium performance
To optimize the performance of Ultrium tape drives in a Storage Area Network (SAN):
• Ensure that the source of the data to be backed up can supply the data at a minimum of 120 MB/sec
native for Ultrium 1840 drives and 80 MB/sec native for Ultrium 960 drives.
To optimize the performance of Ultrium tape drives in a UNIX environment:
• Do not use the native backup applications. UNIX tar and cpio provide poor performance.
• Use a third-party backup application.
• Structure the file system so it can make use of the concurrency feature offered in almost all UNIX
third-party backup applications.
Concurrency is an alternative way of keeping high-performance tape drives streaming: multiple data
sources are backed up simultaneously to a single tape drive, so the format on tape is an interleaving of
the data from the disks. Verify that the backup software supports concurrency; HP Data Protector, EMC
NetWorker, and Symantec NetBackup all support it. This technique can also be applied to network
backups, where the backup jobs from independent hosts are interleaved as they are passed over the
network and written to tape.
NOTE: Concurrency will increase backup performance; however, restore performance will be negatively
impacted. The files are not sequential. Rather, they are broken up and distributed across the tape.
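Conceptually, concurrency interleaves blocks from several sources into one stream, as in this sketch (a simple round-robin; real backup applications use their own proprietary on-tape formats):

from itertools import zip_longest

def interleave(streams):
    # Merge blocks from several sources into one tape stream, round-robin.
    tape = []
    for blocks in zip_longest(*streams):
        tape.extend(b for b in blocks if b is not None)
    return tape

host_a = ["A1", "A2", "A3"]
host_b = ["B1", "B2"]
host_c = ["C1", "C2", "C3", "C4"]
print(interleave([host_a, host_b, host_c]))
# ['A1', 'B1', 'C1', 'A2', 'B2', 'C2', 'A3', 'C3', 'C4']

Note how each host's blocks end up scattered along the stream; that is exactly why restores of a single host's data are slower.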
HP StorageWorks Interface Manager and Command View TL
The HP StorageWorks Interface Manager provides the first step toward automating EBS. The Interface
Manager card is a management card designed to consolidate and simplify the management of multiple
Fibre Channel interface controllers installed in the library. It also provides SAN-related diagnostics and
management for library components including interface controllers, drives, and robotics. The Interface
Manager card, in conjunction with HP StorageWorks Command View TL software, provides remote
management of the library via a serial, telnet, or web-based GUI interface.
IMPORTANT: Command View TL, the Interface Manager card, and interface controllers have specific
firmware dependencies. See the Interface Manager Firmware Required Minimum Component table in the
HP Enterprise Backup Solutions Compatibility Matrix.
NOTE: Some of the default host map settings that the Interface Manager applies to the library may not be
appropriate for every data center. Because the host mappings are automated, certain customer data
centers may need to customize the mapping for their EBS environment.
Library partitioning
Partitioning provides the ability to create separate, logical tape libraries from a single tape library. Logical
libraries (partitions) behave like a physical library to backup and restore applications. The ESL9000, ESL
E-Series, and the EML E-Series libraries all support partitioning. The advanced version of Secure Manager
in conjunction with HP StorageWorks Command View TL is required to enable and configure partitioning
in these libraries.
Mixed media support
Partitioning a library enables the use of mixed media in a library with various backup applications. A
library can consist of multiple drive and media types. Similar drive types and media can be grouped
together as one partition, with a maximum of six partitions.
NOTE: For additional information, see the HP StorageWorks Partitioning in an EBS Environment
Implementation Guide available on the HP website: http://www.hp.com/go/ebs.
NOTE: Any customizations created in Manual mode are lost when changing from Manual to Automatic
mode.
With the Secure Manager license installed, Advanced Secure Manager will be enabled. Advanced Secure
Manager allows the library administrator to grant or deny hosts access to any combination of devices in
the library. Unlike Basic Secure Manager, each host can have a unique map. For example, the master
server in a backup solution might be the only server that can control the library robotics. In this scenario, all
backup servers may be granted access to the tape drives in the library, but will be denied access to the
robotics controller. The master server would be granted access to the robotics controller and possibly one
or more tape drives.
NOTE: If the tape library robotics controller is not discovered by the Interface Manager, the discovery
process cannot complete successfully. In this case, Secure Manager will be disabled until robotics
connectivity is established. This is done because the Interface Manager must rely on the library controller to
identify the drives belonging to the library and their logical positions.
After the robotics controller is discovered, the Interface Manager issues a “read element status” command
to determine the drive configuration of the library. The Interface Manager uses the read element status data
to determine the number of drives in the library, the logical position of each drive in the library, and the
serial number for each drive. The Interface Manager can then correlate the serial numbers returned by the
robotics controller with the serial number reported by each tape drive to determine the physical location of
each tape drive in the library.
Example:
The tape drive connected to SCSI Bus 0 on IC 1 reports a serial number of “XYZ.” Also assume the robotics
controller reports that the library has eight tape drives and logical tape drive 4 has a serial number of
“XYZ.” The Interface Manager can then use the serial number “XYZ” to identify the tape drive at SCSI Bus
0, IC 1 as logical tape drive 4. If the physical location for each reported tape drive is correlated to the
logical drive number, then the Interface Manager discovery process completes successfully and Secure
Manager will be available.
NOTE: Secure Manager is also available for target devices that were previously discovered but are
currently offline. For example, if tape drive 2 was initially discovered and subsequently taken offline for
repair, Secure Manager will operate with the previous device attributes until the drive is brought back
online.
LUN FC Port 0
0 Robotics
1 Drive 1
2 Drive 2
3 AF
NOTE: Any changes to device access thereafter will follow the rules for modifying an existing map rather
than the rules for creating new maps.
Table 8 New and modified maps on a 1 FC port FC interface controller for a host that cannot access
drive 1

       NEW                      MODIFIED
LUN    FC Port 0         LUN    FC Port 0
0      Robotics          0      Robotics
1      Drive 2           1
2                        2      Drive 2
The new map did not have access to drive 1 when the map was first created. The modified map had
access to drive 1 when the map was created but was modified to remove access to it.
4. If there is more than one FC port on an FC interface controller, load-balancing algorithms are used.
Tape devices attached to a particular FC interface controller are sorted in ascending order by logical
position in the library. The first two tape devices are assigned to the first FC port at the next available
LUNs. The next two tape devices are assigned to the second FC port at the next available LUNs. If there
are more than two tape devices per FC port, the remaining tape devices are assigned in a similar
fashion, starting over at the first FC port.
Secure Manager takes the following steps when load balancing target devices across FC ports on a dual
port FC interface controller (such as an e2400-160):
1. If the robotics controller is attached to the FC interface controller, it is mapped to FC port 0 at LUN 0.
Table 9 Load balancing with robotics and four tape drives on a two FC port FC interface controller

LUN    FC Port 0    FC Port 1
0      Robotics     Drive 3
1      Drive 1      Drive 4
2      Drive 2
2. The tape devices attached to the FC interface controller are sorted in ascending order by logical
position in the library.
3. The first two tape devices are assigned to the first FC port at the next available LUNs.
Table 10 Load balancing with two tape drives on a two FC port FC interface controller

LUN    FC Port 0    FC Port 1
0      Drive 1
1      Drive 2

NOTE: The drives are both assigned to FC Port 0 so that if drives 3 and 4 are added later, the maps will
be contiguous.
4. The next two tape devices are assigned to the second FC port at the next available LUNs.
Table 12 Load balancing with four tape drives on a two FC port FC interface controller

LUN    FC Port 0    FC Port 1
0      Drive 1      Drive 3
1      Drive 2      Drive 4
5. If there are remaining tape devices, then the next two tape devices are assigned to the first FC port, the
next two to the second FC port. This is repeated until all tape devices are assigned.
Table 13 Load balancing with robotics and eight tape drives on a two FC port FC interface controller

LUN    FC Port 0    FC Port 1
0      Robotics     Drive 3
1      Drive 1      Drive 4
2      Drive 2      Drive 7
3      Drive 5      Drive 8
4      Drive 6
The above algorithms are applied to all FC interface controllers in the library.
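The two-at-a-time assignment can be sketched as follows (an illustration only; the authoritative description of the Interface Manager algorithms is in the Command View TL/Secure Manager mapping white paper):

def basic_map(drives, ports=2, robotics=True):
    # Robotics goes to port 0, LUN 0; drives are then assigned two at a
    # time per port, in ascending order of logical position.
    lun_maps = {port: {} for port in range(ports)}
    next_lun = {port: 0 for port in range(ports)}
    if robotics:
        lun_maps[0][0] = "Robotics"
        next_lun[0] = 1
    for i, drive in enumerate(sorted(drives)):
        port = (i // 2) % ports          # two devices, then the next port
        lun_maps[port][next_lun[port]] = f"Drive {drive}"
        next_lun[port] += 1
    return lun_maps

# Robotics plus eight drives on a dual-port controller (compare Table 13):
for port, luns in basic_map(range(1, 9)).items():
    print(f"FC Port {port}: {luns}")
# FC Port 0: {0: 'Robotics', 1: 'Drive 1', 2: 'Drive 2', 3: 'Drive 5', 4: 'Drive 6'}
# FC Port 1: {0: 'Drive 3', 1: 'Drive 4', 2: 'Drive 7', 3: 'Drive 8'}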
5. Maintain FC Port/LUN map after cabling change on same interface controller.
If a device’s cable is moved from one port to another on the same FC interface controller, Secure Manager
attempts to maintain the current FC Port/LUN mapping for the device.
Because devices are mapped by logical position, the Interface Manager can correct for devices that have
been cabled to different ports on the FC interface controller at power up and FC interface controller
reboot. This remapping is not available if the tape device has been moved to a different FC interface
controller. The purpose of this feature is to maintain a consistent view of the devices for the hosts connected
to the library.
6. Advanced Secure Manager map creation is done like Basic Secure Manager, but then removes devices and closes
gaps, if necessary.
Advanced Secure Manager mapping starts the same way as Basic Secure Manager LUN mapping. Then,
for each host that cannot see the entire library, devices are removed from the map, and any gaps they
leave are closed.
The bottom line is that the same rules for creating maps apply to both Basic and Advanced Secure
Manager.
• LUN numbers are only assigned to the devices the host can access.
• LUN numbers always start at 0 and are consecutive (no gaps).
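Those two rules amount to a filter-then-renumber pass over the Basic map, as in this sketch (illustrative only):

def advanced_map(basic, allowed):
    # Keep only devices the host may access, then renumber LUNs per port
    # starting at 0 with no gaps.
    result = {}
    for port, luns in basic.items():
        kept = [dev for _, dev in sorted(luns.items()) if dev in allowed]
        result[port] = {lun: dev for lun, dev in enumerate(kept)}
    return result

basic = {0: {0: "Robotics", 1: "Drive 1", 2: "Drive 2"},
         1: {0: "Drive 3", 1: "Drive 4"}}
print(advanced_map(basic, {"Robotics", "Drive 2", "Drive 4"}))
# {0: {0: 'Robotics', 1: 'Drive 2'}, 1: {0: 'Drive 4'}}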
NOTE: Any changes to device access thereafter follow the rules for modifying an existing map rather than
the rules for creating new maps.
Table 14 Advanced Secure Manager mapping step 1: Maps are created using the same rules as Basic
Secure Manager

LUN    FC Port 0    FC Port 1
1      Drive 1      Drive 4
2      Drive 2

Table 15 Advanced Secure Manager mapping step 2: Devices the host cannot access are removed

LUN    FC Port 0    FC Port 1
1      Drive 1      Drive 4
2      Drive 2

Table 16 Advanced Secure Manager mapping step 3: Remove gaps in the map

LUN    FC Port 0    FC Port 1
1      Drive 2
7. Maintain current FC Port/LUN assignments when adding devices in Advanced Secure Manager.
If an existing map is modified to add a device, previous FC Port/LUN assignments are retained in an
attempt to present a consistent device mapping to the host. The device map is not re-ordered when devices
are added.
NOTE: Devices are added to the FC Port that they would have been assigned to using Basic Secure
Manager Rules.
Table 17 Existing map before drive 1 is added

LUN    FC Port 0
0      Robotics
1      Drive 2
2      Drive 3
3      Drive 4

Table 18 Map after drive 1 is added (existing LUN assignments are retained)

LUN    FC Port 0
0      Robotics
1      Drive 2
2      Drive 3
3      Drive 4
4      Drive 1
8. If modifying an existing map to remove device access with Advanced Secure Manager, any gap made is retained.
If Advanced Secure Manager is used to remove access to a device for a host with a preexisting map, any
gap this change makes is maintained.
Table 19 1 FC port Advanced Secure Manager device access removal: Removing device
LUN FC Port 0
0 Robotics
1 Drive 1
2 Drive 2
Table 20 1 FC port Advanced Secure Manager device access removal: End result
LUN FC Port 0
0
1 Drive 1
2 Drive 2
Some operating systems have issues with non-contiguous LUN maps, so HP recommends avoiding gaps
in the LUN map if at all possible.
There are three methods of removing the gap(s) created by this process:
1. Remove and re-add all hosts using this map, and create a new map.
2. Change the mode from Automatic to Manual and back to Automatic, which clears out all customizations
(not recommended unless the number of customizations is low).
3. Add access to a device that is connected to that FC interface controller and that would normally be
mapped to that FC port; the device fills the first gap in the LUN order.
NOTE: Options 1 and 2 require the backup software to reconfigure the library. Option 3 should only
require reconfiguring the software for the new device.
9. If devices are removed and added with Advanced Secure Manager, attempts are made to not disturb the other
device mappings.
If host access is changed to both add and remove devices, efforts are made to not disturb the mappings
of the remaining devices. If possible, the newly added device fills the gap made by the removed device.
This is done to retain the LUN assignments of the other devices. A device is only added back to the FC
port that it would have been assigned to in Basic Secure Manager (that is, robotics and drives 1 and 2
will always be on FC port 0).
NOTE: When devices need to be removed and added, HP recommends removing devices first and then
adding new devices, to prevent or reduce the chance of creating gaps in LUN maps, which can cause
problems on some operating systems.
Table 21 Advanced Secure Manager device access change step 1 (1 FC port): Remove devices
LUN FC Port 0
0 Robotics
1 Drive 1
2 Drive 2
Table 22 Advanced Secure Manager device access change step 2 (1 FC port): Add devices to fill gaps if
possible
LUN FC Port 0
0 Drive 3
1 Drive 1
2 Drive 2
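The remove-then-add behavior of Tables 21 and 22 can be sketched as (illustrative only):

def remove_device(lun_map, device):
    # Removing access leaves a gap; remaining LUNs keep their numbers.
    return {lun: dev for lun, dev in lun_map.items() if dev != device}

def add_device(lun_map, device):
    # An added device fills the first gap in LUN order, if one exists.
    lun = 0
    while lun in lun_map:
        lun += 1
    return {**lun_map, lun: device}

m = {0: "Robotics", 1: "Drive 1", 2: "Drive 2"}
m = remove_device(m, "Robotics")   # LUN 0 becomes a gap (Table 21)
m = add_device(m, "Drive 3")       # Drive 3 fills LUN 0 (Table 22)
print(dict(sorted(m.items())))     # {0: 'Drive 3', 1: 'Drive 1', 2: 'Drive 2'}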
Example:
A host has access to the robotics and drives 2 and 4.
Table 23 Map from 2 FC port IC with access to the robotics and drives 2 and 4

LUN    FC Port 0    FC Port 1
0      Robotics     Drive 4
1      Drive 2
Table 26 Map of a four-drive library on a two FC port IC with Direct Backup enabled on drive 3 or 4
and AF visible

LUN    FC Port 0    FC Port 1
0      Robotics     Drive 3
1      Drive 1      Drive 4
2      Drive 2      AF
Example:
AF mapping when access to a device is added.
Table 27 LUN map where access to drive 2 is not granted and Direct Backup is enabled for drive 1
LUN FC Port 0
0 Robotics
1 Drive 1
2 AF
Table 28 LUN map after access to drive 2 is granted

LUN FC Port 0
0 Robotics
1 Drive 1
2 Drive 2
3 AF
Example:
AF mapping when access to a device is removed.
Table 29 LUN map where access to drive 2 is being removed and direct backup is enabled for drive 1
LUN FC Port 0
0 Robotics
1 Drive 1
2 Drive 2
3 AF
Table 30 AF map placement after drive 2 is removed
LUN FC Port 0
0 Robotics
1 Drive 1
2 AF
11. If an HBA has access to the robotics for a library partition, the logical robotics device is added as the next
available LUN on the IC physically connected to the robotics.
Each partition of a partitioned library has its own logical robotics device. This device is mapped to FC
port 0 of the IC that is physically connected to the robotics. The logical robotics device is first in the order
of devices for that partition, but it does not have priority over devices that have already been mapped to
a particular HBA. In other words, it has the highest priority among the new devices being added to the
map, but it does not displace any device previously added to the map.
NOTE: When partitioning is in use, only logical (or virtual) robotics devices are mapped. The physical (or
actual) robotics device will not appear in any LUN map.
Table 31 A map for an HBA with access to the robotics and drives for Partition 1 and Partition 2
LUN    FC Port 0
0      Robotics (Partition 1)
1      Drive 1
2      Drive 2
3      Robotics (Partition 2)
4      Drive 3
5      Drive 4
Partition 1 has drives 1 and 2. Partition 2 has drives 3 and 4. All drives and robotics are connected to a 1
host port IC, and the HBA was given access to Partition 1 first.
Table 32 A map for an HBA with access to the robotics on two partitions
LUN FC Port 0
0 Robotics (Partition 1)
1 Robotics (Partition 2)
The IC physically connected to the robotics has only one host port and is not connected to any drives.
NOTE: If a host has access to the robotics and drives for many partitions of a partitioned library, then the
map for the IC connected to the robotics could exceed eight LUNs. If this occurs, then ensure that the HBA,
OS, drive, and software have support for more than eight LUNs.
Table 33 A map for an HBA with access to the drives for Partition 1 and Partition 2
Partition 1 has physical drives 1 and 2. Partition 2 has physical drives 3 and 4. All drives are connected to
a two FC port IC, and the HBA was given access to Partition 1 first.
Partition 2 has physical drives 3 and 4. Physical drives 3 and 4 are connected to a two host FC port IC.
The physical robotics is connected to a different IC not represented in Table 34.
13. The order each partition is mapped depends on the order that the HBA was added to partitions.
Because the mapping occurs when each HBA is added to a partition, the order that an HBA is added to
partitions governs the order in which each partition’s devices show up in the maps for that HBA. For this
reason, HP recommends that each HBA be added to partitions in order starting with partitions containing
the lowest numbered physical drives and ending with the highest numbered physical drives.
Example:
An HBA is added to Partition 1 and then Partition 2. Partition 1 contains physical drive 1. Partition 2
contains physical drive 2. The robotics, drive 1 and drive 2 are all connected to an IC with one host port.
LUN    FC Port 0
0      Robotics (Partition 1)
1      Drive 1

LUN    FC Port 0
0      Robotics (Partition 1)
1      Drive 1
2      Robotics (Partition 2)
3      Drive 2
Example:
An HBA is added to Partition 2 and then Partition 1. Partition 1 contains physical drive 1. Partition 2
contains physical drive 2. The robotics, drive 1 and drive 2 are all connected to an IC with one host port.
LUN    FC Port 0
0      Robotics (Partition 2)
1      Drive 2

LUN    FC Port 0
0      Robotics (Partition 2)
1      Drive 2
2      Robotics (Partition 1)
3      Drive 1
Table 41 describes common symptoms relating to the Interface Manager card and how to resolve them.
Symptom: Interface Manager card does not detect one or more FC interface controllers.
• Possible cause: Interface Manager card not powered on or not in ready state.
  Resolution: Power on the library and observe the status and link LEDs. The Interface Manager must be
  at firmware I120 or higher on an ESL E-Series library, and at firmware I130 or higher if connected to
  an e2400-FC 2G.
• Possible cause: Bad network connection.
  Resolution: Verify that the Interface Manager card is properly connected to the FC interface controllers
  and that the cables are good. Use the LEDs to troubleshoot Ethernet cabling. See the HP StorageWorks
  ESL E-Series Unpacking and Installation Guide for more information.
• Possible cause: Incorrect interface controller, or controller firmware below the required minimum.
  Resolution: Make sure that the e2400-160 interface controller has lettering to the side of the ports; if
  the lettering is above or below the ports, the wrong controller type was installed, so contact the service
  provider. Update the firmware to the latest version indicated in the HP Enterprise Backup Solutions
  Compatibility Matrix, and restore the defaults on the interface controller (e2400-160 or e1200-160).
• Possible cause: Defective Interface Manager card or FC interface controller.
  Resolution: Observe the status and link LEDs. Replace the defective card or controller.
• Possible cause: Drive not powered on or not in ready state.
  Resolution: Make sure the drive is not set to off. Troubleshoot the drive.
Symptom: Command View TL does not run in the browser.
• Possible cause: Incompatible browser version or Java support not enabled.
  Resolution: Use a minimum of Microsoft Internet Explorer v6.0 SP1 or later, or Netscape Navigator
  v6.2 or later. Make sure that Java support is enabled in the browser.
• Possible cause: Java Runtime Environment (JRE) not installed.
  Resolution: Download and install the Java 2 Platform, Standard Edition v1.4.2 or later from
  http://wwws.sun.com/software/download/technologies.html.
• Possible cause: Bad network connection or network down.
  Resolution: Check all physical network connections; if the connections are good, contact the network
  administrator. Ping the management station; if pinging fails and the IP address is correct, contact the
  network administrator.
Fibre Channel interface controller and Network Storage Router
The HP StorageWorks FC interface controller and HP StorageWorks Network Storage Router (NSR) are
Fibre Channel-to-SCSI routers that enable a differential SCSI tape device to communicate with other devices
over a SAN.
Table 42 outlines the recommended maximum device connections per SCSI bus and per Fibre Channel
port. The purpose of these recommendations is to minimize SCSI issues and maximize utilization of the
Fibre Channel bandwidth.
NOTE: The Interface Manager uses custom algorithms to determine how devices are presented and
mapped. See the white paper Command View TL/Secure Manager Mapping Algorithms for additional
information.
Drive type     Drives per SCSI bus    Drives per 1 Gb FC    Drives per 2 Gb FC    Drives per 4 Gb FC
Ultrium 232    2                      2                     4                     4
Ultrium 448    1                      1                     2                     2
Ultrium 960    1                      1                     1                     2
SDLT 600       1                      1                     2                     2
NOTE: By default, the Fibre Channel port speed is set to 4 Gb/s. Any change to the Fibre Channel port
speed, such as to 2 Gb/s, must be made manually. If the speed is set incorrectly and the router is plugged
into a loop or fabric, the unit may receive framing errors (which can be found in the trace logs), and the
fiber link light will be off because of the incorrect Fibre Channel link speed.
NOTE: The router can respond to multiple target IDs (also known as Alternate Initiator ID) on a SCSI bus.
This feature is not currently supported with HP tape libraries.
Both Fibre Channel ports and SCSI buses have pre-defined maps.
There are four pre-defined maps:
• Indexed (default)
• Port 0 device map
• Auto assigned
• SCC
NOTE: If the fabric port ID of a host HBA changes, then the tape library Fibre Channel interface
controller(s) may need to be rebooted to pick up the new port ID and ensure that the proper device map is
given to the host HBA.
Indexed maps
An indexed map is initially empty.
Port 0 device maps
The Port 0 device map is used when editing and assigning oncoming hosts.
Auto assigned maps
An Auto assigned map is built dynamically and contains all of the devices found during discovery. This
map changes automatically any time the discovery process finds a change in the devices attached. This
map cannot be modified.
SCC maps
An SCC map is only available on Fibre Channel ports and contains only a single entry for LUN 0. This
LUN is a router controller LUN. Access to attached devices is managed using SCC logical unit addressing.
Buffered tape writes
This option is designed to enhance system performance by returning status on consecutive write commands
prior to the tape device receiving data. In the event that data does not transfer correctly, the router will
return a check condition on a subsequent command.
Commands other than Write are not issued until status is received for any pending write, and status is not
returned until the device completes the command. This sequence is appropriate for tasks such as file
backup or restore.
Some applications require confirmation of individual blocks being written to the medium, such as for
audit-trail tapes or log tapes. In these instances, the Buffered Tape Writes option must be disabled.
Connecting the router
When physically connecting the tape library to the router, HP strongly recommends connecting the tape
devices in sequential order: for example, connect the library controller and the first pair of tape drives
(drive0 and drive1) to the first SCSI bus, the second pair of tape drives (drive2 and drive3) to the second
SCSI bus, and so on. Connecting the devices in this manner provides a consistent view of the devices
across platforms and, should problems arise, aids in the troubleshooting process.
Network Storage Routers have limited initiators for single- and dual-port routers
The maximum number of active initiators for the HP StorageWorks Network Storage Router (NSR) is 250
on 2Gb routers with firmware 5.6.87 or newer, and on the 4Gb NSR/IFC with firmware 5.7.18 or newer.
Prior to these indicated firmware revisions, the maximum number of initiators was 128.
An initiator is any device that has logged into the router, whether or not it is currently transmitting data.
The initiator count includes hosts, switches, array controllers, and FC router ports. Each instance of an
initiator counts toward this maximum; for example, an initiator visible to two FC router ports increases the
active initiator count by two.
When the maximum number of active initiators is exceeded, the router still allows a new FC initiator to log
in by accepting PLOGI (port login) and PRLI (process login) commands. However, if a SCSI command is
sent to the router from that FC initiator, it is rejected with a Queue Full response. If commands from an FC
initiator are consistently rejected with Queue Full, examine the router environment to see whether the
number of active FC initiators exceeds the maximum of 250 on a dual FC port router, or 128 on a single
FC port router.
To prevent issues with too many active initiators logging into the NSR, limit the number of initiators by
creating an FC switch zone that has fewer than 128 initiators for a single FC port router, or fewer than
250 initiators for a dual FC port router.
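A trivial sketch of how the active initiator count adds up (host names and port visibility here are hypothetical):

# Each (initiator, visible FC router port) pair counts toward the limit.
ports_visible = {"host1": 2, "host2": 2, "host3": 1, "array_ctrl": 1}
active = sum(ports_visible.values())
limit = 250   # dual FC port router at current firmware
print(f"{active} of {limit} active initiator slots used")  # 6 of 250 ...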
• 8 Gbps Fibre Channel via 62.5 micron multi-mode fiber optic cable and short-wave SFPs: up to 50
meters per cable segment.
• 4 Gbps Fibre Channel via 50 micron multi-mode fiber optic cable and short-wave SFPs: up to 150
meters per cable segment.
• 4 Gbps Fibre Channel via 62.5 micron multi-mode fiber optic cable and short-wave SFPs: up to 70
meters per cable segment.
• 2 Gbps Fibre Channel via 50 micron multi-mode fiber optic cable and short-wave SFPs: up to 300
meters per cable segment.
• 2 Gbps Fibre Channel via 62.5 micron multi-mode fiber optic cable and short-wave SFPs: up to 150
meters per cable segment.
• 2 Gbps Fibre Channel via 9 micron single-mode fiber optic cable and long-wave SFPs: up to 10
kilometers per cable segment.
• 2 Gbps Fibre Channel via 9 micron single-mode fiber optic cable and extended-reach SFPs: up to
35 kilometers per cable segment.
• 1 Gbps Fibre Channel via 50 micron multi-mode fiber optic cable and short-wave GBICs: up to 500
meters per cable segment.
• 1 Gbps Fibre Channel via 62.5 micron multi-mode fiber optic cable and short-wave GBICs: up to
200 meters per cable segment.
• 1 Gbps Fibre Channel via 9 micron single-mode fiber optic cable and long-wave GBICs: up to 35
kilometers per cable segment, depending on the switch series used.
• 1 Gbps Fibre Channel via 9 micron single-mode fiber optic cable and very long distance GBICs: up
to 100 kilometers per cable segment.
NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix and the HP StorageWorks SAN
design guide for updates regarding support of additional interconnect types.
NOTE: See the HP StorageWorks SAN design guide for maximum supported distances across SAN.
Depending on the total length across the SAN, backup and restore speeds may vary. The longer the total
length across the SAN, the more buffering is needed to stream data without performance impacts. For
some adapters, backup and restore speeds will be slow across long connections.
HP StorageWorks 4/8 SAN Switch and HP StorageWorks 4/16 SAN
Switch: file system full resolution
The HP StorageWorks 4/8 SAN Switch and the HP StorageWorks 4/16 SAN Switch have a 118 MB root
file system that is typically over 80 percent utilized. If an issue on the switch results in a core dump, the
root file system can become 100 percent full, causing erratic switch behavior.
Run the following command on the switch to clear core files, thereby freeing space on the root file system:
supportsave -R
(Figure: cascaded SAN configuration drawing showing shelves of 18.2 GB 15K Ultra3 SCSI disks
attached through a San Switch 2/16.)
The MP router simplifies SAN design, implementation, and management through centralization and
consolidation, providing a seamless way to connect and scale across multiple SAN fabrics without the
complexity of merging them into a single large fabric.
NOTE: The MPR is only supported in EBS configurations for bridging SAN islands. Connecting a library
or host directly to an MPR is not supported.
NOTE: See the Fibre Channel HBA documentation for installation instructions for option boards.
NOTE: See the Getting the most performance from your HP StorageWorks Ultrium 960 tape drive white
paper located at http://h71028.www7.hp.com/ERC/downloads/5982-9971EN.pdf, under white papers.
Third-party Fibre Channel HBAs
Third-party HBAs, such as the Emulex LPe12002, might be supported in order to allow connectivity to EBS
in SANs that include third-party disk arrays.
For a complete listing of supported servers and hardware, see the HP Enterprise Backup Solutions
Compatibility Matrix at http://www.hp.com/go/ebs.
Important tips
• Unless zoning is configured, all servers containing a P700m controller are connected to all tape drives
automatically.
• SAS tape libraries must be connected to a port on a SAS BL switch (they cannot be connected to a SAS
port on an MSA).
• Each SAS tape drive has a single SAS port. Redundancy is not supported, so the host end of the fanout
cable will only be connected to one port on one SAS BL switch. Either SAS BL switch in the c-Class
enclosure can be used. The corresponding switch port on the other SAS BL switch can be left open, or it
can be connected to other devices.
• To use all four channels of the 3Gb SAS BL switch port, use a SAS fanout cable, which has one
mini-SAS connector on the switch end and four mini-SAS connectors on the tape drive end. The
following SAS fanout cables are approved for use with the tape library or autoloader and the SAS BL
switch:
• AN975A - HP StorageWorks 2m External Mini-SAS to 4x1 Mini-SAS Cable Kit
• AN976A - HP StorageWorks 4m External Mini-SAS to 4x1 Mini-SAS Cable Kit
The following illustration is a representation of a fanout cable:
NOTE: See the HP Direct connect shared storage for HP BladeSystem solution deployment guide for
detailed configuration information.
NOTE: This sequence is for initial start up. After the fabric is up and running, the general rule is to boot
the online and nearline devices (disks and tapes and their controllers) before booting servers. It may be
necessary to reboot servers if online or nearline devices are rebooted without the server being rebooted.
3. Nearline storage (tape library/VLS): wait for the tape library/VLS to fully initialize (this can take as
long as 20 minutes).
Increased security
The Fibre Channel fabric provides fast, reliable, and seamless information access within the SAN. Zoning
segments the fabric into zones that are comprised of selected storage devices, servers, and workstations.
Since zone members can only see other members in the same zone, access between computers and
storage can be controlled.
Optimized resources
Zoning helps to optimize IT resources in response to user demand and changing user profiles. It can be
used to logically consolidate equipment for convenience. Zoning fabric characteristics are the same as
other fabric services:
• Administration from any fabric switch
• Automatic, transparent distribution of zone definitions throughout the fabric—A single failure cannot
interrupt zoning enforcement to other SAN connections.
• Automatic service scaling with fabric size—There is no requirement to upgrade systems as switches are
added and connectivity increases.
• Automatic, transparent deployment—There is no requirement for human intervention unless the zoning
specification must change.
Component Description
Zone configuration A set of zones. When zoning is enabled, one zone configuration is in effect.
Zone A set of devices that access one another. All computers, storage, and other devices
connected to a fabric can be configured into one or more zones.
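A small sketch of the visibility rule (zone and member names are hypothetical):

zones = {"backup_zone": {"master_srv", "media_srv", "tape_lib"},
         "db_zone": {"db_srv", "array1"}}

def visible_to(member):
    # A member sees only devices that share at least one zone with it.
    seen = set()
    for members in zones.values():
        if member in members:
            seen |= members - {member}
    return seen

print(sorted(visible_to("master_srv")))  # ['media_srv', 'tape_lib']
print(sorted(visible_to("db_srv")))      # ['array1']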
4 Configuration and operating system details
Basic storage domain configurations
The basic EBS storage domain can be configured in many different ways. It can be a small configuration
with direct-attached devices (direct-attached SCSI or direct-attached Fibre Channel), or it can consist of a
heterogeneous mix of HP PA-RISC servers, HP IA-64 servers, HP AlphaServers, HP ProLiant servers,
HP ProLiant Storage Servers, Sun Solaris servers, IBM AIX servers, and other third-party servers sharing
multiple libraries and RAID array storage systems. See the HP Enterprise Backup Solutions Compatibility
Matrix located at: http://www.hp.com/go/ebs.
Figure 47 While some operating systems found in enterprise data centers might not be supported on the
storage network by EBS, these servers can still be backed up as clients over the LAN with full support.
See the ISV compatibility matrix for more information.
NOTE: While adding tape and/or library devices to a server without a system reboot might work
intermittently, it is neither recommended nor supported. A reboot is required to properly create the
device files.
NOTE: See the router user guide for instructions on Ethernet connectivity.
4. Verify that the minimum supported firmware level is installed. The firmware level is listed on the main
router Visual Manager user interface page in the PLATFORM section.
5. Verify that all tape and robotic devices in the tape library are recognized by the router. In the router
Visual Manager user interface, select Discovery from the main menu to display all devices recognized
by each Fibre Channel (FC) module and SCSI module in the router.
6. Verify the router is logged into the Fibre Channel switch. Ensure that the router logs into the switch as an
F-Port. This can be done by running a telnet session to the switch or browsing to the switch with a web
browser.
7. Set up selective storage presentation by using the FC port map settings. These maps allow you to
selectively present tape and robotic devices to hosts on the SAN. See chapter 2 for additional
information on mapping. Also refer to the FC interface controller user guide or Network Storage Router
user guide for complete instructions on creating maps that present devices to specific hosts.
At this point in the procedure:
a. The tape library is online and properly configured on the router with all devices showing as
mapped or attached.
8. After setting up the router, re-verify connectivity and performance using HP StorageWorks Library and
Tape Tools (L&TT).
Rogue applications
Rogue applications are a category of software products commonly found in SAN environments that can
interfere with the normal functioning of backup and restore operations. Rogue applications include system
management agents, monitoring software, and a wide range of tape drive and system configuration
utilities. A list of known rogue applications and the operating systems on which they are found is shown
below; this list is not exhaustive.
These applications, utilities, and commands have been shown to interfere with components in the data
path and, when run concurrently with backup or restore operations, have the potential to cause job failures
or corrupted data. For example, HBA utilities such as SAN Surfer and HBAnywhere provide the ability to
reset the Fibre Channel port(s); utilities such as HP Library and Tape Tools allow for complete device
testing, device resets, and firmware upgrades; and management agents and utilities such as HP Systems
Insight Manager and SUN Explorer poll tape devices and may cause contention for device access.
Some specific recommendations for dealing with rogue applications are listed here:
• SCSI Reserve & Release—If your backup application supports the use of SCSI reserve and release,
enable and use it. Reserve and release can prevent unwanted applications or commands from taking
control of a device.
• SAN Zoning—EBS recommends host-based SAN switch zoning. When zoning is employed, rogue
applications are much less likely to interfere with tape device operation.
• SUN Explorer—This is an optional utility that can be installed as part of the Solaris install. When
installed, Explorer runs from a cron job and queries all attached peripheral devices, including tape
devices. HP recommends that the crontab entry for Explorer be edited to allow the utility to run at times
that do not coincide with system backups. Disable the tape module of Explorer from running by
modifying the file:
/etc/opt/SUNWexplo/default/explorer
Locate the EXP_WHICH variable and modify it as follows:
EXP_WHICH="default,!tape"
The modules that Explorer runs are found in /opt/SUNWexplo/tools. To prevent Explorer from
running a module, add it to EXP_WHICH preceded with an exclamation point (!).
• HP Systems Insight Manager—Make sure that the latest versions of the Insight Manager agents are
installed on the system. Tape drive-friendly changes to the manner in which devices are polled have
been implemented (post version 7.1).
HP-UX
The configuration process for HP-UX involves:
• Upgrading essential EBS hardware components to meet the minimum firmware and device driver
requirements.
NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions, including Hardware Enablement Kits and
Quality Packs on the HP website:
http://www.hp.com/go/ebs
• Installing the minimum patch level support. Go to the following website to obtain the necessary
patches:
http://www.hp.com/support
NOTE: See the installation checklist at the end of this section to ensure all of the hardware and
software is correctly installed and configured in the SAN.
NOTE: QMH2462 adapter support will not be listed using the swlist utility; however, the current
FibrChanl-01 bundle does support the adapter.
2. The drivers stape, sctl, and schgr must all be installed in the kernel. To see if these drivers are installed,
enter the following command:
# /usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are
all installed (static state), proceed to the next section, “Configuring the SAN.”
3. Use kcmodule to install modules in the kernel. For example, to install the stape module:
# /usr/sbin/kcmodule stape=static
Enter Yes to back up the current kernel configuration file and initiate the new kernel build.
4. Reboot the server to activate the new kernel.
# cd /
# /usr/sbin/shutdown -r now
The HP-UX 11iv2 Quality Pack (QPK1123) December 2007 (B.11.23.0712.070a) and Hardware Enablement
Pack (HWEable11i) June 2007 (B.11.23.0712.070) contain required software bundles. These patches and
installation instructions are provided at the HP website
http://www.itrc.hp.com
NOTE: The QMH2462 and LPe1105 adapters support will not be listed using the swlist utility;
however, the current FibrChanl-01 and FibrChanl-02 bundles do support the adapters.
2. The drivers stape, sctl, and schgr must all be installed in the kernel. To see if these drivers are installed,
enter the following command:
# /usr/sbin/kcmodule schgr sctl stape
The following example shows output from kcmodule where the stape driver is not installed:
Module State Cause
schgr static explicit
sctl static depend
stape unused
If one or more of the above drivers is in the unused state, they must be installed in the kernel. If they are
all installed (static state), proceed to the next section, “Configuring the SAN.”
3. Use kcmodule to install modules in the kernel. For example, to install the stape module:
# /usr/sbin/kcmodule stape=static
Enter Yes to back up the current kernel configuration file and initiate the new kernel build.
4. Reboot the server to activate the new kernel.
# cd /
# /usr/sbin/shutdown -r now
The HP-UX 11iv3 Quality Pack (QPKBASE) September 2007 (B.11.31.0709.312a) and Hardware
Enablement Pack (HWEable11i) September 2007 (B.11.31.0709.312) contain required software bundles.
These patches and installation instructions are provided at the HP website:
http://www.itrc.hp.com
Physical memory    vx_ninode value
2 GB               32768
3 GB               65536
> 3 GB             131072
To determine the current value of vx_ninode, run the following at the shell prompt:
# /usr/sbin/kctune vx_ninode
To set vx_ninode to 32768, run the following command at the shell prompt:
# /usr/sbin/kctune vx_ninode=32768
NOTE: The kernel tunable parameters filecache_min and filecache_max control the amount of
physical memory that can be used for caching file data during system I/O operations. By default, these
parameters are automatically determined by the system to better balance the memory usage among file
system I/O intensive processes and other types of processes. The values of these parameters can be
lowered to allow a larger percentage of memory to be used for purposes other than file system I/O
caching. Determining whether or not to modify these parameters depends on the nature of the applications
running on the system.
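For example, to inspect the current file cache tunables and, if appropriate, lower the ceiling (a sketch only;
the value shown is illustrative, and suitable settings depend on your workload):
# /usr/sbin/kctune filecache_min filecache_max
# /usr/sbin/kctune filecache_max=2147483648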
NOTE: Some data protection products might not currently support HP-UX 11.31 persistent DSFs for tape.
See the data protection product documentation for more information.
NOTE: See the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions at http://www.hp.com/go/ebs.
See the “Installation Checklist” at the end of this section to ensure proper installation and configuration of
the hardware and software in the SAN.
CAUTION: Failure to upgrade the Storport storage driver prior to installing the HBA mini-port driver may
result in system instability.
(Table fragment: LUNs 0, 1, and 2, with a device shown as busy or off-line during discovery.)
Note that Drive 3 and Drive 4 have different Windows device names.
NOTE: Some vendor applications use device serialization and are not affected by LUN shifting.
Interop issues with Microsoft Windows persistent binding for tape LUNs
Windows Server 2003 provides the ability to enable persistence of symbolic names assigned to tape LUNs
by manually editing the Windows registry. Symbolic name persistence means that tape devices will be
assigned the same symbolic name across reboot cycles, regardless of the order in which the operating
system actually discovers the device. This feature was originally released by Microsoft as a stand-alone
patch and was later incorporated into SP1 (see http://www.microsoft.com/ and search for KB873337 for
details). The persistence registry key is as follows:
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Tape\Persistence
Persistence=1: symbolic tape names are persistent
Persistence=0: symbolic tape names are non-persistent
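As an illustration only (this sketch assumes, as shown above, that Persistence is a DWORD value under the
Tape\Persistence key; verify the exact key and value against Microsoft KB873337 before applying), a .reg
file to enable persistence might read:

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Control\Tape\Persistence]
"Persistence"=dword:00000001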
Persistence is disabled by default. When you enable persistence, symbolic tape names (also referred to as
logical tape handles) change significantly. For example, \\.\Tape0 becomes \\.\Tape2147483646.
The new symbolic tape name is not configurable. Some applications are unable to correctly recognize and
configure devices that have these longer persistent symbolic names. Applications known to have issues with
this device naming convention are all versions of HP Library and Tape Tools up to and including version
4.2 SR1a and EMC NetWorker v7.3 and later. HP L&TT is expected to release an updated version to
correct this issue later in 2007; EMC's patch schedule for NetWorker is unknown.
As a workaround, persistent binding of Fibre Channel port target IDs, enabled through the Fibre Channel
host bus adapter utilities (such as Emulex lputilnt, HBAnyware, and QLogic SAN Surfer) can provide some
benefit. Target ID binding assures that targets are presented in a consistent manner but cannot guarantee
consistent presentation of symbolic tape names.
IMPORTANT: Adding or removing tape drives from the system may cause an older driver .inf file to be
re-read, which in turn can re-enable RSM polling. If tape drives are added or removed, check the registry
for proper configuration and, if necessary, repeat step 2 above.
CAUTION: Using the Registry Editor incorrectly can cause serious, system-wide problems. Microsoft
cannot guarantee that any problems resulting from the use of Registry Editor can be solved. Use this tool at
your own risk. Back up the registry before editing.
NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions at:
http://www.hp.com/go/ebs.
To ensure correct installation and configuration of the hardware, see "Installation checklist" on page 115.
Backup software patch
Refer to your backup software vendor to determine if any updates or patches are required.
Configuring the SAN
This procedural overview provides the necessary steps to configure a Tru64 UNIX host into an EBS. Refer to
the documentation provided with each storage area network (SAN) component for additional component
setup and configuration information.
1. Prepare the required rack mounted hardware and cabling in accordance with the specifications listed
in the backup software user guide as well as the installation and support documentation for each
component in the SAN.
NOTE: Loading Console firmware from the Console firmware CD may also update the host bus adapter
(HBA) firmware. This HBA firmware may or may not be the minimum supported by EBS. Refer to the HP
Enterprise Backup Solutions Compatibility Matrix for minimum supported HBA firmware revisions.
NOTE: HBA firmware can be upgraded before or after installing Tru64 UNIX. The driver will be installed
after Tru64 UNIX is installed. Contact Global Services to obtain the most current HBA firmware and drivers.
NOTE: See the HP StorageWorks Enterprise Backup Solutions Compatibility Matrix for all current and
required hardware, software, firmware, and device driver versions at: http://www.hp.com/go/ebs.
NOTE: Step 7 of the above procedure was introduced to eliminate the need to have hp_rescan -a run
as part of /etc/rc.local (or some other boot script). In previous versions of the driver kit, executing the
hp_rescan utility was necessary to work around an intermittent issue with device discovery of SCSI-2 tape
automation products. Executing the pbl script inserts the probe-luns utility into the boot sequence and
identifies and adds SCSI-2 device strings for legacy tape products into the kernel's blacklist. The result is
that all of the supported tape libraries and drives should be discovered correctly without any additional
steps by the user.
9. Verify that the host has successfully discovered all tape drive and library robotic devices using one of
the following methods (see the example after this list):
• Review the device listing in /proc/scsi/scsi
• Review the output from the hp_rescan command
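For example, either of the following confirms that the tape drives and robot were discovered (hp_rescan is
one of the fibre utilities described below):
# cat /proc/scsi/scsi
# hp_rescan -a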
HP's fibre utilities, located in the /opt/hp/hp_fibreutils directory, are installed as part of the
driver kit and include the following:
Installation checklist
To ensure that all components on the SAN are configured properly, review the following questions:
• Are all hardware components at the minimum supported firmware revision, including: server, HBA,
Fibre Channel switch, interface controller, Interface Manager, CommandView TL, tape drives, library
robot?
• Are there any required Linux operating system patches missing (required patches are noted on the EBS
Compatibility Matrix)?
• Is the supported HBA driver loaded on the host?
• Are all tape and robotic devices mapped, configured, and presented to the host from the interface
controller or Interface Manager?
• Is the tape library online?
• Is the FC-attached tape drive logged into the Fibre Channel switch (F-port)?
• Is the interface controller logged into the Fibre Channel switch (F-port)?
• Is the host HBA correctly logged into the Fibre Channel switch (F-port)?
• If multiple Fibre Channel switches are cascaded or meshed, are all ISL ports correctly logged in?
• If using zoning on the Fibre Channel switch, is the interface controller, or tape drive, configured into the
same switch zone as the host (either by WWPN or by switch port number)?
• If using zoning on the Fibre Channel switch, has the host's zone been added to the active switch
configuration?
NOTE: Refer to Customer Advisory c00788781 for additional details on the new driver kits and their
associated installation procedure changes.
The scope of this issue includes any EBS configuration that uses a backup application which does not
implement SCSI Reserve and Release and contains at least one Linux host which has shared access to tape
devices. Backup applications known to be affected are HP Data Protector (all versions) and Legato
NetWorker prior to v7.3.
The only recommended work-around for affected applications is to not reboot Linux servers while other
hosts are running backups.
Sparse files causing long backup times with some backup applications
Some Integrity and X64 64-bit HP Servers running the Red Hat Enterprise Linux 3 operating system (or
later) may have longer than expected system backup times or appear to be stalled when backing up the
following file:
/var/log/lastlog
This file is known as a "sparse file." The sparse file may appear to be over a terabyte in size and the
backup software will take a long time to back up this file. Most backup software applications have the
capability to handle sparse files with special sparse command flags. An example of this is the "tar" utility,
which has the "--sparse" (or "-S") flag that can be used with sparse files.
If your backup application does not include support for backing up sparse files, then /var/log/lastlog
should be excluded from the backup.
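For example, GNU tar can archive the file efficiently with sparse-file handling enabled (paths and the
archive target are illustrative):
# tar -S -cvf /tmp/varlog.tar /var/log
The -S (--sparse) flag causes tar to skip the holes in /var/log/lastlog, so the file is read and stored at its
real, much smaller size.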
NOTE: This setting only applies to the Compaq Fibre Channel host bus adapter. The FCA-2214 host bus
adapter for NetWare does not require this setting and will not operate correctly if the Force FCP Response
Code bit is enabled.
NOTE: Not all third-party backup applications are supported on all hardware. Refer to the HP
Enterprise Backup Solutions Compatibility Matrix at http://www.hp.com/go/ebs.
When installing the FCA2214 host bus adapter, the following load line and option settings are included in
the server's STARTUP.NCF file:
LOAD QL2300.HAM SLOT=x /LUNS /PORTNAMES /ALLPATHS [/MAXLUNS=x]
• Where SLOT specifies the PCI slot in which the adapter is installed.
• /LUNS directs NetWare to scan for all LUNs during the load of this driver instance. Without this
parameter, NetWare will only scan for LUN 0 devices. The scanned LUN number range is 0 to (n - 1)
where n is specified by the /MAXLUNS=n option. By default, this value is set to 32.
• /PORTNAMES causes NetWare to internally track devices by Fibre Channel port name rather than
node name. This parameter is required when storage LUNs do not have a 1:1 correspondence across
port names.
• /ALLPATHS disables native failover and reports all devices on all adapter paths back to the operating
system.
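For example, a STARTUP.NCF entry for an adapter in PCI slot 2 that scans up to 64 LUNs might read (the
slot number and LUN count are illustrative):
LOAD QL2300.HAM SLOT=2 /LUNS /PORTNAMES /ALLPATHS /MAXLUNS=64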
NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions at: http://www.hp.com/go/ebs.
See "Installation checklist" on page 137 to ensure that the hardware and software in the SAN is correctly
installed and configured.
Configuring the SAN
This procedural overview provides the necessary steps to configure a Sun Solaris host into an EBS. See the
documentation provided with each storage area network (SAN) component for additional component
setup and configuration information.
Currently supported adapters for Sun Solaris include Sun, QLogic, and Emulex-branded HBAs. HP
StorageWorks EBS supports all 4Gb and 8Gb HBAs with the Sun native driver. For some models of 2Gb
HBAs, the QLogic qla and Emulex lpfc drivers are supported.
Device binding can help resolve the issues that arise when a given device target or LUN changes number.
In most cases, this can be controlled through good zoning or persistent binding. When using QLogic or
Emulex drivers, configuring for persistent binding is recommended. For the
Sun native driver, persistent binding is not necessary unless recommended by the backup application
vendor or for an environment where tape devices will be visible across multiple hosts.
For configuring persistent binding with the Sun native driver, see the Sun document Solaris SAN
Configuration and Multipathing Guide at http://docs.sun.com/app/docs/doc/820-1931.
Sun native driver configuration
1. Prepare the required rack mounted hardware and cabling in accordance with the specifications listed
in the backup software user guide as well as the installation and support documentation for each
component in the SAN.
2. For Solaris 9, download the current Sun StorEdge SAN Foundation Software (SFS) from
http://www.sun.com/storage/san. Select the following files for download:
• Install_it Script SAN 4.4.x (SAN_4.4.x_install_it.tar.Z)
• Install_it Script SAN 4.4.x Readme (README_install_it.txt)
The README document explains how to uncompress the downloaded file and execute the Install_it
Script.
3. SFS functionality is included within the Solaris 10 operating system. The Sun native SUNWqlc driver is
included with Solaris 10. For Solaris 10 01/06 or later release, SUNWemlxs and SUNWemlxu driver
packages are included. To obtain SUNWemlx packages, go to Sun’s Products Download page at
http://www.sun.com. Search for “StorageTek Enterprise Emulex Host Bus Adapter Device Driver.”
Install the appropriate patch:
• SUNWqlc on Solaris 10 SPARC, install patch 119130-33 or later
• SUNWqlc on Solaris 10 x86/64, install 119131-33 or later
• SUNWemlx on Solaris 10 SPARC, install patch 120222-31 or later
• SUNWemlx on Solaris 10 x86/64, install patch 120223-31 or later
4. Update the HBA fcode if needed using the flash-upgrade utility included in the appropriate patch.
• SG-XPCI1FC-QF2 (X6767A) and SG-XPCI2FC-QL2 Patch 114873-05 or later
• SG-XPCI2FC-QF2 (X6768A) and SG-XPCI2FC-QF2-Z Patch 114874-07 or later
• SG-XPCI1FC-EM2 and SG-XPCI2FC-EM2 Patch 121773-04 or later
• SG-XPCI1FC-QF4 (QLA2460) and SG-XPCI2FC-QF4 (QLA2462) Patch 123305-04 or later
5. Reboot the server with -r option:
#reboot -- -r
6. Use the cfgadm utility to show the HBA devices:
#cfgadm -al
7. Use the cfgadm utility to configure the HBA devices. “c2” is the HBA device in this example.
#cfgadm -c configure c2
8. Use devfsadm utility to create device files:
#devfsadm
Troubleshooting with the cfgadm utility
• Getting the status of FC devices using cfgadm:
# cfgadm -al
Example output for above command:
Ap_Id Type Receptacle Occupant Condition
c3 fc-fabric connected configured unknown
c3::100000e002229fa9 med-changer connected configured unknown
c3::100000e002429fa9 tape connected configured unknown
c3::50060e80034fc200 disk connected configured unknown
c4 fc-fabric connected configured unknown
c4::100000e0022286ec tape connected configured unknown
c4::100000e0024286ec tape connected configured unknown
c4::50060e80034fc210 disk connected configured unknown
This output shows a media changer at LUN 0 for the 100000e002229fa9 world wide name, and
tape and disk devices at LUN 0 for other world wide names. The devices are connected and have been
configured and are ready for use. “cfgadm -al -o show_FCP_dev” can be used to show the devices
for all LUNs of each Ap_Id.
• Fixing a device with an “unusable” condition:
If the condition field of a device in the cfgadm output is “unusable,” then the device is in a state such
that the server cannot use the device. This may have been caused by a hardware issue. In this case, do
the following to resolve the issue:
1. Resolve the hardware issue so the device is available to the server.
2. After the hardware issue has been resolved, use the cfgadm utility to verify device status and to
mend the status if necessary:
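For example, to unconfigure and reconfigure an unusable device and then re-check its status (a sketch using
the cfgadm forms shown in this section; the Ap_Id is illustrative):
# cfgadm -c unconfigure c3::100000e002229fa9
# cfgadm -c configure c3::100000e002229fa9
# cfgadm -al -o show_FCP_dev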
3. After installing the HBA, install the device driver. The driver comes with the HBA or can be obtained
from http://www.qlogic.com.
4. To ensure that no previous device driver was installed, at the prompt, type:
#pkginfo | grep QLA2300
If no driver is installed, a prompt is returned. If there is a driver installed, verify that it is the correct
revision by entering:
#pkginfo -l QLA2300
If the driver needs to be removed, enter:
#pkgrm <package name>
5. Install the new driver. Navigate to the directory where the driver package is located and at the prompt,
type:
#pkgadd -d ./<package name>
6. Make sure that the driver is installed. At the prompt, type:
#pkginfo -l QLA2300
7. Look at /kernel/drv/qla2300.conf (the device configuration file) to make sure the configuration is
appropriate and that Fibre Channel tape support is enabled. An example follows:
hba0-fc-tape=1;
Persistent binding can be configured by binding SCSI target IDs to the Fibre Channel world wide port
name of the router or tape device. To set up persistent binding, enable the persistent binding
configuration option. An example follows.
hba0-persistent-binding-configuration=1;
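As a sketch only, a persistent binding entry might bind the WWPN of a router or tape device to a fixed
target ID as follows (the parameter name and syntax vary between qla2300 driver releases, so always
follow the comments in qla2300.conf itself; the WWPN and target ID are illustrative):
hba0-SCSI-target-id-64-fibre-channel-port-name="100000e002229fa9";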
2. After installing the HBA, verify proper hardware installation. At the OpenBoot PROM ok prompt, type:
show-devs
If the HBA installed correctly, devices similar to the following will be displayed (the path will vary
slightly depending on your configuration).
/pci@8,700000/fibre-channel@1,1
/pci@8,700000/fibre-channel@1
Verify the HBA hardware installation in Solaris at the shell prompt by typing:
prtconf -v | grep fibre-channel
If the HBA is installed correctly, devices similar to the following are displayed:
fibre-channel (driver not attached)
fibre-channel (driver not attached)
3. Install the HBA device driver. The driver can be obtained from http://www.emulex.com.
4. To ensure that no previous device driver was installed, at the prompt, type:
#pkginfo -l lpfc
If no driver is loaded, a prompt is returned. If there is a driver installed, verify that it is the correct
revision. If the driver removal is required, enter:
#pkgrm <package name>
5. Install the new driver. Navigate to one directory level above where the driver package directory is
located and at the prompt, type:
#pkgadd -d .
Select the lpfc package.
6. Make sure that the driver is installed. At the prompt, type:
#pkginfo -l lpfc
7. Verify the HBA driver attached by typing:
#prtconf -v | grep fibre-channel
If the driver attached, devices similar to the following are displayed:
fibre-channel, instance #0
fibre-channel, instance #1
8. Look at /kernel/drv/lpfc.conf (the device configuration file) to make sure the configuration is
appropriate.
For World Wide Port Name binding, add the following line:
fcp-bind-method=2;
For FCP persistent binding, the setting fcp-bind-WWPN binds a specific World Wide Port Name to a
SCSI target ID. The following example shows two NSR FC ports zoned in to the second interface on the
HBA, with each WWPN bound to a SCSI ID:
fcp-bind-WWPN="100000e0022286dd:lpfc1t62",
"100000e002225053:lpfc1t63";
NOTE: Refer to comments within the lpfc.conf for more details on syntax when setting
fcp-bind-WWPN. Add the following to item 2 within section “Configuring Sun Servers for tape
devices on SAN”:
For LP10000 adapter:
name="st" class="scsi" target=62 lun=0;
name="st" class="scsi" target=62 lun=1;
name="st" class="scsi" target=62 lun=2;
name="st" class="scsi" target=62 lun=3;
NOTE: The information in the following examples, such as target IDs, paths, and LUNs, is for illustration
only. The specific data for your configuration may vary.
NOTE: This section applies to Solaris 9 and Solaris 10 prior to Update 5 (05/08). Configuration of the
st.conf file is no longer required with Solaris 10 Update 5 (05/08) or later. Tape devices will be
discovered automatically after a reboot.
1. Edit the st.conf file for the type of devices to be used and also for binding. The st.conf file should
already reside in the /kernel/drv directory. Many of the lines in the st.conf file are commented out.
To turn on the proper tape devices, uncomment or insert the appropriate lines in the file.
tape-config-list=
"COMPAQ  DLT8000", "Compaq DLT8000", "DLT8k-data",
"COMPAQ  SuperDLT1", "Compaq SuperDLT", "SDLT-data",
"COMPAQ  SDLT320", "Compaq SuperDLT 2", "SDLT320-data",
"HP      SDLT600", "HP SDLT600", "SDLT600-data",
"HP      Ultrium 4-SCSI", "HP Ultrium LTO 4", "LTO4-data",
"HP      Ultrium 3-SCSI", "HP Ultrium LTO 3", "LTO3-data",
"HP      Ultrium 2-SCSI", "HP Ultrium LTO 2", "LTO2-data",
"HP      Ultrium 1-SCSI", "HP Ultrium LTO 1", "LTO1-data";
NOTE: The tape-config-list is composed of a group of triplets. A triplet is composed of the Vendor ID +
Product ID, the pretty print string, and the data property name. The syntax is very important: there must be
eight characters for the vendor ID (COMPAQ or HP) before the product ID (DLT8000, SDLT600, Ultrium,
and so on). In the lines above, there are exactly two spaces between "COMPAQ" and "DLT8000", and
exactly six spaces between "HP" and "Ultrium". The order of the triplets is also important for discovery of
Ultrium tape drives. The pretty print value is displayed in the boot log /var/adm/messages for each
discovered tape drive that matches the associated vendor ID + product ID string.
Some data protection applications handle the SCSI reservation of the tape drives and others require the
operating system to do so. For a complete description of setting SCSI reservation, see the options bit
flag ST_NO_RESERVE_RELEASE on the man page for “st”.
The ST_NO_RESERVE_RELEASE flag is part of the fourth parameter in the data property name. For
LTO1-data and LTO2-data, a value of 0x9639 means the operating system handles reserve/release and
a value of 0x29639 means the application handles reserve/release. For LTO3-data and LTO-4 data, a
value of 0x18659 means the operating system handles reserve/release and a value of 0x38659
means the application handles reserve/release.
2. Define tape devices for other adapters by adding lines similar to the following to the SCSI target
definition section of the st.conf file.
Example for QLogic adapters:
name="st" class="scsi" parent="/pci@1f,4000/QLGC,qla@1"
target=64 lun=0;
name="st" class="scsi" parent="/pci@1f,4000/QLGC,qla@1"
target=64 lun=1;
NOTE: The parent is the location of the HBA in the /devices directory.
NOTE: The target can be chosen; however, it must not conflict with other target bindings in the st.conf and
sd.conf files.
3. Perform a reconfiguration reboot (reboot -- -r) on the server and verify that the new tape devices
are seen in /dev/rmt.
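For example, after the reconfiguration reboot (device names are illustrative):
# ls /dev/rmt
# mt -f /dev/rmt/0 status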
Configuring switch zoning
If zoning will be used, either by World Wide Name or by port, perform the setup after the HBA has
logged into the fabric. Refer to the Fibre Channel switch documentation for complete switch zone setup
information. Ensure that the World Wide Name (WWN) or port of the Fibre-Channel-to-SCSI bridge is in
the same zone as the WWN or port of the HBA installed in the server.
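As an illustration only, on a Brocade switch a WWN-based zone might be created and activated as
follows (the zone name, configuration name, and WWNs are placeholders; other switch vendors use
different commands):
zonecreate "ebs_tape_zone", "10:00:00:00:c9:34:5c:f9; 10:00:00:e0:02:22:9f:a9"
cfgadd "main_cfg", "ebs_tape_zone"
cfgenable "main_cfg"
cfgsave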
Installation checklist
To ensure that all components on the SAN are logged in and configured properly, review the following
questions:
• Are all hardware components at the minimum supported firmware revision, including: Server, HBA,
Fibre Channel switch, Fibre Channel to SCSI router, Interface Manager, Command View TL, tape drives,
library robot?
• Are all recommended Solaris patches installed on the host?
• Is the minimum supported HBA driver loaded on the host?
• Are all tape and robotic devices mapped, configured and presented to the host from the Fibre Channel
to SCSI router, or Interface Manager?
• Is the tape library online?
• Is the Fibre Channel to SCSI router correctly logged into the Fibre Channel switch?
• Is the host HBA correctly logged into the Fibre Channel switch?
NOTE: Refer to the HP Enterprise Backup Solutions Compatibility Matrix for all current and required
hardware, software, firmware, and device driver versions at http://www.hp.com/go/ebs.
Refer to the Quick Checklist at the end of this section to ensure proper installation and configuration of all
of the hardware and software in the SAN.
Configuring the SAN
This procedural overview provides the necessary steps to configure an AIX host into an EBS. Refer to the
documentation provided with each storage area network (SAN) component for additional component
setup and configuration information.
Prepare the required hardware and cabling in accordance with the specifications listed in chapter 2 of this
guide as well as the installation and support documentation for each component in the SAN.
IBM 6228, 6239, 5716, or 5759 HBA configuration
NOTE: See the EBS compatibility matrix concerning IBM AIX OS version support for these Host Bus
Adapters.
1. Install the latest maintenance packages for your version of AIX. This ensures that the latest drivers for the
6228/6239/5716/5759/5773/5774 HBA are installed on your system. For AIX 4.3.3, the latest
packages must be installed because the base OS does not contain drivers for the newer HBAs.
2. Install the IBM 6228/6239/5716/5759/5773/5774 HBA, and restart the server.
3. Ensure that the card is recognized. At the prompt, type:
#lsdev -Cc adapter
There is a line in the output similar to the following:
fcs0 Available 1D-08 FC Adapter
If the adapter is not recognized, check that the correct HBA driver is installed:
6228: #lslpp -L|grep devices.pci.df1000f7
6239: #lslpp -L|grep devices.pci.df1080f9
5716: #lslpp -L|grep devices.pci.df1000fa
5759: #lslpp -L|grep devices.pci.df1000fd
5773: #lslpp -L|grep devices.pciex.df1000fe
5774: #lslpp -L|grep devices.pciex.df1000fe
There are lines in the output for lslpp similar to the following for a 6239 HBA:
devices.pci.df1080f9.diag 5.1.0.1 C F PCI-X FC Adapter Device
devices.pci.df1080f9.rte 5.1.0.1 C F PCI-X FC Adapter Device
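4. Check the adapter firmware level and vital product data. One way to produce a listing like the
following is the lscfg command (fcs0 is the adapter name reported by lsdev in step 3):
#lscfg -vl fcs0
Output similar to the following is displayed: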
Part Number.................00P4295
EC Level....................A
Serial Number...............1E3180B22A
Manufacturer................001E
FRU Number..................00P4297
Device Specific.(ZM)........3
Network Address.............10000000C9345CF9
ROS Level and ID............02E01871
Device Specific.(Z0)........2003806D
Device Specific.(Z1)........00000000
Device Specific.(Z2)........00000000
Device Specific.(Z3)........03000909
Device Specific.(Z4)........FF601231
Device Specific.(Z5)........02E01871
Device Specific.(Z6)........06631871
Device Specific.(Z7)........07631871
Device Specific.(Z8)........20000000C9345CF9
Device Specific.(Z9)........HS1.81X1
Device Specific.(ZA)........H1D1.81X1
Device Specific.(ZB)........H2D1.81X1
Device Specific.(YL)........U0.1-P2-I2/Q1
5. After the HBA has successfully logged into the SAN fabric and the necessary zoning is configured,
configure the HBA and devices within the fabric. At the prompt, type:
#cfgmgr -l <devicename> -v
Within the command, <devicename> is the name from the output of the lsdev command in step 3,
such as fcs0.
6. To ensure all tape device files are available, at the prompt, type:
#lsdev -HCc tape
7. By default, AIX creates tape devices with a fixed block length. To change the devices to have variable
block lengths, at the prompt, type:
#chdev -l <tapedevice> -a block_size=0
Configuration of the tape devices (where tape devices are rmt0, rmt1, and so on) is complete.
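To confirm that a device now uses variable block lengths (rmt0 is illustrative):
#lsattr -El rmt0 -a block_size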
NOTE: HP tape drives (SDLT and LTO) use the IBM host tape driver. When properly configured, the tape
device appears in the device listing.
(Table fragment; column headings not recoverable.)
Application data to backup: Yes (cold) | Yes (hot/cold) | Yes (cold) | Yes (hot/cold) | VMware = Yes (VCB); Hyper-V = Yes (VSS); HPVM = Yes (ZDB)
Large number of VMs to backup: Not suggested | Not suggested | Not suggested | Not suggested | Yes
NOTE: ESX server and VMs do not support an FC or iSCSI connected tape device. A proxy server can be
used to manage SAN or iSCSI devices.
• VMware Consolidated Backup (VCB) offloads backup responsibility from ESX servers to a dedicated
backup proxy (or proxies). This reduces the load on ESX servers. VCB provides full-image backup and
restore capabilities for all virtual machines and file-based backups for virtual machines running the
Microsoft Windows operating systems.
• For complete details on Zero Downtime Backup of an Oracle database running on a VMware virtual
machine, see the HP StorageWorks Oracle on VMware ZDB Solution implementation guides at
www.hp.com/go/ebs, under the EBS whitepapers link.
NOTE: VMware datastores residing on HP StorageWorks EVA storage arrays should use the "Windows
host profile mode" for the VCB proxy server.
HPVM is an application installed on an HP-UX server that allows multiple, unmodified operating systems
(HP-UX, Windows, and Linux) and their applications to run in virtual machines that share physical
resources.
The HP Virtual Server Environment (VSE) for HP Integrity provides an automated infrastructure that can
adapt in seconds with mission-critical reliability. HP VSE allows you to optimize server utilization in real
time by creating virtual servers that can automatically grow and shrink based on business priorities and
service.
NOTE: The HP Integrity VM host and VMs do support FC SAN connected tape and Virtual Library
Systems (VLS) devices.
• Off-host backups using HP storage array hardware mirroring or snapshots can be used to shorten
backup windows and off-load resources required for backup.
• VMs can also be set up for LAN backup in the same way as a regular client or media host. See the
backup software documentation for details.
• For complete details on Virtual Machine backup and recovery including Off-host, LAN-based and local
media server backups, see HP StorageWorks EBS Solutions Guide for HP Integrity Virtual Machine
Backup at www.hp.com/go/ebs under the EBS whitepapers link.
Features:
• Delivers fault monitoring, inventory reporting, and configuration management for ProLiant, Integrity,
and HP 9000 systems as well as HP StorageWorks MSA, EVA, XP arrays and various third-party arrays
via a web-based GUI or command line.
• Provides base-level management of HP clients and printers. Can be extended with HP client
management software and HP Web JetAdmin for more advanced management capabilities.
• Delivers notification of and automates response to pre-failure or failure conditions through automated
event handling.
• Facilitates secure, scheduled execution of OS commands, batch files, and custom or off-the-shelf
applications across groups of Windows, Linux, or HP-UX systems.
• Enables centralized updates of BIOS, drivers, and agents across multiple ProLiant servers with system
software version control.
• Enables secure management through support for SSL, SSH, OS authentication, and role-based security.
• Installs on Windows, HP-UX, and Linux.
Key benefits:
HP Systems Insight Manager, HP's unified server-storage management tool, helps maximize IT staff
efficiency and hardware platform availability for small and large server deployments alike. It is designed
for end-user setup, and its modular architecture enables systems administrators to plug in additional
functionality as needed.
• Unified server and storage management
• Improves efficiency of the IT staff
• Extensibility through plug-in applications
• Integrate new technologies in response to changing conditions
Known issues
Management agents poll SAN devices to monitor their health: Inquiry commands report information such
as make, model, and serial number, while Log Sense commands report other health statistics. Due to the
multi-hosted nature of backups on SANs, these polling agents can cause the tape controller or tape and
robotic devices to become unstable due to the flooding of these commands coming from all of the hosts
that can see the devices. HP Fibre Channel interface controllers have an inquiry caching feature that
minimizes the impact of Inquiry commands in backup/restore environments.
Log Sense commands can still cause issues on SAN backups, as is the case with HP Systems Insight
Manager versions 6.4, 7.0, and 7.1. Insight Manager uses a timeout for Log Sense commands that can
sometimes be exceeded in a normal backup environment. Side effects of this behavior may include a
robotic device becoming unresponsive, poor performance, or a tape controller reboot.
Version 7.2 and later of the Insight Management agents will begin to use Inquiry commands for polling as
opposed to Log Sense commands. Utilities have also been made available for versions 7.0 and 7.1. The HP
Utility for Disabling Fibre Agent Tape Support can be used to allow these backup jobs to complete without
being overrun with Log Sense commands. This utility disables the Fibre Agent Tape support, which disables
the monitoring of the Fibre Attached Tape Library. Deploying this utility will disable only Fibre Attached
Tape Library monitoring, leaving the monitoring of all other devices and peripherals by the Storage Agents
unaffected. The HP Utility for Disabling Fibre Agent Tape Support is available in SoftPaq SP25792 at the
following URL:
ftp://ftp.compaq.com/pub/softpaq/sp25501-26000/SP25792.EXE
NOTE: Current versions of the HP management agents also include an option to disable Fibre Agent
Tape Support.
Recommendations
Be aware of the management applications that run in backup environments as they may issue commands
to the tape and robotic devices for polling or monitoring purposes, and adversely impact backup jobs in
progress. Sometimes these applications or agents can be running on the server as part of the installed
operating system. If these agents are running in the backup environment and they are not needed, then
disable them. If they must be run, then limit the agent to one or two servers that see the nearline storage device.
Sometimes it is not necessary for the agents to poll nearline devices, as they are already monitored by the
backup application. In most cases, the backup application will remotely monitor the backup/restore
environment. Refer to your management agent software updates for more information on how to manage
nearline device polling.
FIPS compliance
The Federal Information Processing Standard (FIPS) 140-2 standards are the U.S. government standards for
the protection of cryptographic modules. Cryptographic modules are the elements of an encryption product
in which the algorithms that encrypt the data are maintained. There are currently four levels of security
defined in the standard; the higher the level, the more stringent the security.
EBS recommends that encryption products be certified at no lower than FIPS 140-2 Level 2, which requires
that physical tampering of the cryptographic module leave visible evidence, and requires role-based
authentication of users.
Role-based authentication allows for differing levels of security for user accounts and can also include
quorum-based security, in which a set number of users are required to validate the execution of a particular
task.
Tape devices can also be made visible to the operating system through multiple paths, but no automated
multi-path failover capability is currently available. Only a single path at a time can be used between any
host and tape device or tape library controller, including LTO4 drives, regardless of the number of FC ports
(see Figure 53).
Multi-path disk array configurations typically use a special driver on the host. This driver recognizes two
views of the same disk array, presents only one device image to the host, and provides an automatic
failover capability between them. This type of functionality does not exist for tape and would need to be
developed either in a device driver or within the backup application for a redundant-path capability to be
present.
Error recovery provides an additional hurdle. When a disk I/O such as a read or write to a logical block
fails, the host can simply retry the I/O. In this case, the device driver switches over to a different path when
the first attempt fails and the whole process is transparent to the application writing or reading the data.
NOTE: Detailed instructions on how to manually configure LUN maps within the library controllers are in
the Partitioning in an EBS Environment v.2 implementation guide, located under EBS Whitepapers &
Implementation Guides at http://www.hp.com/go/ebs.
It is possible to balance I/O across multiple paths or SANs to your tape library, provided there is only one
path to any single device.
NOTE: When connecting one tape library to multiple SANs, use controllers with multiple Fibre Channel
host ports, such as the NSR M2402 or the e2400-160. Connecting a tape drive or robotic device to more
than one controller is not supported.
HP-UX: Secure Path (11i v1, 11i v2), PV-LINKS (11i v1, 11i v2), Native Multipath (11i v3)
Red Hat and SUSE Linux: QLogic Native Multi-path, Emulex MultiPulse, HP Device Mapper
The sharing of disk and tape on the same pair of HBAs in a multi-path environment is fully supported with
the following exceptions/caveats:
• In Windows 2003 MSCS cluster environments, sharing of disk and tape on the same pair of HBAs
requires StorPort miniport drivers.
• For AIX environments running Data Protector, tape devices must be isolated to their own HBA port.
• Regardless of operating system environment, multi-path to tape is not supported (only one of the two
configured HBAs can access the tape devices).
Specific ISV application support is detailed in the HP Enterprise Backup Solutions Compatibility Matrix.
Any special support considerations will be footnoted.
Clustering
A Fibre Channel cluster in the EBS with data protection software consists of two servers and storage that
can be distributed between two or more distant computer rooms interconnected by Fibre Channel links. A
Fibre Channel cluster topology can be considered as an extension of a local SCSI Cluster where all the
parallel SCSI shared buses are replaced by extended serial SCSI shared buses using Fibre Channel
switches.
Highlights
• Communications between computers and storage units use the new high-speed Fibre Channel standard
to carry SCSI commands and data over fiber optic links.
• Storage cabinets contain Fibre Channel disks and Fibre Channel components to connect to the SAN.
Benefits
• Computers and storage can be located in different rooms; the distance between them can extend up to 10 km.
• High-availability — Full hardware redundancy to ensure that there is no single point of failure.
• Electrical insulation — The cluster can be split between two electrically independent areas.
Backup for cluster configurations may be deployed using either separate switches and HBAs or common
switches and HBAs. However, these configurations do not provide a failover path for tape or tape libraries.
To use separate switches, the configuration requires installing an additional HBA in each server, and a
separate switch, as shown in the following diagram. Again, this option provides better performance for
applications with large storage and/or short backup window requirements.
Figure 56 Cluster configuration with separate switches for disk and tape
Figure 57 Cluster configuration with a common HBA for disk and tape
NOTE: For Microsoft Windows 2000 and Windows 2003 using the SCSIport driver, Microsoft does not
recommend the sharing of disk and tape devices on the same Fibre Channel host bus adapter; however,
HP has tested and certified the sharing of disk and tape, in a Microsoft Cluster Server, with their supported
HBAs. See the HP Enterprise Backup Solutions Compatibility Matrix for a listing of Supported HBAs with
Windows 2000/2003. For Windows 2003 servers using the StorPort driver, the sharing of disk and tape
is supported by Microsoft and HP.
HP-UX MC/ServiceGuard
Backup for MC/ServiceGuard configurations may be deployed using standard backup software, such as
HP Data Protector or Symantec NetBackup without installing and configuring Advanced Tape Services
(ATS). In this case, the backup software instead of ATS provides all backup functionality including sharing
and failover. This is the only option for MC/SG configurations participating in a multi-cluster or
heterogeneous SAN environment.
The following steps are used to evaluate the throughput of a complex SAN infrastructure:
Hardware
Table 50 shows the operating systems and the specific HBAs used in each server.
Table 50 Hardware
HP ProLiant DL380 G3: Red Hat Enterprise Linux 4.0 (32-bit); FCA2214DC (2 Gb/s)
HP ProLiant DL380 G4: Red Hat Enterprise Linux 4.0 (64-bit); QLE2462 (4 Gb/s)
HP StorageWorks ESL E-Series Tape Library with four HP Ultrium 960 (LTO3) tape drives: Library firmware 4.10; LTO3 drive firmware L26W; 2 connections at 4 Gb/s
Performance tools
HP provides free diagnostic tools for testing system performance. These tools isolate major component
bottlenecks and are available at the following websites:
• http://www.hp.com/support/tapetools
• http://www.hp.com/support/pat
HPCreateData: Measures WRITE performance of a server to disk directly from memory, independent of
the tape drive. HPCreateData also creates definable data of known size, structure, and compressibility to
enable backups and restores to be easily benchmarked using a consistent set of data. (HP-UX, Windows,
Linux, and Solaris)
HPReadData: Measures the READ performance of a server from disk, independent of the tape drive.
(HP-UX, Windows, Linux, and Solaris)
Dataset 2: XP / 30 GB LUN / 10 GB data; mixed file sizes 4K-128MB / 1:1; RAID 5 (3+1)
Dataset 3: EVA / 30 GB LUN / 10 GB data; mixed file sizes 4K-128MB / 1:1; RAID 5
Dataset 4: XP / 30 GB LUN / 10 GB data; mixed file sizes 4K-128MB / 1:1; RAID 5 (7+1)
Dataset 5: XP and EVA / 30 GB LUN / 10 GB data; 2G or larger / 2:1; RAID 5
• Linux performed poorest when using a block size of 64Kb. (See Figure 61.)
• Different block sizes had a minimal effect on HP-UX and Windows. (See Figure 61.)
• The Windows Fibre Channel host bus adapter (HBA) configuration can impact performance. (See note
below.)
NOTE: During tests on a Windows server with an Emulex HBA, it was determined that the default
Windows registry value MaximumSGList for Emulex HBAs was set too low. The
MaximumSGList value controls the size of the data transfer length, which was 128 KB at the
default setting. Increasing the value of MaximumSGList from 21 (hex) to 81 (hex) resulted in a
data transfer length of 512 KB and much better performance.
CAUTION: Editing the Windows Registry incorrectly can cause serious, system-wide problems.
Microsoft cannot guarantee that any problems resulting from editing the Registry can be solved.
Back up the registry before editing.
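As a sketch only, the change described in the note above can be captured in a .reg file similar to the
following (the service key name depends on the installed Emulex miniport driver, so verify the exact path
on your system before applying):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\<Emulex driver name>\Parameters\Device]
"MaximumSGList"=dword:00000081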
(Figure: tape throughput in MB/s, scale 0.00 to 160.00, versus block size in bytes: 32768, 65536, and
131072; plotted series include HP-UX 11.23.)
• Confirmed poor tape performance when using 64Kb block sizes on Linux. (See Figure 62.)
• Because reads are not buffered, tape READs performed 30-35% slower than tape WRITEs. (See Figure 63.)
Figure 62 Tape READ throughput
NOTE: The datasets created represent a cross section of user data, with both uncompressed and
compressed data, different RAID levels on the disk arrays, and different file sizes. (See Table 53
on page 163.)
• File size and file count have a major impact on performance. (See Figure 64.)
Figure 64 shows how file systems with hundreds of thousands of small files spend an extraordinary
amount of time opening and closing the files while writing or reading from disk. The result is very poor
backup and restore performance with file sizes less than 512K. Windows has a much higher file system
overhead compared to HP-UX and Linux.
• Striped (RAID5) versus mirrored (RAID1) data on a disk array has mixed results. (See Figure 65.)
The impact of striped data versus mirrored data varied depending on the type of operation. Striped
data tended to perform better for disk reads, which resulted in better backup performance; however,
mirrored data tended to perform better for disk writes, which resulted in better restore performance.
• A comparison of XP striped sets showed that RAID5 7+1P performed significantly better than RAID5
3+1P. (See Figure 66.)
Figure 64 EVA RAID5 Write Throughput
NOTE: The datasets used in this test were created in "3. Evaluate the disk subsystem's WRITE
performance" on page 166, and are now being read from disk. (See Table 53 on page 163.)
Ultrium 960/1760: 160; 2.2 GHz; 1.6 GHz; 1.4 GHz; 1.1 GHz; 512; 40; 27
Ultrium 920: 120; 1.6 GHz; 1.2 GHz; 1 GHz; 850 MHz; 384; 30; 20
SDLT 600: 72; 1 GHz; 733 MHz; 650 MHz; 525 MHz; 256; 18; 12
Ultrium 460: 60; 850 MHz; 633 MHz; 550 MHz; 433 MHz; 224; 15; 10
Ultrium 448: 48; 700 MHz; 500 MHz; 433 MHz; 350 MHz; 192; 12; 8
Ultrium 232: 32; 475 MHz; 350 MHz; 300 MHz; 233 MHz; 128; 8; 6
• SAN bandwidth:
• Does the SAN have enough bandwidth to move the data?
SAN switches have tools for measuring performance. These tools can be used to ensure that the SAN
has the needed bandwidth.
• Disk subsystem limitations:
Perhaps the disk subsystem is too busy or not configured optimally. See the disk subsystem
documentation for methods of improving performance.
• Multi-streaming:
This chapter did not include tests that send multiple data streams in parallel to single or multiple tape
drives. Multi-streaming can significantly improve backup performance, provided that the server, SAN
infrastructure, tape devices, and disk subsystem can support the streams.
NOTE: Frequent firmware image updates are released on the Internet. For optimal performance, HP
recommends updating your system periodically with the latest device firmware.
NOTE: HP recommends performing diagnostic tests on your tape drive before requesting a replacement.
NOTE: This chapter does not replace the detailed documentation that is available for L&TT
troubleshooting.
NOTE: This test does not write to the tape, so the contents of the tape are safe. Setting the tape to
write-protect is an option, but not necessary.
Windows
Installing L&TT
• Find the latest version of L&TT for your OS at www.hp.com/support/tapetools.
• Run the install package (note: upgrading from version 4.0 or higher automatically uninstalls the
previous version).
• L&TT is ready to run.
Running L&TT
• Run the L&TT executable and follow the device scan messages.
• Select the device and verify that L&TT can locate it.
Checking and updating FW revision
Check whether the latest firmware is on the drive:
1. Select drive in L&TT.
2. Select the FW button.
3. Select the Local Firmware Files tab.
4. Select Get Files from Web button.
5. Check that your firmware is up to date on all local devices indicated.
If it is not, load the firmware onto your server and upgrade your drive.
6. Use the download button to download the latest firmware to your server.
7. Reselect your drive.
8. Select your drive to be upgraded.
9. Click the Start Update button.
Wait for update to complete. Do not turn off your drive at this point. The drive LEDs will show the
update in progress.
10. When complete, you can reselect your drive and continue.
Checking installation
1. Run L&TT from the start menu. Start->Programs->HP StorageWorks Library and Tape
Tools->HP L&TT Installation Check. Wait for device selection screen (you will see several
initialization screens as the devices are located).
2. Select the device you want to check and click Start Verification.
NOTE: The system tests are data safe. The backup test is read-only and the restore test creates new data
files on the system. Data on the system is not overwritten.
NOTE: The disk subsystem performance tests are located under Sys Perf on the main GUI.
The system performance backup Pre-test: Tests disk READs.
The system performance restore Pre-test: Tests disk WRITEs.
Checking media
1. Run L&TT.
2. Select the drive.
3. Select the Test icon.
4. Select the drive.
5. Select Media Analysis test from the test group pull-down. The test defaults to five minutes of data
reading, so use the options if you want to do more than that.
6. Click Start Test and follow the instructions. Have the tape ready that you want to check. Note that this
tape will not be written to; the data is safe (though you can also set the tape to write-protect, if desired).
When the test has completed, the results can be found under the Test Results tab.
NOTE: The Tru64 tar filename uses the letter o in place of zeroes. For L&TT 4.2, the filename is
hp_ltt42.tar.
NOTE: For Linux, the L&TT installer verifies that the operating system you are installing on is supported. If
the Linux distribution or release is unsupported, the install script displays a message indicating an
installation failure and lists the supported operating systems.
6. After the software is successfully installed, enter the following commands to remove the /tmp/ltt
directory and its contents:
cd /tmp
rm -rf ltt
rm -rf install_hpltt
NOTE: For more information regarding firmware upgrades, generating a support ticket, checking
performance, and/or using utilities in HP-UX, Tru64, or Linux, see the chapter concerning command line
functionality in the latest L&TT user guide, available under documentation at
http://www.hp.com/support/.
HP technical support
Telephone numbers for worldwide technical support are listed on the HP support website:
http://www.hp.com/support/.
Collect the following information before calling:
• Technical support registration number (if applicable)
• Product serial numbers
• Product model names and numbers
• Applicable error messages
• Operating system type and revision level
• Detailed, specific questions
For continuous quality improvement, calls may be recorded or monitored.
HP strongly recommends that customers sign up online using the Subscriber's choice website:
http://www.hp.com/go/e-updates.
• Subscribing to this service provides you with e-mail updates on the latest product enhancements, newest
versions of drivers, and firmware documentation updates as well as instant access to numerous other
product resources.
After signing up, you can quickly locate your products by selecting Business support and then Storage
under Product Category.
More information
This chapter is brief and is aimed at providing you with the most useful information about troubleshooting
with L&TT.
More detailed information is available on the hp.com website in the following two areas:
• L&TT-specific information. From the L&TT website, www.hp.com/support/tapetools, follow the link to
Technical Support & Documentation. The most comprehensive and easiest-to-use document is the L&TT
support chapter, a Windows help file that you can download from that page.
• Product-specific troubleshooting. From the hp.com page, select Support & Troubleshooting and
then enter your product into the search form.
B
backup and recovery of Virtual Machines 143
buffered tape writes 95

C
cables
  described 17
clustering 156
Command View TL 77
components
  listed 17
configuration
  basic storage domain 105
  nearline 108
  settings, FC controller 93
conventions
  document 9
  text symbols 10

D
D2D2T backup 70
data compression ratio 76
data protection software
  described 17
  focus 13
device
  connections, recommended 93
Disabling RSM polling
  LTO tape driver 119
  SDLT tape driver 119
discovery mode 94
disk-to-disk-to-tape backup 70
document
  conventions 9
  prerequisites 9
  related documentation 9
drive clusters 21

E
EBS
  described 13
  Multi-Protocol Router 97
  solution steps 13
  support of failover 158
EML E-Series tape library

F
Fibre Channel
  connecting to switches 96
  fabric 103
  HBAs 98
  Interface Controller
    described 17, 93
  Interface Manager 17, 77
    discovery 79
    troubleshooting 90
  port configuration 93
  switched fabric configuration 93
  tape controller, connecting 95
file compression ratio 76

H
HBA
  described 17
  installing 98
  performance 98
  third-party 99
help, obtaining 10, 11
host bus adapter
  PCI Fibre Channel for the Server 98
host device configuration 94
HP
  authorized reseller 11
  storage website 11
  Subscriber's choice website 10, 179
  technical support 10
HP-UX, configuring 110

I
IBM AIX 139
indexed maps 95
installing
  HBA 98

K
known issues
  management agents 147
  NAS 123

L
LED

emulation 70
important concepts 69
iSCSI protocol 69
RAID 70
retention planning 71
VLS12000/300
benefits 64
components 65
redundancy 65
system status monitoring 64
VLS6000
features 44
setting bar code 44
VLS6105
cabling 48
rack order 45
VLS6109
cabling 48
rack order 45
VLS6200
disk array rack mounting order 45
VLS6218
cabling 48
VLS6227
cabling 48
VLS6500
disk array rack mounting order 45
VLS6510
cabling 48
VLS6518
cabling 48
VLS6600
cabling 49
disk array rack mounting order 46
VLS6840
cabling 50
rack order 47
VLS6870
cabling 50
rack order 47
VLS9000
benefits 52
components 52
features 52
installing cables 54
described 103
optimizing resources 103
overlapping 103
security 103

W
warning
rack stability 10
websites
Command View TL 91
HP storage 11
HP Subscriber’s choice 10, 179
WORM technology 75
Z
zoning
benefits 103
components 104