IBM Spectrum Protect 5.1 Configuration Guide
Version 5.1
Note:
Before you use this information and the product it supports, read the information in “Notices” on page
83.
Chapter 1. Introduction......................................................................................... 1
Workload simulation tool results...............................................................................................................61
Appendix F. Troubleshooting................................................................................ 79
Appendix G. Accessibility.................................................................................... 81
Notices................................................................................................................83
Index.................................................................................................................. 87
About this document
This information is intended to facilitate the deployment of an IBM Spectrum Protect server by using
detailed hardware specifications to build a system and automated scripts to configure the software. To
complete the tasks, you must have an understanding of IBM Spectrum Protect and scripting.
Overview
The following roadmap lists the main tasks that you must complete to deploy a server:
1. Determine the size of the configuration that you want to implement.
2. Review the requirements and prerequisites for the server system.
3. Set up the hardware by using detailed blueprint specifications for system layout.
4. Configure the hardware and install the Red Hat Enterprise Linux x86-64 operating system.
5. Prepare storage for IBM Spectrum Protect.
6. Run the IBM Spectrum Protect workload simulation tool to verify that your configuration is
functioning properly.
7. Install the IBM Spectrum Protect backup-archive client.
8. Install a licensed version of the IBM Spectrum Protect server.
9. Run the Blueprint configuration script to validate your hardware configuration, and then configure the
server.
10. Complete post-configuration steps to begin managing and monitoring your server environment.
The daily ingestion rate is the amount of data that you back up each day. The daily ingestion needs to be
completed in a backup window that leaves enough time remaining in the day to complete maintenance
tasks. For optimum performance, split the tasks of backing up and archiving client data, and performing
server data maintenance into separate time windows. The daily ingestion amounts in Table 1 on page 3
are based on test results with 128 MB sized objects, which are used by IBM Spectrum Protect for Virtual
Environments, and assume a backup window of eight hours. The daily ingestion amount is stated as a range
because backup throughput, and the time that is required to complete maintenance tasks, vary based on
workload.
If a server is used to both accept backup data, and receive replicated data from other servers, more
planning is needed. Any data that is received through replication must be considered as part of the
daily backup amount. For example, a server that receives 25 TB of new backup data and 15 TB of
new replication data daily has a total ingestion rate of 40 TB per day. Optionally, backup data and data
received through replication can be placed in separate directory container storage pools.
Remember: If you plan to create two replication copies of the backup data, consider this requirement
when you select the size of the server. The daily amount of backup data must be decreased to reduce the
amount of time that is required to back up data. This compensates for the additional time that is needed
to create the second replication copy.
Not every workload can achieve the maximum amount in the range for daily backups. The range is a
continuum, and placement within the range depends on several factors:
Major factors
• Average object size. Workloads with smaller average object sizes, such as those that are common
with file server backups, typically have smaller backup throughputs. If the average object size is less
than 128 KB, daily backup amounts are likely to fall in the lower 25% of the range. If the average
object size is larger, for example, 512 KB or more, backup throughputs are greater.
Total managed data is the amount of data that is protected. This amount includes all versions. A
range is provided because data processing responds differently to data deduplication and compression,
depending on the type of data that is backed up. The smaller number in the range represents the physical
capacity of the IBM Spectrum Protect storage pool. Although the use of inline compression does not
result in additional growth of the IBM Spectrum Protect database, compression might result in the ability
to store more data in the same amount of storage pool space. In this way, the amount of total managed
data can increase, causing more database space to be used.
To estimate the total managed data for your environment, you must have the following information:
• The amount of client data (the front-end data amount) that will be protected
• The number of days that backup data must be retained
• An estimate of the daily change percentage
• The backup model that is used for a client type, for example, incremental-forever, full daily, or full
periodic
If you are unsure of your workload characteristics, use the middle of the range for planning purposes.
You can calculate the total managed data for different types of clients in groups and then add the group
results.
Client types with incremental-forever backup operations
Use the following formula to estimate the total managed data:
4 IBM Spectrum Protect: Blueprint and Server Automated Configuration for Linux x86
Frontend + (Frontend * changerate * (retention - 1))
For example, if you back up 100 TB of front-end data, use a 30-day retention period, and have a 5%
change rate, calculate your total managed data as shown:
For example, if you back up 10 TB of front-end data, use a 30-day retention period, and have a 3%
change rate, calculate your total managed data as shown:
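The two examples can be worked through with a small helper function; this is an illustrative sketch that follows the formula above, not part of the product:

```shell
# Total managed data = frontend + frontend * changerate * (retention - 1)
tmd() {
  awk -v f="$1" -v c="$2" -v r="$3" \
      'BEGIN { printf "%.1f\n", f + f * c * (r - 1) }'
}
tmd 100 0.05 30   # 100 TB front end, 5% daily change, 30-day retention: 245.0 TB
tmd 10 0.03 30    # 10 TB front end, 3% daily change, 30-day retention: 18.7 TB
```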
To efficiently maintain periodic copies of your data to meet long-term retention requirements, you can use
the retention set feature. Retention sets are created from existing backups without requiring data to be
redundantly sent to the IBM Spectrum Protect server. Retention sets can either be created in-place by
maintaining the existing backups for multiple retention requirements, or with copies made to tape media.
In-place retention sets will increase the amount of total managed data requiring additional storage pool
and database space. Retention set copies will require space in a retention pool, but have a very minimal
impact to database space.
Hardware requirements
You must acquire hardware that is based on scale size requirements. You can choose equivalent or better
components than what are listed.
The following topics list the hardware requirements for an extra small, small, medium, or large
configuration. The tables contain detailed descriptions, including part numbers and quantities for IBM®
components that are used in the storage configuration blueprints.
The system memory sizes that are provided are recommendations for optimal performance. They are not
minimum requirements. Memory recommendations account for using both data deduplication and node
replication with a database that is near maximum capacity. Some workloads can tolerate smaller amounts
of memory. When node replication is not used, the memory recommendations can be reduced by 25%.
The hardware specifications that are provided are current at the time of publishing. Part substitutions
might be required, depending on hardware availability over time. Be cautious if you plan to substitute
a smaller quantity of larger drives, particularly for the database. A smaller quantity of drives might not
provide comparable performance.
IBM FlashSystem storage systems are designed to provide a simple, high-performance solution for
managing block-based storage. For this reason, FlashSystem storage systems are suited for use by
the IBM Spectrum Protect server for both the database and storage pool. For more information about
FlashSystem features, see IBM Flash Storage family.
Note: The IBM FlashSystem 92 drive expansion racks require more rack depth than other disk expansion
options. Review the product specifications for rack requirements to make sure racks that support the
required depth are available.
Server and network
• Requirements: four virtual processor cores, 1.7 GHz or faster; 24 GB RAM; 1 Gb or 10 Gb Ethernet
• Blueprint component: VMware ESXi Version 6.7 or 7.0
• Detailed description: virtual machine with a virtual hardware level of 13 or newer; VMware Tools must
be installed; 4-core virtual CPU; 24 GB virtual RAM
Disks for storage
• Requirements: Virtual disks can be assigned either as RDM disks or as virtual disks. Virtual disks
must be thickly provisioned, and VMware snapshots should not be taken of the virtual disks.
• Blueprint component: When using virtual disks, create the virtual disks for the operating system,
database, and storage pools in different VMware datastores.
• Detailed description:
  • Operating system disk: size 90 GB, quantity 1
  • Database: size 100 GB, quantity 2
  • Active log: size 30 GB, quantity 1
  • Archive log: size 250 GB, quantity 1
  • Database backup: size 1000 GB, quantity 1
  • Storage pool: size 5000 GB, quantity 2
Hardware requirements for small systems
You must acquire hardware that is based on scale size requirements. You can choose equivalent or better
components than what are listed.
Server references are provided using Lenovo ThinkSystem SR650 servers. Equivalent x86_64 servers
from other manufacturers can be substituted:
• For Lenovo product information, see Lenovo ThinkSystem SR650 Rack Server. For hardware
requirements, see Table 3 on page 7.
Table 3. Hardware requirements for a small system that uses a Lenovo server
Server and network
• Requirements: 16 processor cores, 1.7 GHz or faster; 64 GB RAM; 10 Gb Ethernet; 8 Gb or 16 Gb Fibre
Channel adapter
• Blueprint component: Lenovo ThinkSystem SR650
• Detailed description:
  • Lenovo ThinkSystem SR650: quantity 1, part number 7X06CT01W
  • 8-core Intel Xeon Bronze 3206 1.9 GHz processor: quantity 2, part number B7N3
  • 8 GB TruDDR4 2933 MHz memory: quantity 8, part number B4H1
  • Mellanox ConnectX-4 Lx 10/25GbE 2-port PCIe Ethernet adapter: quantity 1, part number AUAJ
Disks for storage
• Requirements: 16 Gb host interface; database and active log disks: 800 GB SSD flash drives; storage
pool disks: 4 TB NL-SAS
• Blueprint component: IBM FlashSystem 5015 Control
• Detailed description:
  • IBM FlashSystem 5015 SFF Control: quantity 1, part number 2072-2N4
  • 16 Gb Fibre Channel adapter pair: quantity 1, part number ALBB
  • V5000E cache upgrade: quantity 1, part number ALGA
  • 800 GB 3DWPD 2.5-inch flash drive: quantity 4, part number AL8A
Table 4. Hardware requirements for a medium system that uses a Lenovo server
Server and network
• Requirements: 20 processor cores, 2.2 GHz or faster; 192 GB RAM; 10 Gb Ethernet; 8 Gb or 16 Gb Fibre
Channel adapter
• Blueprint component: Lenovo ThinkSystem SR650
• Detailed description:
  • Lenovo ThinkSystem SR650: quantity 1, part number 7X06CT01W
  • 10-core Intel Xeon Silver 4210 2.2 GHz processor: quantity 2, part number B4HS
  • 16 GB TruDDR4 2933 MHz memory: quantity 12, part number AUNC
  • Mellanox ConnectX-4 Lx 10/25GbE 2-port PCIe Ethernet adapter: quantity 1, part number AUAJ
Disks for storage
• Requirements: 16 Gb host interface; database and active log disks: 1.92 TB SSD; storage pool, archive
log, and database backup disks: 6 TB NL-SAS
• Blueprint component: IBM FlashSystem 5035 Control
• Detailed description:
  • IBM FlashSystem 5035 SFF Control: quantity 1, part number 2072-3N4
  • 16 Gb Fibre Channel adapter pair: quantity 1, part number ALBB
  • V5000E cache upgrade: quantity 1, part number ALGA
  • 1.92 TB 2.5-inch flash drive: quantity 6, part number AL80
  • 5000 HD large form-factor (LFF) expansion: quantity 1, part number 2072-92G
Table 5. Hardware requirements for a large system that uses a Lenovo server (continued)
Disks for storage
• Requirements: database and active log disks: 1.92 TB NVMe flash drives; storage pool, archive log, and
database backup disks: 8 TB NL-SAS drives
• Blueprint component: IBM FlashSystem 5200
• Detailed description:
  • IBM FlashSystem 5200 NVMe Control Enclosure: quantity 1, part number 4662-6H2
  • 16 Gb FC 4-port adapter cards (pair): quantity 1, part number ALBJ
  • IBM 512 GB Base Cache: quantity 1, part number ALG1
  • 1.92 TB 2.5-inch NVMe flash drive: quantity 9, part number AGT2
  • IBM FlashSystem 5200 High Density Expansion Enclosure: quantity 2, part number 4662-92G
1. Two of the three 300 GB internal hard disks are configured in a RAID 1 pair, and the third drive is
assigned as a spare. If a spare is not needed based on business requirements, the system can be
configured with only two drives.
Large system
Table 6. Hardware requirements for a large system that uses IBM Elastic Storage System
Storage system
• Requirements: storage pool disks: 10 TB Enterprise HDD; database disks: 3.84 TB NVMe flash drives
• Blueprint component: IBM Elastic Storage System model 5000 SL6 and model 3200
• Detailed description:
  • IBM Elastic Storage System: quantity 1
  • Data server: quantity 2, part number 5105-22E
  • Management server: quantity 1, part number 5105-22E
  • Storage expansion: quantity 6, part number 5147-092
  • 10 TB Enterprise HDD: quantity 550, part number AJNX
Software requirements
You must install the Linux operating system and the IBM Spectrum Protect server and backup-archive
client.
The following versions are required:
• Red Hat Enterprise Linux x86_64, Version 7.8 or later, or Red Hat Enterprise Linux x86_64, Version 8.5
or later.
• IBM Spectrum Protect V8.1.12 or later backup-archive client.
• A licensed version of IBM Spectrum Protect is required to run the Blueprint configuration script. To
obtain critical fixes, install IBM Spectrum Protect V8.1.14.100 or later. RHEL 8 support is available
starting with IBM Spectrum Protect V8.1.11. At the time of publication, the latest level of IBM
Spectrum Protect was V8.1.16.
• The Blueprint configuration script V5.1 or later.
Planning worksheets
Use the planning worksheets to record values that you use when you complete the steps to set up your
system and then configure the IBM Spectrum Protect server. The preferred method is to use the default
values that are listed in the worksheets.
Default values in the following tables correspond to the default values that are used by the Blueprint
configuration script to configure the server. By using these values to create your file systems and
directories, you can accept all defaults for the configuration when you run the script. If you create
directories or plan to use values that do not match the defaults, you must manually enter those values for
the configuration.
Table 7. Values needed for preconfiguration
Directory for the server instance
• Default: /home/tsminst1/tsminst1
• Minimum space: 100 GB
• Notes: If you change the value for the server instance directory from the default, modify the IBM Db2®
instance owner ID in Table 8 on page 12 as well.
Directories for the database
• Defaults: /tsminst1/TSMdbspace00, /tsminst1/TSMdbspace01, /tsminst1/TSMdbspace02, and so on
• Minimum total space for all directories: extra small: at least 200 GB; small: at least 1 TB; medium: at
least 2 TB; large: at least 4 TB
• Notes: Create a minimum number of file systems for the database, depending on the size of your
system: extra small: at least 1 file system; small: at least 4 file systems; medium: at least 4 file
systems; large: at least 8 file systems
Directories for storage
• Defaults: /tsminst1/TSMfile00, /tsminst1/TSMfile01, /tsminst1/TSMfile02, /tsminst1/TSMfile03, and so on
• Minimum total space for all directories: extra small: at least 10 TB; small: at least 38 TB; medium: at
least 180 TB; large: at least 500 TB
• Notes: Create a minimum number of file systems for storage, depending on the size of your system:
extra small: at least 2 file systems; small: at least 2 file systems; medium: at least 10 file systems;
large: at least 30 file systems
Directories for database backup
• Defaults: /tsminst1/TSMbkup00, /tsminst1/TSMbkup01, /tsminst1/TSMbkup02, /tsminst1/TSMbkup03, and so on
• Minimum total space for all directories: extra small: at least 1 TB; small: at least 3 TB; medium: at
least 10 TB; large: at least 16 TB
• Notes: Create a minimum number of file systems for backing up the database, depending on the size of
your system: extra small: at least 1 file system; small: at least 2 file systems; medium: at least 3 file
systems; large: at least 3 file systems. The first database backup directory is also used for the archive
log failover directory and a second copy of the volume history and device configuration files.
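As a sketch, the default directory layout for a small system could be staged with a script like the following. The BASE variable is an illustrative assumption (set BASE=/ to match the blueprint paths), and in practice each directory would be the mount point of a dedicated file system rather than a plain directory:

```shell
# Create the default instance, database, storage, and database backup
# directories for a small system (4 database, 2 storage, 2 backup).
BASE="${BASE:-/tmp/tsmdemo}"
mkdir -p "$BASE/home/tsminst1/tsminst1"
for i in 00 01 02 03; do
  mkdir -p "$BASE/tsminst1/TSMdbspace$i"
done
for i in 00 01; do
  mkdir -p "$BASE/tsminst1/TSMfile$i" "$BASE/tsminst1/TSMbkup$i"
done
```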
Use Table 8 on page 12 when you run the Blueprint configuration script to configure the server. The
preferred method is to use the default values, except where noted.
Table 8. Values needed for the server configuration
Db2 instance owner ID
• Default: tsminst1
• Notes: If you changed the value for the server instance directory in Table 7 on page 11 from the
default, modify the value for the Db2 instance owner ID as well.
Db2 instance owner password
• Default: none. The user is required to select a value for the instance owner password. Ensure that you
record this value in a secure location.
Server password
• Default: none. The user is required to select a value for the server password. Ensure that you record
this value in a secure location.
Administrator ID password
• Default: none. The user is required to select a value for the administrator password. Ensure that you
record this value in a secure location.
Schedule start time
• Default: 22:00
• Notes: The default schedule start time begins the client workload phase, which is predominantly the
client backup and archive activities. During the client workload phase, server resources support client
operations. These operations are usually completed during the nightly schedule window. Schedules for
server maintenance operations are defined to begin 10 hours after the start of the client backup
window.
Planning worksheets for server configurations
Table 9. Values needed for preconfiguration
Chapter 3. Storage configuration blueprints
After you acquire hardware for the scale of server that you want to build, you must prepare your storage to
be used with IBM Spectrum Protect. Configuration blueprints provide detailed specifications for storage
layout. Use them as a map when you set up and configure your hardware.
Specifications in “Hardware requirements” on page 5 and the default values in the “Planning worksheets”
on page 10 were used to construct the blueprints for small, medium, and large systems. If you deviate
from those specifications, you must account for any changes when you configure your storage.
Note: The IBM FlashSystem configurations implement fully-allocated volumes that do not use hardware
data reduction techniques including compression and deduplication. The IBM Spectrum Protect software
will perform the data reduction, and redundantly performing these tasks in the storage system will result
in performance problems.
If you are configuring a system with IBM Elastic Storage System, see “IBM Elastic Storage System” on
page 21.
Distributed arrays
You can use the distributed arrays feature with NL-SAS drives to achieve faster drive rebuild times in case
of a disk failure. FlashSystem distributed arrays, which contain 4 - 128 drives, also contain rebuild areas
that are used to maintain redundancy after a drive fails. The distributed configuration can reduce rebuild
times and decrease the exposure of volumes to the extra workload of recovering redundancy. If you plan
to use the 92-drive FlashSystem expansions, the preferred method is to create two 46-drive distributed
RAID 6 arrays per expansion.
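As a sketch, such an array can be created with the IBM Spectrum Virtualize command line; the drive class ID, stripe width, rebuild-area count, and pool name below are illustrative assumptions that vary by system:

```shell
# Create one 46-drive distributed RAID 6 array in storage pool pool0;
# repeat once per 92-drive expansion to get two arrays per enclosure.
mkdistributedarray -level raid6 -driveclass 0 -drivecount 46 \
    -stripewidth 12 -rebuildareas 1 pool0
```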
If you are using a disk system that does not support distributed arrays, you must use traditional
storage arrays. For instructions about configuring traditional storage arrays, see the Blueprint and Server
Automated Configuration, Version 2 Release 3 guide for your operating system at the IBM Spectrum
Protect Blueprints website.
Tip: Earlier versions of the blueprints are available at the bottom of the blueprint web page.
Usage: IBM Spectrum Protect server component that uses part of the physical disk.
Small FlashSystem configuration
A small-scale system is based on IBM FlashSystem 5015 storage. One dual control enclosure and two
expansion enclosures contain IBM Spectrum Protect data.
Logical layout
Figure 2 on page 17 shows the small system layout and how server and storage hardware is connected
to clients. A single cluster and I/O group are used in this configuration. The small system configuration
was tested with 8 Gb Fibre Channel connections made directly from the host to the FlashSystem
5015 system without a SAN switch. The following image depicts a configuration that uses a Lenovo
ThinkSystem SR650 server.
Storage configuration
Table 13 on page 17 and Table 14 on page 18 show the detailed layout for each IBM Spectrum Protect
storage requirement on a small system.
Table 13. MDisk configuration. Columns: server storage requirement; disk type; disk quantity; hot spare
coverage; RAID type; RAID array quantity; usable size; suggested MDisk group and array names; usage.
Table 14. Fully allocated volume configuration. Columns: server storage requirement; volume name;
quantity; uses MDisk group; size; intended server mount point; usage.
Logical layout
Figure 3 on page 18 shows the medium system layout and how server and storage hardware is
connected to clients. A single cluster and I/O group are used. The medium system configuration was
tested by using a SAN switch with 16 Gb Fibre Channel connections and two bonded 10 Gb Ethernet
connections. The image depicts a configuration that uses a Lenovo ThinkSystem SR650 server.
The tables show multiple distributed arrays that are members of the same FlashSystem storage pool.
Alternatively, you can split the arrays into separate storage pools.
Storage configuration
Table 15 on page 19 and Table 16 on page 19 show the detailed layouts for MDisk and volume
configurations on a medium system. The following array configuration requires the default FlashSystem
memory allocation for RAID to be increased, as described in Step “2” on page 68.
Table 15. MDisk configuration. Columns: server storage requirement; disk type; disk quantity; hot spare
coverage; RAID type; RAID array quantity; usable size; suggested MDisk group and array names; usage.
Table 16. Fully allocated volume configuration. Columns: server storage requirement; volume name;
quantity; uses MDisk group; size; intended server mount point; usage.
Logical layout
Figure 4 on page 20 shows the large system layout and how server and storage hardware is connected
to clients. Testing for the large system configuration was completed by using a SAN switch with four 16
Gb Fibre Channel connections and four bonded 25 Gb Ethernet connections. The following image depicts
a configuration that uses a Lenovo ThinkSystem SR650 server.
The tables show multiple distributed arrays that are members of the same FlashSystem storage pool.
Alternatively, you can split the arrays into separate storage pools.
Storage configuration
Table 17 on page 20 and Table 18 on page 21 show the detailed layouts for MDisk and volume
configurations on a large system. To allocate arrays across 184 drives, the memory that is available for
RAIDs must be increased to 125 MB, as described in Step “2” on page 68.
Table 17. MDisk configuration. Columns: server storage requirement; disk type; disk quantity; hot spare
coverage; RAID type; RAID array quantity; usable size; suggested MDisk group and array names; usage.
Table 18. Fully allocated volume configuration
Storage configuration
Some configuration steps are completed at the factory and by IBM services so that the system will be
ready for you to provision storage as single file systems from each disk system to be shared by multiple
IBM Spectrum Protect servers. These configuration steps include hardware installation and cabling,
software installation on the storage nodes, and configuration of the IBM Elastic Storage System cluster
and recovery groups.
For more information about IBM Elastic Storage System, see the online product documentation.
Chapter 4. Setting up the system
You must set up hardware and preconfigure the system before you run the IBM Spectrum Protect
Blueprint configuration script.
Procedure
1. Configure your storage hardware according to the blueprint specifications and manufacturer
instructions. Follow the instructions in “Step 1: Set up and configure hardware” on page 23.
2. Install the Linux operating system on the server. Follow the instructions in “Step 2: Install the
operating system” on page 25.
3. IBM FlashSystem storage: Configure multipath I/O for disk storage devices. Follow the instructions in
“Step 3: IBM FlashSystem Storage: Configure multipath I/O” on page 28.
4. IBM FlashSystem Storage: Create file systems for IBM Spectrum Protect. Follow the instructions in
“Step 4: IBM FlashSystem Storage: Configure file systems for IBM Spectrum Protect” on page 29.
5. IBM Elastic Storage System: Configure the IBM Elastic Storage System. Follow the instructions in
“Step 5: IBM Elastic Storage System: Configuring the system” on page 31.
6. Test system performance with the IBM Spectrum Protect workload simulation tool, [Link].
Follow the instructions in “Step 6: Test system performance” on page 34.
7. Install the IBM Spectrum Protect backup-archive client. Follow the instructions in “Step 7: Install the
IBM Spectrum Protect backup-archive client” on page 37.
8. Install the IBM Spectrum Protect license and server. Follow the instructions in “Step 8: Install the IBM
Spectrum Protect server” on page 38.
Procedure
1. Connect your hardware according to manufacturer instructions. For optimal system performance, use
at least 8 Gb SAN fabric for connections. If you are using a SAN switch, ensure that it is capable of 8,
16, or 32 Gb connection speeds.
• For server SAN cabling, use both Fibre Channel connection ports in the dual-port adapters
for optimal throughput. Use all four ports in the two dual-port adapters on large systems. All
configurations should support a Fibre Channel connection directly to storage or to a SAN switch.
• For storage subsystem SAN cabling, connect at least two cables to each storage host controller. For a
large blueprint, at least four host ports should be cabled on the storage system.
2. Check for system BIOS updates from the server vendor and apply any suggested changes.
3. Configure the disk system.
To configure an IBM FlashSystem disk system, complete the following steps.
cat /sys/class/fc_host/host*/port_name
0x10000090fa49009e
0x10000090fa49009f
0x10000090fa3d8f12
0x10000090fa3d8f13
If your host is unable to see any devices from the storage system, it might be necessary to disable
virtualization on one or more of the host ports on the IBM FlashSystem.
4. If you attach IBM FlashSystem and IBM Spectrum Protect servers to a SAN fabric, create zones to
ensure that specific Fibre Channel ports on the IBM Spectrum Protect server can communicate with
specific IBM FlashSystem host ports. During testing, the following guidelines were followed:
a. A separate zone was created for each Fibre Channel port on the IBM Spectrum Protect server so
that each zone contained no more than one server port.
b. Each zone contained one IBM FlashSystem host port from each node canister.
Before you create zones, review the following examples for medium and large systems. The examples
are appropriate for a single fabric environment in which the host and disk subsystems are attached to a
single switch.
Medium system
a. On the IBM Spectrum Protect server, both Fibre Channel ports on the dual port Fibre Channel
adapter are cabled and are referred to as ha1p1 and ha1p2.
b. Two of the host ports on the IBM FlashSystem server are cabled (one from each node canister)
and are referred to as n1p1 and n2p1.
c. Two zones are created with the following members:
Large system
a. On the IBM Spectrum Protect server, all four Fibre Channel ports across the two dual port
adapters are cabled. The ports are referred to as ha1p1, ha1p2, ha2p1, and ha2p2.
b. Four of the host ports on the IBM FlashSystem server are cabled (two from each node canister)
and are referred to as n1p1, n1p2, n2p1, and n2p2.
c. Four zones are created with the following members:
For additional guidelines about achieving optimal performance and redundancy, see the SAN
configuration and zoning rules summary in IBM Documentation.
Procedure
1. Install Red Hat Enterprise Linux Version 7.8 or later or Version 8.5 or later, according to the
manufacturer instructions.
Important: Alternatively, you can also choose to install the following operating systems on the server
system:
• SUSE Linux Enterprise Server 15 or later version
• Ubuntu 18.04 LTS or later version
Obtain a bootable DVD or .ISO image that contains Red Hat Enterprise Linux at a supported version
and start your system from this media. See the following guidance for installation options. If an item is
not mentioned in the following list, leave the default selection.
a) After you start the operating system installation media, choose Install or upgrade an existing
system from the menu.
b) On the Welcome screen, select Test this media & install Red Hat Enterprise Linux 8.x.
c) Select your language and keyboard preferences.
d) Select your location to set the correct timezone.
e) Select Software Selection and then on the next screen, select Server with GUI.
f) From the installation summary page, click Installation Destination and verify the following items:
• The local 300 GB disk is selected as the installation target.
• Under Other Storage Options, Automatically configure partitioning is selected.
Click Done.
g) Click Begin Installation.
After the installation starts, set the root password for your root user account.
After the installation is completed, restart the system and log in as the root user. Issue the df
command to verify your basic partitioning.
For example, on a test system, the initial partitioning produced the following result:
[root@tvapp02]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/rhel-root 50G 3.0G 48G 6% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 92K 32G 1% /dev/shm
tmpfs 32G 8.8M 32G 1% /run
2. Configure your TCP/IP settings according to the operating system installation instructions.
For optimal throughput and reliability, consider bonding multiple network ports together. Bond two
ports for a medium system and four ports for a large system. This can be accomplished by creating a
Link Aggregation Control Protocol (LACP) network connection, which aggregates several subordinate
ports into a single logical connection. The preferred method is to use a bond mode of 802.3ad,
miimon setting of 100, and a xmit_hash_policy setting of layer3+4.
Restriction: To use an LACP network connection, you must have a network switch that supports LACP.
For additional instructions about configuring bonded network connections with Red Hat Enterprise
Linux Version 7, see Create a Channel Bonding Interface.
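The bonding settings described above can also be sketched with NetworkManager's nmcli tool; the interface names (ens1f0, ens1f1) and connection names below are assumptions for illustration:

```shell
# Create an LACP (802.3ad) bond with the preferred miimon and hash policy,
# then attach two physical ports as members and bring the bond up.
nmcli con add type bond con-name bond0 ifname bond0 \
    bond.options "mode=802.3ad,miimon=100,xmit_hash_policy=layer3+4"
nmcli con add type ethernet con-name bond0-port1 ifname ens1f0 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname ens1f1 master bond0
nmcli con up bond0
```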
3. Open the /etc/hosts file and complete the following actions:
• Update the file to include the IP address and host name for the server. For example:
• Verify that the file contains an entry for localhost with an address of [Link]. For example:
[Link] localhost
4. Install components that are required for the server installation. Complete the following steps to create
a Yellowdog Updater Modified (YUM) repository and install the prerequisite packages.
a) Mount your Red Hat Enterprise Linux installation DVD to a system directory. For example, to mount
it to the /mnt directory, issue the following command:
cd /etc/yum/repos.d
For RHEL 8:
cd /etc/[Link].d
ls [Link]
mv [Link] [Link]
vi rhel78_dvd.repo
g) Add the following lines to the new repo file. The baseurl parameter specifies your directory mount
point:
26 IBM Spectrum Protect: Blueprint and Server Automated Configuration for Linux x86
[rhel78_dvd]
name=DVD Redhat Enterprise Linux 7.8
baseurl=[Link]
enabled=1
gpgcheck=0
For RHEL 8:
[InstallMedia-BaseOS]
name=Red Hat Enterprise Linux 8.2.0
mediaid=None
metadata_expire=-1
gpgcheck=0
cost=500
enabled=1
baseurl=[Link]
[InstallMedia-AppStream]
name=Red Hat Enterprise Linux 8.2.0
mediaid=None
metadata_expire=-1
gpgcheck=0
cost=500
enabled=1
baseurl=[Link]
5. When the software installation is complete, you can restore the original YUM repository values by
completing the following steps:
a) Unmount the Red Hat Enterprise Linux installation DVD by issuing the following command:
umount /mnt
cd /etc/yum.repos.d
mv rhel78_dvd.repo rhel78_dvd.[Link]
mv [Link] [Link]
6. Open firewall ports to communicate with the server. Complete the following steps:
a) Determine the zone that is used by the network interface. The zone is public, by default.
Issue the following command:
# firewall-cmd --get-active-zones
public
interfaces: ens4f0
b) To use the default port address for communications with the server, open TCP/IP port 1500 in the
Linux firewall.
Issue the following commands:
firewall-cmd --permanent --zone=public --add-port=1500/tcp
firewall-cmd --reload
7. On certain systems, particularly those with more than two CPU sockets, long performance-degrading
pauses have been observed when the NUMA daemon attempts to optimize memory placement. To
avoid this problem, disable the NUMA service.
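On systems that run the numad service under systemd, this might be done as follows (a sketch; service names can vary by distribution level):

```shell
# Sketch: stop the NUMA daemon now and prevent it from starting at boot
systemctl stop numad
systemctl disable numad
```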
Procedure
1. Edit the /etc/[Link] file to enable multipathing for Linux hosts.
If the [Link] file does not exist, you can create it by issuing the following command:
mpathconf --enable
The following parameters were set in [Link] for testing on an IBM FlashSystem storage
system:
defaults {
user_friendly_names no
}
devices {
device {
vendor "IBM "
product "2145"
path_grouping_policy group_by_prio
user_friendly_names no
path_selector "round-robin 0"
prio "alua"
path_checker "tur"
failback "immediate"
no_path_retry 5
rr_weight uniform
rr_min_io_rq "1"
dev_loss_tmo 120
}
}
3. Increase the SCSI timeout for better handling of path failures. For a persistent change, edit the
file /etc/sysconfig/grub and add the following to the GRUB_CMDLINE_LINUX line:
scsi_mod.inq_timeout=70
Then rebuild the GRUB configuration by issuing the following command:
grub2-mkconfig -o /etc/[Link]
Also, run the following command for an immediate change in addition to the GRUB update:
4. To verify that disks are visible to the operating system and are managed by multipath, issue the
following command:
multipath -l
5. Ensure that each device is listed and that it has as many paths as you expect. You can use size and
device ID information to identify which disks are listed.
For example, the following output shows that a 2 TB disk has two path groups and four active paths.
The 2 TB size confirms that the disk corresponds to a pool file system. Use part of the long device ID
number (12, in this example) to search for the volume on the disk-system management interface.
a) If needed, correct disk LUN host assignments and force a bus rescan.
For example:
You can also restart the system to rescan disk LUN host assignments.
b) Confirm that disks are now available for multipath I/O by reissuing the multipath -l command.
6. Use the multipath output to identify and list device IDs for each disk device.
For example, the device ID for your 2 TB disk is 36005076802810c509800000000000012.
Save the list of device IDs to use in the next step.
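Collecting the device IDs can be scripted. The following sketch assumes the common `multipath -l` output format, in which each map line begins with the WWID; the sample output is fabricated for illustration:

```shell
# Sketch: extract device IDs (WWIDs) from `multipath -l` output.
# Assumes each multipath map line starts with a 33-character WWID.
list_wwids() {
    grep -Eo '^3[0-9a-f]{32}'
}

# Example with captured output (the second disk is hypothetical):
sample='36005076802810c509800000000000012 dm-4 IBM     ,2145
size=2.0T features=1 queue_if_no_path hwhandler=0 wp=rw
36005076802810c509800000000000013 dm-5 IBM     ,2145
size=2.0T features=1 queue_if_no_path hwhandler=0 wp=rw'

printf '%s\n' "$sample" | list_wwids
```

In practice you would pipe the live command output instead: `multipath -l | list_wwids`.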
Procedure
1. Open a terminal window and change to the directory where you downloaded the
tsmconfig_v51.[Link] file.
2. Extract the file by issuing the following commands:
gzip -d tsmconfig_v51.[Link]
tar -xvf tsmconfig_v51.tar
The process creates a directory that is called tsmconfig. This directory contains the storage
preparation script, the workload simulation tool, and the Blueprint configuration script.
3. Change to the tsmconfig directory by issuing the following command:
cd tsmconfig
4. Run the Perl script and specify the size of system that you are configuring.
For example, for a medium system, issue the following command:
If you did not map the disks to the host according to the specifications in “Step 3: IBM FlashSystem
Storage: Configure multipath I/O” on page 28, the script requires customization.
5. List all file systems by issuing the df command.
Verify that file systems are mounted at the correct LUN and mount point. Also, verify the available
space. The amount of used space should be approximately 1%.
For example:
Procedure
1. Using the list of device IDs that you generated in “Step 3: IBM FlashSystem Storage: Configure
multipath I/O” on page 28, issue the mkfs command to create and format a file system for each
storage LUN device. Specify the device ID in the command.
For IBM Spectrum Protect V8, format file systems with a command that is similar to the following
example:
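The exact command is not reproduced here; as a sketch, assuming XFS file systems, you can generate one format command per device ID from your saved list. The ID below is the example value from the multipath step, and the mkfs options should be verified for your environment:

```shell
# Sketch: print one format command per device ID from the saved list.
# mkfs.xfs -K skips block discards, which can speed up formatting on flash arrays.
print_mkfs_cmds() {
    for id in "$@"; do
        echo "mkfs.xfs -K /dev/mapper/$id"
    done
}

print_mkfs_cmds 36005076802810c509800000000000012
```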
mkdir /tsminst1
Repeat the mkdir command for each file system.
If you are not using the default paths for your directories, you must manually list directory paths during
configuration of the IBM Spectrum Protect server.
3. Add an entry in the /etc/fstab file for each file system so that file systems are mounted
automatically when the server is started. The file system type in each entry depends on which file
system type was formatted in the previous step.
For example, add the following entry for an XFS file system, where the device name is adapted to the
actual device name on your system:
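A representative entry might look like the following sketch; the device ID and mount point are hypothetical and must match your system:

```shell
# /etc/fstab entry (sketch) for one XFS storage pool file system
/dev/mapper/36005076802810c509800000000000012  /tsminst1/TSMfile00  xfs  defaults,inode64  0 0
```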
4. Mount the file systems that you added to the /etc/fstab file by issuing the mount -a command.
5. List all file systems by issuing the df command.
Verify that file systems are mounted at the correct LUN and correct mount point. Also, verify the
available space. The amount of used space should be approximately 1%.
For example:
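Beyond inspecting the df output manually, a quick check can flag any file system whose used space exceeds a threshold. This sketch assumes POSIX df -P style output; the sample lines are fabricated for illustration:

```shell
# Sketch: report file systems above a used-space threshold (percent).
check_usage() {
    # arg 1: threshold percent; df -P style lines are read from stdin
    awk -v max="$1" 'NR > 1 { pct = $5; sub(/%/, "", pct); if (pct + 0 > max) print $6, $5 }'
}

sample='Filesystem 1024-blocks Used Available Capacity Mounted-on
/dev/mapper/fs00 2147483648 21474836 2126008812 1% /tsminst1/TSMfile00
/dev/mapper/fs01 2147483648 214748364 1932735284 10% /tsminst1/TSMfile01'

printf '%s\n' "$sample" | check_usage 2
```

With the live system you would pipe `df -P` into the function instead of the sample text.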
Procedure
1. On the IBM Spectrum Protect system, configure TCP/IP settings according to the manufacturer
instructions.
Use different network adapters for communication between the server and clients, and between the
server and the IBM Elastic Storage System.
2. On the IBM Spectrum Protect system, install IBM Spectrum Scale:
a) Download the IBM Spectrum Scale base software package at Passport Advantage.
b) Download the latest IBM Spectrum Scale fix pack at Fix Central.
c) Install the IBM Spectrum Scale base software.
Follow the instructions in Installing IBM Spectrum Scale on Linux nodes and deploying protocols.
d) Install the IBM Spectrum Scale fix pack.
3. Ensure that the kernel portability layer is built by issuing the following command:
4. Configure a Secure Shell (SSH) automatic login procedure without a password between the IBM
Spectrum Protect server and the IBM Elastic Storage System management node and storage nodes.
Take one of the following actions:
• If the /root/.ssh/id_rsa.pub file is not available on the IBM Spectrum Protect server, generate
an id_rsa.pub file. The file contains a public key. Issue the following commands from an IBM
Elastic Storage System storage node that is part of the cluster:
ssh-keygen -t rsa
cd /root/.ssh
cat id_rsa.pub >> authorized_keys
chmod 640 /root/.ssh/authorized_keys
• If the /root/.ssh/id_rsa.pub file is available on the IBM Spectrum Protect server, complete
the following steps:
a. Append the contents of the id_rsa.pub file to the end of the authorized_keys file on each
of the systems in the IBM Spectrum Scale cluster.
b. Append the contents of the id_rsa.pub file from each of the other systems in the cluster to the
authorized_keys file on the IBM Spectrum Protect server.
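One way to carry out these append steps is with the ssh-copy-id utility, which creates the authorized_keys file and sets its permissions for you. This is a sketch; the node names are hypothetical:

```shell
# Sketch: push the server's public key to each cluster node (names are examples).
for host in essio1 essio2 essems1; do
    ssh-copy-id -i /root/.ssh/id_rsa.pub root@"$host"
done
```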
5. Verify that the login procedure is configured. Log in to the other computers in the cluster from the
IBM Spectrum Protect server by running the ssh command without using a password.
6. If the operating system on the IBM Spectrum Protect server is running a firewall, open several
ports for incoming network connections from other systems in the IBM Spectrum Scale cluster. For
instructions, see Securing the IBM Spectrum Scale system by using a firewall.
7. Update the /etc/hosts file on the IBM Spectrum Scale nodes with information about the IBM
Spectrum Protect server.
8. Add the IBM Spectrum Protect system as an IBM Spectrum Scale node in the cluster by running the
mmaddnode command. Issue the command from an IBM Elastic Storage System node that is part of
the cluster.
For example, if the IBM Spectrum Protect IP address is [Link], you would issue the following
command:
mmaddnode -N [Link]
9. Assign an IBM Spectrum Scale license to the IBM Spectrum Protect server. From an IBM Elastic
Storage System node that is part of the cluster, issue the following command:
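The command itself is not reproduced above; the following is a sketch, to be verified against the mmchlicense command reference:

```shell
# Sketch: assign a server license to the IBM Spectrum Protect node
mmchlicense server --accept -N server_ip_address
```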
where server_ip_address specifies the IP address of the IBM Spectrum Protect server.
10. To optimize the IBM Spectrum Protect server workload, tune IBM Spectrum Scale client-side
parameters by using the mmchconfig command.
Issue the following command from an IBM Elastic Storage System node that is part of the cluster:
mmchconfig disableDIO=yes,aioSyncDelay=10,pagepool=24G,prefetchAggressivenessRead=0 -N
server_ip_address
where server_ip_address specifies the IP address of the IBM Spectrum Protect server.
If IBM Spectrum Scale replication will be used, the following settings are also required on the IBM
Spectrum Protect server to avoid inaccurate capacity reporting.
mmchconfig ignoreReplicaSpaceOnStat=yes -i
mmchconfig ignoreReplicationForQuota=yes -i
mmchconfig ignoreReplicationOnStatfs=yes -i
11. Create the IBM Spectrum Scale file system on the IBM Elastic Storage System:
a) Verify that the expected factory configuration of a left and right recovery group is in place by using
the mmlsrecoverygroup command:
i) Review the command output to verify that two recovery groups exist, and each group has three
predefined declustered arrays.
ii) Record the recovery group names, which are required in step “11.b” on page 33.
b) Create a stanza file that defines parameters for each virtual disk:
i) Specify VDisks in the DA1 declustered array from both recovery groups.
ii) Use an 8+2p RAID code for the storage pool data and the 3WayReplication RAID code for the
IBM Spectrum Scale file system metadata.
For example, create a file that is named /tmp/ess_vdisk that contains the following
information:
# cat /tmp/ess_vdisk
%vdisk: vdiskName=GL2_A_L_meta_256k_1 rg=GL2_A_L da=DA1 blocksize=256k
size=500g raidCode=3WayReplication diskUsage=metadataOnly pool=system
%vdisk: vdiskName=GL2_A_R_meta_256k_1 rg=GL2_A_R da=DA1 blocksize=256k
size=500g raidCode=3WayReplication diskUsage=metadataOnly pool=system
%vdisk: vdiskName=GL2_A_L_data_8m_1 rg=GL2_A_L da=DA1 blocksize=8m
raidCode=8+2p diskUsage=dataOnly pool=data
%vdisk: vdiskName=GL2_A_R_data_8m_1 rg=GL2_A_R da=DA1 blocksize=8m
raidCode=8+2p diskUsage=dataOnly pool=data
Because a size is not specified for the two storage pool VDisks, they use all of the remaining space
on the declustered arrays.
Tip: For larger file systems, you might have to specify more than two VDisks to meet business
requirements. Create VDisks in multiples of 50 TB. Specify the size of the VDisk by using the SIZE
parameter. For example, to create a 400 TB file system, create eight 50 TB VDisks. Stanza entries
are similar to the following example:
%vdisk: vdiskName=GL2_A_L_data_8m_1
rg=GL2_A_L da=DA1 blocksize=8m size=50t raidCode=8+2p
diskUsage=dataOnly pool=data
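Rather than typing eight similar stanzas, you can generate them. This sketch splits the 50 TB VDisks evenly across the two recovery groups named in the example:

```shell
# Sketch: print eight 50 TB data-VDisk stanzas, four per recovery group.
gen_vdisk_stanzas() {
    for i in 1 2 3 4; do
        for side in L R; do
            echo "%vdisk: vdiskName=GL2_A_${side}_data_8m_${i} rg=GL2_A_${side} da=DA1 blocksize=8m size=50t raidCode=8+2p diskUsage=dataOnly pool=data"
        done
    done
}

gen_vdisk_stanzas
```

Redirect the output to your stanza file, for example `gen_vdisk_stanzas > /tmp/ess_vdisk`.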
c) Create disks by running the mmcrvdisk and mmcrnsd commands and by using the stanza file that
you created in step “11.b” on page 33. The mmcrvdisk command creates virtual disks, and the
mmcrnsd command creates IBM Spectrum Scale disks by using the virtual disks.
For example, if the VDisk stanza is called /tmp/ess_vdisk, you would issue the following
commands:
mmcrvdisk -F /tmp/ess_vdisk
mmcrnsd -F /tmp/ess_vdisk
d) Create a single IBM Spectrum Scale file system by using the mmcrfs command and specifying the
stanza file. Use the 8 MB block size for data and 256 KB for metadata.
For example:
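The exact command depends on your IBM Spectrum Scale level; the following is a sketch using commonly available mmcrfs options, each of which should be verified against the command reference:

```shell
# Sketch: 8 MB data block size, 256 KB metadata block size, mounted at /esstsm1
mmcrfs esstsm1 -F /tmp/ess_vdisk -B 8M --metadata-block-size 256K -T /esstsm1
```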
e) Mount the IBM Spectrum Scale file system on the IBM Spectrum Protect system. On the IBM
Spectrum Protect system, issue the mmmount command.
For example:
mmmount /esstsm1
f) Verify the amount of free space in the IBM Spectrum Scale file system.
The command and output are similar to the following example:
g) Set IBM Spectrum Scale to automatically start when the system starts by using the chkconfig
command.
For example:
chkconfig gpfs on
h) Verify that the VDisks and file system were created correctly by using the mmlsvdisk and mmlsfs
commands.
For example:
mmlsvdisk
mmlsfs /dev/esstsm1
12. Repeat the above steps to create a file system on the IBM Elastic Storage System 3200 flash storage
to be used for the IBM Spectrum Protect database.
What to do next
If you upgrade the Linux operating system to newer kernel levels or you upgrade IBM Spectrum Scale, you
must rebuild the portability layer. Follow the instructions in step “3” on page 31.
For more information about completing the steps in the procedure, see the online product
documentation:
Instructions for configuring IBM Elastic Storage System
Instructions for installing IBM Spectrum Scale
IBM Spectrum Scale command reference information
Storage pool workload
The storage pool workload simulates IBM Spectrum Protect server-side data deduplication, in which
large, 256 KB block-size sequential read and write operations are overlapped. The write process
simulates incoming backups while the read operation simulates identification of duplicate data. The
tool creates a read and write thread for every file system that is included in the test, allowing multiple
sessions and processes to be striped across more than one file system.
You can also simulate a storage pool workload that conducts only read I/O or only write I/O
operations:
• Simulate restore operations by specifying the mode=readonly option.
• Simulate backup operations by specifying the mode=writeonly option.
Database workload
The database workload simulates IBM Spectrum Protect database disk access in which small, 8 KB
read and write operations are performed randomly across the disk. For this workload, 10 GB files
are pre-created on each of the specified file systems and then read and write operations are run to
random ranges within these files. Multiple threads are issued against each file system, sending I/O
requests simultaneously.
For the database workload, configurations typically have one file system for each pool on the storage
array. Include all database file systems when you are testing the database workload.
To use the tool effectively, experiment with test runs that include different quantities of file systems in
the simulation until adding more file systems no longer improves overall performance.
Depending on disk speed and the number of file systems that you are testing, the time that is required to
run the script can be 3 - 10 minutes.
Procedure
To use the workload simulation tool, complete the following steps:
1. Plan to test either the storage pool file systems or the database file systems.
2. Collect a list of the file systems that are associated with your chosen type of storage. Break the file
systems into groups according to which pool they belong to on the disk system.
Grouping is used to ensure that physical disks from all volumes on all arrays for the storage type
are engaged in the test. To review groupings for file systems, see the volume configuration tables in
Chapter 3, “Storage configuration blueprints,” on page 15.
IBM Elastic Storage System: Because only a single IBM Spectrum Scale file system is defined for
storage, you must create temporary directories to use when you run the workload simulation tool and
specify the -fslist option. For example, issue the mkdir command to create temporary directories:
mkdir /esstsm1/perftest/1
mkdir /esstsm1/perftest/2
< ... >
mkdir /esstsm1/perftest/14
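The directory creation can be scripted. In this sketch the base directory is parameterized so that the loop can be tried outside the production mount point; set ESS_BASE=/esstsm1 for the real run:

```shell
# Sketch: create the 14 temporary test directories for the simulation tool.
base="${ESS_BASE:-/tmp/esstsm1}"   # defaults to a safe location for a dry run
for i in $(seq 1 14); do
    mkdir -p "$base/perftest/$i"
done
```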
3. To run the tool, change to the tsmconfig directory by issuing the following command:
cd tsmconfig
If you did not extract the Blueprint configuration script compressed file to prepare file systems for IBM
Spectrum Protect, follow the instructions in “Configure a file system by using the script” on page 30.
4. Run an initial test of the workload that includes one file system of the storage type from each pool on
the storage array.
For example, to simulate the IBM Spectrum Protect storage pool workload on a medium-scale system,
issue the following command:
For example, to simulate backup operations (by using only write I/O) for an IBM Spectrum Protect
storage pool workload on a medium-scale system, issue the following command:
To simulate the database workload on a small-scale system and include all four of the database file
systems, issue the following command:
Results
The performance results that are provided when you run the workload simulation tool might not
represent the maximum capabilities of the disk subsystem that is being tested. The intent is to provide
measurements that can be compared against the lab results that are reported for medium and large
systems.
The workload simulation tool is not intended to be a replacement for disk performance analysis tools.
Instead, you can use it to spot configuration problems that affect performance before you run IBM
Spectrum Protect workloads in a production environment. Problems will be evident if the measurements
from test runs are significantly lower than what is reported for test lab systems. If you are using hardware
other than the Storwize® components that are included in this document, use your test results as a rough
estimate of how other disk types compare with the tested configurations.
Example
This example shows the output from a storage pool workload test on a small system. Eight file systems
are included. The following command is issued:
===================================================================
: IBM Spectrum Protect disk performance test (Program version 5.1)
:
: Workload type: stgpool
: Number of filesystems: 8
: Mode: readwrite
: Files to write per fs: 5
: File size: 2 GB
:
===================================================================
:
: Beginning I/O test.
: The test can take upwards of ten minutes, please be patient ...
: Starting write thread ID: 1 on filesystem /tsminst1/TSMfile00
: Starting read thread ID: 2 on filesystem /tsminst1/TSMfile00
: Starting write thread ID: 3 on filesystem /tsminst1/TSMfile01
: Starting read thread ID: 4 on filesystem /tsminst1/TSMfile01
: Starting write thread ID: 5 on filesystem /tsminst1/TSMfile02
: Starting read thread ID: 6 on filesystem /tsminst1/TSMfile02
: Starting write thread ID: 7 on filesystem /tsminst1/TSMfile03
: Starting read thread ID: 8 on filesystem /tsminst1/TSMfile03
: Starting write thread ID: 9 on filesystem /tsminst1/TSMfile04
: Starting read thread ID: 10 on filesystem /tsminst1/TSMfile04
: Starting write thread ID: 11 on filesystem /tsminst1/TSMfile05
: Starting read thread ID: 12 on filesystem /tsminst1/TSMfile05
: Starting write thread ID: 13 on filesystem /tsminst1/TSMfile06
: Starting read thread ID: 14 on filesystem /tsminst1/TSMfile06
: Starting write thread ID: 15 on filesystem /tsminst1/TSMfile07
: Starting read thread ID: 16 on filesystem /tsminst1/TSMfile07
: All threads are finished. Stopping iostat process with id 15732
===================================================================
: RESULTS:
: Devices reported on from output:
: dm-25
: dm-28
: dm-7
: dm-6
: dm-4
: dm-8
: dm-12
: dm-15
:
: Average R Throughput (KB/sec): 227438.06
: Average W Throughput (KB/sec): 224826.38
: Avg Combined Throughput (MB/sec): 441.66
: Max Combined Throughput (MB/sec): 596.65
:
: Average IOPS: 1767.16
: Peak IOPS: 2387.43 at 08/05/2015 [Link]
:
: Total elapsed time (seconds): 171
===================================================================
What to do next
Compare your performance results against test lab results by reviewing sample outputs for storage pool
and database workloads on both medium and large systems:
• For the storage pool workload, the measurement for average combined throughput in MB per second
combines the read and write throughput. This is the most useful value when you compare results.
• For the database workload, the peak IOPS measurements add the peak read and write operations per
second for a specific time interval. This is the most useful value when you compare results for the
database workload.
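The combined-throughput figure is simply the sum of the average read and write throughput, converted from KB per second to MB per second. Using the values from the sample storage pool run shown earlier:

```shell
# Combine average read and write throughput (KB/sec) into MB/sec,
# as reported on the "Avg Combined Throughput" line of the tool output.
combined_mb() {
    awk -v r="$1" -v w="$2" 'BEGIN { printf "%.2f\n", (r + w) / 1024 }'
}

combined_mb 227438.06 224826.38    # sample values; prints 441.66
```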
To review the sample outputs, see Appendix A, “Performance results,” on page 57.
Procedure
1. Change to the directory where you downloaded the client package files.
For detailed installation instructions, see Installing the backup-archive clients in IBM Knowledge
Center.
Tip: If available, you can display different versions of the same topic by using the versions menu at the
top of the page.
2. Check the system user limit for maximum file size by issuing the following command:
ulimit -Hf
If the limit is not set to unlimited, change it to unlimited by following the instructions in the
documentation for your operating system.
Procedure
1. Download the server installation package from Passport Advantage or Fix Central.
2. For the latest information, updates, and maintenance fixes, go to the IBM Support Portal.
3. Complete the following steps:
a) Verify that you have enough space to store the installation files when they are extracted from the
product package. See the download document for the space requirements:
• IBM Spectrum Protect: technote 4042992
• IBM Spectrum Protect Extended Edition: technote 4042992
b) Download the package to the directory of your choice. The path must contain no more than 128
characters. Be sure to extract the installation files to an empty directory. Do not extract the files to a
directory that contains previously extracted files, or any other files.
c) Ensure that executable permission is set for the package. If necessary, change the file permissions
by issuing the following command:
chmod a+x [Link]-IBM-SPSRV-Linuxx86_64.bin
d) Extract the file from the package by issuing the following command:
./package_name
Procedure
To install IBM Spectrum Protect, complete the following steps:
1. Change to the directory where you downloaded the package.
2. Start the installation wizard in console mode by issuing the following command:
./[Link] -c
Optional: Generate a response file as part of a console mode installation. Complete the console mode
installation options, and in the Summary window, specify G to generate the responses.
Results
If errors occur during the installation process, the errors are recorded in log files that are stored in the
IBM Installation Manager logs directory, for example:
/var/ibm/InstallationManager/logs
Chapter 5. Configuring the IBM Spectrum Protect
server
Run the Blueprint configuration script, [Link], to configure the IBM Spectrum Protect
server.
Procedure
1. Open a terminal window.
2. If you did not extract the Blueprint configuration script compressed file to prepare file systems for IBM
Spectrum Protect, follow the instructions in “Configure a file system by using the script” on page 30.
3. Change to the tsmconfig directory by issuing the following command:
cd tsmconfig
4. Run the configuration script in one of the following modes:
• To run the configuration script in interactive mode, issue the following command:
perl [Link]
If you want to enable compression for the archive log and database backups on a small system,
issue the following command:
Depending on how you preconfigured the system, you can accept the default values that are
presented by the script. Use the information that you recorded in the “Planning worksheets” on
page 10 as a guide. If you changed any of the default values during the preconfiguration step, you
must manually enter your values at the script prompts.
• To run the configuration script in noninteractive mode by using a response file to set configuration
values, specify the response file when you run the script. For example:
– To use the default response file for a medium system, issue the following command:
– To use the default response file for a small system and enable compression for the archive log
and database backups, issue the following command:
– To use the default response file for a system that uses IBM Elastic Storage System, issue the
following command:
If you encounter a problem during the configuration and want to pause temporarily, use the quit
option. When you run the script again, it resumes at the point that you stopped. You can also open
other terminal windows to correct any issues, and then return to and continue the script. When the
script finishes successfully, a log file is created in the current directory.
5. Save the log file for future reference.
The log file is named setupLog_datestamp.log where datestamp is the date on which you ran
the configuration script. If you run the script more than once on the same day, a version number is
appended to the end of the name for each additional version that is saved.
For example, if you ran the script three times on July 27, 2013, the following logs are created:
• setupLog_130727.log
• setupLog_130727_1.log
• setupLog_130727_2.log
Results
After the script finishes, the server is ready to use. Review Table 19 on page 43 and the setup log file for
details about your system configuration.
Table 20 on page 47 provides details about kernel parameter values for the system. Also, consider
tuning the TCPWINDOWSIZE option to 0 for Linux servers and clients.
Table 19. Summary of configured elements
Item Details
Operating system user limits (ulimits) for the instance user: The following values are set:
• Maximum size of core files created (core): unlimited
• Maximum size of a data segment for a process (data): unlimited
• Maximum file size allowed (fsize): unlimited
• Maximum number of open files that are allowed for a process (nofile): 65536
• Maximum amount of processor time in seconds (cpu): unlimited
• Maximum number of user processes (nproc): 16384
IBM Spectrum Protect API • An API [Link] file is created in the /opt/tivoli/tsm/server/bin/dbbkapi/ directory.
The following parameters are set. Some values might vary, depending on selections that were
made during the configuration:
servername TSMDBMGR_tsminst1
commmethod tcpip
tcpserveraddr localhost
tcpport 1500
passworddir /home/tsminst1/tsminst1
errorlogname /home/tsminst1/tsminst1/[Link]
nodename $$_TSMDBMGR_$$
• The API password is set.
Server settings
• The server is configured to start automatically when the system is started.
• An initial system level administrator is registered.
• The server name and password are set.
• The following values are specified for SET commands:
– SET ACTLOGRETENTION is set to 180.
– SET EVENTRETENTION is set to 180.
– SET SUMMARYRETENTION is set to 180.
IBM Spectrum Protect server options The [Link] file is set with optimal parameter values for server scale. The following server
file options are specified:
• ACTIVELOGSIZE is set according to scale size:
– Extra Small system: 24576
– Small system: 131072
– Medium system: 131072
– Large system: 524032
• If you enabled compression for the blueprint configuration, ARCHLOGCOMPRESS is set to Yes.
• COMMTIMEOUT is set to 3600 seconds.
• If you are using the -legacy option for data deduplication, DEDUPDELETIONTHREADS is set
according to scale size:
– Extra Small system: 2
– Small system: 8
– Medium system: 8
– Large system: 12
• DIOENABLED is set to NO for IBM Elastic Storage System configurations when a directory-container
storage pool is created.
• DIRECTIO is set to NO for IBM Elastic Storage System configurations. For Storwize
configurations, the preferred method is to use the default value of YES.
• DEDUPREQUIRESBACKUP is set to NO.
• DEVCONFIG is specified as [Link], which is where a backup copy of device
configuration information will be stored.
• EXPINTERVAL is set to 0, so that expiration processing runs according to schedule.
• IDLETIMEOUT is set to 60 minutes.
• MAXSESSIONS is set according to scale size:
– Extra Small system: 75 maximum simultaneous client sessions
– Small system: 250 maximum simultaneous client sessions
– Medium system: 500 maximum simultaneous client sessions
– Large system: 1000 maximum simultaneous client sessions
The effective value for the SET MAXSCHEDSESSIONS option is a percentage of the value that was
specified for the MAXSESSIONS option: 60% for an extra small system and 80% for the other sizes:
– Extra Small system: 45 sessions (60%)
– Small system: 200 sessions
– Medium system: 400 sessions
– Large system: 800 sessions
• NUMOPENVOLSALLOWED is set to 20 open volumes.
• TCPWINDOWSIZE is set to 0.
• VOLUMEHISTORY is specified as [Link], which is where the server will store a backup
copy of volume history information. In addition to [Link], which will be stored in the
server instance directory, a second volume history option is specified to be stored in the first
database backup directory for redundancy.
IBM Spectrum Protect server options file (database reorganization options): Server options that are
related to database reorganization are specified as follows.
Servers at V7.1.1 or later:
• ALLOWREORGINDEX is set to YES.
• ALLOWREORGTABLE is set to YES.
• DISABLEREORGINDEX is not set.
• DISABLEREORGTABLE is set to
BF_AGGREGATED_BITFILES,BF_BITFILE_EXTENTS,
ARCHIVE_OBJECTS,BACKUP_OBJECTS
Directory-container storage pool A directory-container storage pool is created, and all of the storage pool file systems are defined
as container directories for this storage pool. The following parameters are set in the DEFINE
STGPOOL command:
• STGTYPE is set to DIRECTORY.
• MAXWRITERS is set to NOLIMIT.
For servers at V7.1.5 or later, compression is automatically enabled for the storage pool.
Storage pool if the -legacy option is specified:
• A FILE device class is created and tuned for configuration size:
– All storage pool file systems are listed with the DIRECTORY parameter in the DEFINE
DEVCLASS command.
– The MOUNTLIMIT parameter is set to 4000 for all size systems.
– The MAXCAP parameter is set to 50 GB for all size systems.
• The storage pool is created with settings that are tuned for configuration size:
– Data deduplication is enabled.
– The value of the IDENTIFYPROCESS parameter is set to 0 so that duplicate identification
can be scheduled.
– Threshold reclamation is disabled so that it can be scheduled.
– The MAXSCRATCH parameter value is tuned based on the amount of storage that is available
in the FILE storage pool.
Management classes Management classes are created within the policy domains that are listed in the previous row.
Retention periods are defined for 7, 30, 90, and 365 days.
The default management class uses the 30-day retention period.
Client schedules Client schedules are created in each policy domain with the start time that is specified during
configuration.
The type of backup schedule that is created is based on the type of client:
• File server schedules are set as incremental forever.
• Data protection schedules are set as full daily.
Some data protection schedules include command file names that are appropriate for the data
protection client.
For more information about the schedules that are predefined during configuration, see Appendix
D, “Using predefined client schedules,” on page 73.
[Link] The [Link] parameter defines whether the kernel can swap application memory out of physical
random access memory (RAM). If you installed an IBM Spectrum Protect V8 server, set this parameter
to 5. If you installed a V7 server, set this parameter to 0. For more information about kernel
parameters, see the Db2 product information in IBM Documentation.
Procedure
To clean up your system by using the script, complete the following steps:
1. Edit the [Link] script by commenting out the exit on the first line.
For example:
2. Copy the [Link] script into the folder where the [Link] script is
located.
3. Issue the following command:
perl [Link]
48 IBM Spectrum Protect: Blueprint and Server Automated Configuration for Linux x86
Chapter 6. Completing the system configuration
Complete the following tasks after your IBM Spectrum Protect server is configured and running.
Procedure
• To update password information for the server and administrator, use server commands.
For more information, see the SET SERVERPASSWORD, UPDATE ADMIN, and UPDATE SERVER server
commands.
• To update the password for the Db2 instance owner, use the Linux operating system passwd
command.
• Create a system-level administrator. Then, remove or lock the administrator that is named ADMIN by
using the REMOVE ADMIN or LOCK ADMIN command.
• Change the password that is used to protect the server encryption key for database backup operations.
Issue the following command:
Attention: You must remember the password, or you will be unable to restore database
backups.
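The command itself is not shown in this copy. The SET DBRECOVERY command typically takes the following form; the device class name and password here are placeholders, not values from this guide:

```
set dbrecovery DBBACK_FILEDEV password=new_password
```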
Procedure
1. Register newnode1 to the TSMSERVER1_FILE domain. Specify a value for the client node password, for
example, pw4node1. Set the MAXNUMMP parameter to 99:
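The REGISTER NODE command is not shown in this copy; based on the values named in this step, it would look similar to the following sketch:

```
register node newnode1 pw4node1 domain=TSMSERVER1_FILE maxnummp=99
```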
2. To use a predefined client schedule, determine which schedule to associate newnode1 with by
querying the list of available schedules. Issue the QUERY SCHEDULE command.
The output lists all defined schedules. For example, the following output shows the details for the
FILE_INCRFOREVER_10PM schedule:
3. Define an association between newnode1 and the FILE_INCRFOREVER_10PM schedule. You must
specify the domain for the node and schedule.
For example:
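The example command is missing from this extraction. The DEFINE ASSOCIATION command takes the domain name, schedule name, and node name, so it would look similar to:

```
define association TSMSERVER1_FILE FILE_INCRFOREVER_10PM newnode1
```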
4. Verify that newnode1 is associated with the correct schedule by issuing the QUERY ASSOCIATION
command.
For example, issue the following command, specifying the schedule domain and the schedule name:
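The example command is not shown here; it would take a form similar to the following sketch:

```
query association TSMSERVER1_FILE FILE_INCRFOREVER_10PM
```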
The output shows that newnode1 is associated with the queried domain and schedule name.
5. Display details about the client schedule by issuing the QUERY EVENT command. Specify the domain
and name of the schedule for which you want to display events.
For example, issue the following command:
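The example command is missing from this copy; a sketch based on the domain and schedule used in the previous steps:

```
query event TSMSERVER1_FILE FILE_INCRFOREVER_10PM
```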
The output shows that the backup for newnode1 is scheduled, but has not yet occurred.
6. After you register a node and assign it to a schedule, configure the client and client schedule on the
client system and then start the scheduler daemon on the client system so that the backup operation
starts at the scheduled time.
To configure the client schedules that are predefined by the Blueprint configuration script, see
Appendix D, “Using predefined client schedules,” on page 73.
For more information about starting the client scheduler, see the IBM Spectrum Protect client
documentation in IBM Knowledge Center.
Procedure
1. Remove the oldest database backups.
For example, to remove the two oldest database backups, issue the following command:
2. Back up the current version of the database with the BACKUP DB command:
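The example is not shown in this extraction. A sketch of a full database backup; the device class name is a placeholder for the FILE device class that was created during configuration:

```
backup db devclass=DBBACK_FILEDEV type=full
```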
3. Locate the database backup file system with the most free space to use for the reorganization.
4. Complete the procedure for offline table reorganization. During this step, you might be prompted
to back up the database, but doing so is unnecessary. Follow the instructions in technote
1683633.
Monitor your system with the IBM Spectrum Protect Operations Center
For more information about the Operations Center, see the following topics.
Getting started with the Operations Center
Installing and upgrading the Operations Center
Monitoring with the Operations Center
Monitoring storage solutions
Review documentation
For documentation in IBM Knowledge Center, see the following links.
Tip: If available, you can display different versions of the same topic in IBM Knowledge Center by using
the versions menu at the top of the page.
IBM Spectrum Protect server and client software
• V8.1.16 documentation
IBM FlashSystem 5000 disk storage systems
IBM FlashSystem 5000 welcome page
IBM Elastic Storage System
• IBM Elastic Storage System
• IBM Spectrum Scale
Procedure
The following manual example assumes that two servers, TAPSRV01 and TAPSRV02, were configured
by using the blueprint specifications. The placeholders noted for passwords must match the value that
was provided for the server password during the initial configuration. This procedure sets up the data
replication so that client nodes' data is backed up to TAPSRV01 and this data is replicated to TAPSRV02.
These steps configure a single storage pool that is used for holding both backup data and replicated data.
You can also configure separate storage pools for backup data and replicated data.
1. Set up server-to-server communication.
On TAPSRV01, issue the following command:
2. Test the connection between the two servers.
If the test is successful, you see results similar to the following example:
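The commands for defining the partner server and testing the connection are not shown in this copy. A sketch, assuming placeholder addresses and the server password from the initial configuration; the corresponding DEFINE SERVER command for TAPSRV01 must also be issued on TAPSRV02:

```
define server TAPSRV02 serverpassword=passw0rd hladdress=tapsrv02.example.com lladdress=1500
ping server TAPSRV02
```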
3. Export policy definitions from TAPSRV01 to TAPSRV02. Issue the following command on TAPSRV01:
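The export command is not shown here; one hedged form, exporting all policy domains to the target server:

```
export policy * toserver=TAPSRV02
```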
4. Define TAPSRV02 as the replication target of TAPSRV01. Issue the following command on TAPSRV01:
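The command is missing from this extraction; defining the replication target is typically done with SET REPLSERVER:

```
set replserver TAPSRV02
```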
5. Enable replication for certain nodes or all nodes. To enable replication for all nodes, issue the following
command on TAPSRV01:
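The command is not shown in this copy; enabling replication for all nodes is typically done with a wildcard UPDATE NODE command:

```
update node * replstate=enabled
```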
6. Define a storage rule to replicate data to the target replication server, TAPSRV02. To define the
replication storage rule, REPLRULE1, issue the following command on TAPSRV01:
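The command is missing here; a sketch of a replication storage rule definition using the names given in this step, where ACTIONTYPE=REPLICATE marks the rule as a replication rule:

```
define stgrule REPLRULE1 TAPSRV02 actiontype=replicate
```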
7. Define an exception to the storage rule, REPLRULE1 to prevent replication of NODE1 by defining a
replication subrule. To define the replication subrule, REPLSUBRULE1, issue the following command
on TAPSRV01:
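The command is not shown in this extraction; a sketch of the subrule definition, assuming the positional order of parent rule, subrule name, and node (verify against the DEFINE SUBRULE command reference):

```
define subrule REPLRULE1 REPLSUBRULE1 NODE1 actiontype=noreplicating
```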
Note: You can replicate data from a source replication server to multiple target replication servers. You
must define multiple replication storage rules to configure different target replication servers. Follow
the instructions in step 6 to define a replication storage rule for each target replication server.
If required, follow the instructions in step 7 to define subrules that add exceptions for the respective
replication storage rules.
8. On each source replication server, activate the administrative schedule that the Blueprint configuration
script created to run replication every day. Issue the following command:
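The command is missing from this copy; a sketch, where the schedule name REPLICATE is an assumption about the name that the Blueprint configuration script used:

```
update schedule REPLICATE type=administrative active=yes
```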
Restriction: Ensure that you complete this step only on source replication servers. However, if you
are replicating nodes in both directions, and each server is a source and a target replication server,
activate the schedule on both servers.
What to do next
To recover data after a disaster, follow the instructions in Repairing and recovering data in directory-
container storage pools.
Table 26. Data restore processes

Metric    Number of restore processes    Limit
          1                              481 GB per hour
Table 32. Data movement

Metric    Number of restore processes    Limit
          1                              813.4 GB per hour
Medium system - database workload
The database workload test included eight file systems. The following command was issued:
When running the 64 directory storage pool workload from four servers simultaneously, the total average
combined throughput from all servers exceeded 35,000 MB/sec.
When running the 12 directory database workload from four servers simultaneously, the total combined
average IOPS from all servers exceeded 130,000 IOPS.
Appendix B. Configuring the disk system by using
commands
You can use the IBM FlashSystem command line to configure storage arrays and volumes on the
disk system. Example procedures are provided for the 5015 (small), 5035 (medium), and 5200 (large)
systems.
Refer to Chapter 3, “Storage configuration blueprints,” on page 15 for layout specifications.
Small system
1. Connect to and log in to the disk system by issuing the ssh command. For example:
ssh superuser@your5015hostname
2. List drive IDs for each type of disk so that you can create the managed disk (MDisk) arrays in Step
“4” on page 65. Issue the lsdrive command. The output can vary, based on slot placement for the
different disks. The output is similar to the following example:
3. Create the MDisk groups for the IBM Spectrum Protect database and storage pool. Issue the
mkmdiskgrp command for each pool, specifying 256 for the extent size:
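The mkmdiskgrp commands are not shown in this extraction; a sketch that creates the two pools referenced by the later mkvdisk commands, using the 256 MB extent size named above (the pool names follow the convention used elsewhere in this appendix):

```
mkmdiskgrp -name db_grp0 -ext 256
mkmdiskgrp -name stgpool_grp0 -ext 256
```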
4. Create MDisk arrays by using mkdistributedarray commands. Specify the commands to add the
MDisk arrays to the data pools that you created in the previous step. For example:
5. Create the storage volumes for the system. Issue the mkvdisk command for each volume, specifying
the volume sizes in MB. For example:
mkvdisk -mdiskgrp stgpool_grp0 -size 3303398 -unit mb -name backup_00 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 3303398 -unit mb -name backup_01 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 15859710 -unit mb -name filepool_00 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 15859710 -unit mb -name filepool_01 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 15859710 -unit mb -name filepool_02 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 15859710 -unit mb -name filepool_03 -iogrp 0 -nofmtdisk
6. Create a logical host object by using the mkhost command. Specify the Fibre Channel WWPNs from
your operating system and specify the name of your host. To obtain the WWPNs from your system,
follow the instructions in “Step 1: Set up and configure hardware” on page 23.
For example, to create a host that is named hostone with a list that contains FC WWPNs
10000090FA3D8F12 and 10000090FA49009E, issue the following command:
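The mkhost command itself is not shown; based on the WWPNs in this example, it would look similar to:

```
mkhost -name hostone -fcwwpn 10000090FA3D8F12:10000090FA49009E
```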
7. Map the volumes that you created in Step “5” on page 66 to the new host. Issue the
mkvdiskhostmap command for each volume. For example, issue the following commands where
hostname is the name of your host:
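The mapping commands are missing from this copy; a sketch for the first few volumes, where the SCSI IDs are illustrative:

```
mkvdiskhostmap -host hostname -scsi 0 backup_00
mkvdiskhostmap -host hostname -scsi 1 backup_01
mkvdiskhostmap -host hostname -scsi 2 filepool_00
```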
Medium system
1. Connect to and log in to the disk system by issuing the ssh command. For example:
ssh superuser@your5035hostname
2. Increase the memory that is available for the RAIDs to 125 MB by issuing the chiogrp command:
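The command is not shown here; increasing the RAID memory for I/O group 0 would look similar to:

```
chiogrp -feature raid -size 125 io_grp0
```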
3. List drive IDs for each type of disk so that you can create the MDisk arrays in Step “5” on page 67.
Issue the lsdrive command. The output can vary, based on slot placement for the different disks.
The output is similar to the following example:
IBM_Storwize:tapv5kg:superuser>lsdrive
id status use tech_type capacity enclosure_id slot_id drive_class_id
0 online member tier_nearline 5.5TB 1 26 0
1 online member tier_nearline 5.5TB 1 44 0
2 online member tier_nearline 5.5TB 1 1 0
3 online member tier_nearline 5.5TB 1 34 0
4 online member tier_nearline 5.5TB 1 20 0
5 online member tier_nearline 5.5TB 1 25 0
< ... >
91 online member tier_nearline 5.5TB 1 2 0
92 online member tier1_flash 1.7TB 2 4 1
93 online member tier1_flash 1.7TB 2 1 1
94 online member tier1_flash 1.7TB 2 3 1
95 online member tier1_flash 1.7TB 2 6 1
96 online member tier1_flash 1.7TB 2 5 1
97 online member tier1_flash 1.7TB 2 2 1
4. Create the MDisk groups for the IBM Spectrum Protect database and storage pool. Issue the
mkmdiskgrp command for each pool, specifying 1024 for the extent size:
5. Create MDisk arrays by using mkdistributedarray commands. Specify the commands to add the
MDisk arrays to the data pools that you created in the previous step.
For example:
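The mkdistributedarray examples are missing from this extraction. A sketch for the storage pool array; the RAID level, drive class, drive count, stripe width, and rebuild areas are assumptions that must be matched to your actual drive layout:

```
mkdistributedarray -level raid6 -driveclass 0 -drivecount 92 -stripewidth 12 -rebuildareas 2 stgpool_grp0
```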
6. Create the storage volumes for the system. Issue the mkvdisk command for each volume, specifying
the volume sizes in MB. For example:
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_00 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_01 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_02 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_03 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_04 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_05 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_06 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_07 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_08 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_09 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_10 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 30648320 -unit mb -name filepool_11 -iogrp 0 -nofmtdisk
7. Create a logical host object by using the mkhost command. Specify the Fibre Channel WWPNs from
your operating system and specify the name of your host. To obtain the WWPNs from your system,
follow the instructions in “Step 1: Set up and configure hardware” on page 23.
For example, to create a host that is named hostone with a list that contains FC WWPNs
10000090FA3D8F12 and 10000090FA49009E, issue the following command:
Large system
1. Connect to and log in to the disk system by issuing the ssh command. For example:
ssh superuser@your5200hostname
2. Increase the memory that is available for the RAIDs to 125 MB by issuing the chiogrp command:
3. List drive IDs for each type of disk so that you can create the MDisk arrays in Step “5” on page 68.
Issue the lsdrive command. The output can vary, based on slot placement for the different disks.
The output is similar to what is returned for small and medium systems.
4. Create the MDisk groups for the IBM Spectrum Protect database and storage pool. Issue the
mkmdiskgrp command for each pool, specifying 1024 for the extent size:
5. Create arrays by using the mkdistributedarray command. Specify the commands to add the MDisk
arrays to the data pools that you created in the previous step.
For example:
6. Create the storage volumes for the system. Issue the mkvdisk command for each volume, specifying
the volume sizes in MB.
For example:
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_00 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_01 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_02 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_03 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_04 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_05 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_06 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_07 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_08 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_09 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_10 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 858000 -unit mb -name db_11 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp db_grp0 -size 563200 -unit mb -name alog -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 4200000 -unit mb -name archlog -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 18874368 -unit mb -name backup_00 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 18874368 -unit mb -name backup_01 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 18874368 -unit mb -name backup_02 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_00 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_01 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_02 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_03 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_04 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_05 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_06 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_07 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_08 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_09 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_10 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_11 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_12 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_13 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_14 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_15 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_16 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_17 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_18 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_19 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_20 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_21 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_22 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_23 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_24 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_25 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_26 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_27 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_28 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_29 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_30 -iogrp 0 -nofmtdisk
mkvdisk -mdiskgrp stgpool_grp0 -size 32856064 -unit mb -name filepool_31 -iogrp 0 -nofmtdisk
7. Create a logical host object by using the mkhost command. Specify the Fibre Channel WWPNs from
your operating system and specify the name of your host. For instructions about obtaining the WWPNs
from your system, see “Step 1: Set up and configure hardware” on page 23.
For example, to create a host that is named hostone with a list that contains FC WWPNs
10000090FA3D8F12 and 10000090FA49009E, issue the following command:
8. Map the volumes that you created in Step “6” on page 68 to the new host. Issue the
mkvdiskhostmap command for each volume. For example, issue the following commands where
hostname is the name of your host:
Appendix C. Using a response file with the Blueprint
configuration script
You can run the Blueprint configuration script in non-interactive mode by using a response file to set your
configuration choices.
Three response files are provided with the Blueprint configuration script. If you plan to set up a system by
using all default values, you can run the configuration script in non-interactive mode by using one of the
following response files:
Small system
[Link]
Medium system
[Link]
Large system
• Storwize systems: [Link]
• IBM Elastic Storage System systems: responsefile_ess.txt
The files are pre-filled with default configuration values for the small, medium, and large systems and do
not require updates.
If you want to customize your responses for a system, use the following table with your “Planning
worksheets” on page 10 to update one of the default response files. The values that are used in the
response file correspond to values that you recorded in the Your value column of the worksheet.
Appendix D. Using predefined client schedules
The Blueprint configuration script creates several client schedules during server configuration. To use
these schedules, you must complete configuration steps on the client system.
Table 33 on page 73 lists the predefined schedules that are created on the server. The schedule names
and descriptions are based on the default backup schedule start time of 10 PM. If you changed this start
time during server configuration, the predefined client schedules on your system are named according to
that start time. Information about updating client schedules to use with the IBM Spectrum Protect server
is provided in the sections that follow the table.
For complete information about scheduling client backup operations, see your client documentation.
commmethod tcpip
tcpport 1500
TCPServeraddress <IBM Spectrum Protect server name>
nodename <node name>
passwordaccess generate
vmbackuptype hypervfull
2. For each virtual machine that you want to back up, create a separate script file. A unique file is
needed to ensure that a log is saved for each backup. For example, create a file that is named
[Link]. Include the backup command, the name of the virtual machine, the client options file,
and the log file that you want to create on the first line. On the second line, include the word exit.
For example:
Repeat this step for each virtual machine that you want to back up.
3. Create a backup schedule file, for example, hv_backup.cmd.
4. Add an entry to hv_backup.cmd for each virtual machine script file that you created. For example:
start [Link]
choice /T 10 /C X /D X /N > NUL
start [Link]
choice /T 10 /C X /D X /N > NUL
start [Link]
choice /T 10 /C X /D X /N > NUL
[Link]
5. Issue the UPDATE SCHEDULE server command to update the predefined HYPERV_FULL_10PM
schedule. Specify the full path for the Hyper-V backup schedule file location in the OBJECTS
parameter.
IBM Spectrum Protect for Virtual Environments
To create new schedules, use the Data Protection for VMware vCenter plug-in GUI.
Overview
The IBM Spectrum Protect family of products includes the following major accessibility features:
• Keyboard-only operation
• Operations that use a screen reader
The IBM Spectrum Protect family of products uses the latest W3C Standard, WAI-ARIA 1.0
([Link]/TR/wai-aria/), to ensure compliance with US Section 508 and Web Content Accessibility
Guidelines (WCAG) 2.0 ([Link]/TR/WCAG20/). To take advantage of accessibility features, use the
latest release of your screen reader and the latest web browser that is supported by the product.
The product documentation in IBM Documentation is enabled for accessibility.
Keyboard navigation
This product uses standard navigation keys.
Interface information
User interfaces do not have content that flashes 2 - 55 times per second.
Web user interfaces rely on cascading style sheets to render content properly and to provide a usable
experience. The application provides an equivalent way for low-vision users to use system display
settings, including high-contrast mode. You can control font size by using the device or web browser
settings.
Web user interfaces include WAI-ARIA navigational landmarks that you can use to quickly navigate to
functional areas in the application.
Vendor software
The IBM Spectrum Protect product family includes certain vendor software that is not covered under the
IBM license agreement. IBM makes no representation about the accessibility features of these products.
Contact the vendor for accessibility information about its products.
TTY service
800-IBM-3383 (800-426-3383)
(within North America)
For more information about the commitment that IBM has to accessibility, see IBM Accessibility
([Link]/able).
For license inquiries regarding double-byte character set (DBCS) information, contact the IBM Intellectual
Property Department in your country or send inquiries, in writing, to:
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs
in any form without payment to IBM, for the purposes of developing, using, marketing or distributing
application programs conforming to the application programming interface for the operating platform
for which the sample programs are written. These examples have not been thoroughly tested under
all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these
programs. The sample programs are provided "AS IS", without warranty of any kind. IBM shall not be
liable for any damages arising out of your use of the sample programs.
Each copy or any portion of these sample programs or any derivative work must include a copyright
notice as follows: © (your company name) (year). Portions of this code are derived from IBM Corp. Sample
Programs. © Copyright IBM Corp. _enter the year or years_.
Trademarks
IBM, the IBM logo, and [Link]® are trademarks or registered trademarks of International Business
Machines Corp., registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at
"Copyright and trademark information" at [Link]/legal/[Link].
Adobe is a registered trademark of Adobe Systems Incorporated in the United States, and/or other
countries.
Linear Tape-Open, LTO, and Ultrium are trademarks of HP, IBM Corp. and Quantum in the U.S. and other
countries.
Intel and Itanium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the
United States and other countries.
The registered trademark Linux is used pursuant to a sublicense from the Linux Foundation, the exclusive
licensee of Linus Torvalds, owner of the mark on a worldwide basis.
Microsoft, Windows, and Windows NT are trademarks of Microsoft Corporation in the United States, other
countries, or both.
Java™ and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or
its affiliates.
Red Hat®, OpenShift®, Ansible®, and Ceph® are trademarks or registered trademarks of Red Hat, Inc. or its
subsidiaries in the United States and other countries.
UNIX is a registered trademark of The Open Group in the United States and other countries.
VMware, VMware vCenter Server, and VMware vSphere are registered trademarks or trademarks of
VMware, Inc. or its subsidiaries in the United States and/or other jurisdictions.
Index
A
accessibility features 81
ACTIVELOGSIZE server option 44
ALLOWREORGINDEX server option 51
ALLOWREORGTABLE server option 51
ARCHLOGCOMPRESS server option 44
[Link] file 44

B
BIOS settings 23
blueprint
  customization 77
  large system 20, 21, 31
  medium system 18
  small system 17
blueprint configuration script
  compression option 41
  configuring with 41
  planning for 10
  response file 71
  testing 48
  troubleshooting 48

C
COMMTIMEOUT server option 44
compression 41
configuration
  clean up script 48
  customizing 77
  prerequisites 5
configuring
  Db2 database 41
  disk systems 23
  file systems 29, 30
  hardware 23
  IBM Elastic Storage System 31
  IBM Spectrum Protect server 41
  RAID arrays 15, 23
  Storwize V5010 system 65
  Storwize V5030 system 65
  TCP/IP settings 25
  volumes 15, 23

D
data replication 53
database
  configuration of 41
  reorganizing tables and indexes 51
Db2 -locklist parameter 43
DEDUPDELETIONTHREADS server option 44
DEDUPREQUIRESBACKUP server option 44
DEFINE ASSOCIATION server command 49
DEVCONFIG server option 44
directories
  creating 29, 30
disability 81
DISABLEREORGTABLE server option 51
disk
  configuring with command line 65
documentation 53

E
EXPINTERVAL server option 44

F
file systems
  creating 29, 30
  planning for 10

H
hardware requirements 5, 7–9

I
IBM Elastic Storage System
  configuring 31
  hardware requirements 9
  storage blueprint 21
IBM FlashSystem 5015
  storage configuration for small systems 17
IBM FlashSystem 5035
  storage configuration for medium systems 18
IBM FlashSystem 5200
  storage configuration for large systems 20
IBM Knowledge Center 53
IBM Spectrum Protect directories
  planning for 10
IBM Spectrum Protect server
  cleaning up after a failed configuration attempt 48
  configuring 41
  schedules 41
IBM Spectrum Scale
  configuring 31
  installing 31
IDLETIMEOUT server option 44
installing
  IBM Spectrum Protect backup-archive client 37
  IBM Spectrum Protect server 38, 39
  IBM Spectrum Scale 31
  obtaining IBM Spectrum Protect server installation packages 38
  operating system 25
  Red Hat Enterprise Linux 25
iostat command 34

K
kernel parameters 41
keyboard 81

L
ldeedee program 34
Lenovo ThinkSystem SR650
  hardware requirements 7–9
Linux commands
  dd command 34
  iostat command 34
  passwd 49

M
MAXSESSIONS server option 44
MDisk 15
mkdir command 30
mkfs command 30
mount command 30
multipath I/O for disk storage 28

N
NUMOPENVOLSALLOWED server option 44

O
Operations Center 53, 79

P
passwd command 49
passwords
  default 49
  updating 49
performance
  evaluating 57
  extra small system 57
  large system 60
  medium system 59
  small system 58
  testing 34
  workload simulation tool 61
performance results 57
planning worksheet 10

Q
QUERY EVENT server command 49

R
RAID arrays 15
Red Hat Enterprise Linux x86-64 10
REGISTER NODE server command 49
registering nodes to the server 49
replication storage rules 53
replication subrules 53

S
schedules
  client 41
  predefined client 73
  server 41
script
  blueprint configuration script 41
  configuration clean up 48
  storage preparation 30
  workload simulation tool 34
server
  determining the size of 3
  installing 38, 39
  obtaining installation packages 38
server commands
  DEFINE ASSOCIATION 49
  QUERY EVENT 49
  REGISTER NODE 49
  SET ACTLOGRETENTION 43
  SET EVENTRETENTION 43
  SET MAXSCHEDSESSIONS 43
server options
  ACTIVELOGSIZE 44
  ALLOWREORGINDEX 45
  ALLOWREORGTABLE 45
  ARCHLOGCOMPRESS 44
  COMMTIMEOUT 44
  DEDUPDELETIONTHREADS 44
  DEDUPREQUIRESBACKUP 44
  DEVCONFIG 44
  DIRECTIO 44
  DISABLEREORGINDEX 45
  DISABLEREORGTABLE 45
  EXPINTERVAL 44
  IDLETIMEOUT 44
  MAXSESSIONS 44
  NUMOPENVOLSALLOWED 44
  REORGBEGINTIME 45
  REORGDURATION 45
  VOLUMEHISTORY 44
SET ACTLOGRETENTION server command 43
SET EVENTRETENTION server command 43
SET MAXSCHEDSESSIONS server command 43
software prerequisites 10
storage configuration
  planning for 10
storage layout
  large system 20, 21, 31
  medium system 18
  small system 17
storage preparation 29
Storwize V5010 systems
  hardware requirements 7
Storwize V5030 systems
  hardware requirements 8
Supermicro SuperServer 2029U-E1CRT
  hardware requirements 7
system setup 23

T
tasks for configuration 23
testing system performance 34
total managed data 3
troubleshooting 79

U
ulimits 43

V
VDisk 31
virtual hardware requirements 6
VOLUMEHISTORY server option 44

W
What's new vii
workload simulation tool 34, 61
IBM®