Tivoli Netcool Performance Manager
Wireline Component
Installation Guide
Document Revision R2E2
IBM
Note
Before using this information and the product it supports, read the information in “Notices” on page 315.
Contents

Tivoli Netcool Performance Manager component problems . . . 308
Topology Editor problems . . . 308
Tivoli Netcool Performance Manager DB2 database schema fails . . . 308
Telnet problems . . . 309
Java problems . . . 309
Testing connectivity to the database . . . 310
Testing external procedure call access . . . 310
Appendix J. Moving DataView content between Dashboard Application Services Hub servers . . . 313
The synchronize command . . . 313
Notices . . . 315
Trademarks . . . 317
Terms and conditions for product documentation . . . 318
Installing Tivoli Netcool Performance Manager - Wireline Component describes how
to install and configure Tivoli Netcool Performance Manager by using Oracle
and DB2® databases.
Note: If you are upgrading Tivoli Netcool Performance Manager, see the IBM
Tivoli Netcool Performance Manager Upgrade Guide - Wireline component.
Important: Before you install Tivoli Netcool Performance Manager, read the
release notes for the wireline component. Release notes can contain information
specific to your installation that is not contained in this guide. Failure to
consult the release notes might result in a corrupted, incomplete, or failed
installation.
Intended audience
This information is intended for:
The following figure shows the different Tivoli Netcool Performance Manager
modules.
If you are a user of the C shell or Tcsh, make the necessary adjustments in the
commands shown as examples throughout this manual.
Support information
If you have a problem with your IBM software, you want to resolve it quickly. IBM
provides the following ways for you to obtain the support you need:
Online
Access the IBM Software Support site at http://www.ibm.com/software/support/probsub.html.
Typeface conventions
This publication uses the following typeface conventions:
Bold
v Lowercase commands and mixed case commands that are otherwise
difficult to distinguish from surrounding text
v Interface controls (check boxes, push buttons, radio buttons, spin
buttons, fields, folders, icons, list boxes, items inside list boxes,
multicolumn lists, containers, menu choices, menu names, tabs, property
sheets), labels (such as Tip:, and Operating system considerations:)
v Keywords and parameters in text
Italic
v Citations (examples: titles of publications, diskettes, and CDs)
v Words defined in text (example: a nonswitched line is called a
point-to-point line)
v Emphasis of words and letters (words as words example: "Use the word
that to introduce a restrictive clause."; letters as letters example: "The
LUN address must start with the letter L.")
v New terms in text (except in a definition list): a view is a frame in a
workspace that contains data.
v Variables and values you must provide: ... where myname represents....
Monospace
v Examples and code examples
v File names, programming keywords, and other elements that are difficult
to distinguish from surrounding text
v Message text and prompts addressed to the user
v Text that the user must type
v Values for arguments or command options
Bold monospace
v Command names, and names of macros and utilities that you can type
as commands
v Environment variable names in text
v Keywords
Tivoli Netcool Performance Manager 1.4.2 supports both Oracle and DB2
databases.
You can work with Professional Services to plan and size the deployment of Tivoli
Netcool Performance Manager components in your environment.
Co-location rules
Allowed component deployment numbers and co-location rules.
Table 1 lists how many of each component can be deployed per Tivoli Netcool
Performance Manager system and whether multiple instances can be installed on
the same server.
In this table:
v N - Depends on how many subchannels there are per channel, and how many
channels there are per system. For example, if there are 40 subchannels per
channel and 8 channels, theoretically N=320. However, the practical limit is
probably much lower.
v System - The entire Tivoli Netcool Performance Manager system.
v Per host - A single physical host can be partitioned using zones, which
effectively gives you multiple hosts.
Note: All CME, DLDR, FTE, and LDR components within a channel must share
the same filesystem.
Table 1. Co-location rules (component; number of instances allowed; co-location constraints; supported by Deployer?)

v AMGR: one per host that supports DataChannel components. Deployer: Yes.
v BCOL: N per system; one per corresponding subchannel. Deployer: Yes.
v CME: one per subchannel. Constraint: filesystem. Deployer: Yes.
v CMGR: one per system. Deployer: Yes.
v Database: one per system. Deployer: Yes.
v Database channel: one per DataChannel; maximum of 8. Deployer: Yes.
v DataLoad (SNMP collector): N per system; one per corresponding subchannel; one per host. Deployer: Yes.
v DataMart: N per system; one per host. Deployer: Yes.
v DataView: N per system; one per host. Constraint: one per system.
v Discovery Server: N per system; one per host. Constraint: co-locate with the corresponding DataMart. Deployer: Yes.
v DLDR: one per channel. Constraint: filesystem. Deployer: Yes.
v FTE: one per subchannel. Constraint: filesystem. Deployer: Yes.
v HAM: N+M per system, where N is the number of collectors that HAM is monitoring, and M is the number of standby collectors. Deployer: Yes.
v LDR: one per channel. Constraint: filesystem. Deployer: Yes.
v Log: one per system. Deployer: Yes.
v UBA (simple): N per system; one per corresponding subchannel. Deployer: Yes.
v UBA (complex): pack-dependent. Constraint: pack-dependent. Deployer: pack-dependent.
v In the Logical view of the Topology Editor, the DataChannel component contains
the subchannels, LDR, and DLDR components, with a maximum of 8 channels
per system. The subchannel contains the collector, FTE, and CME, with a
maximum of 40 subchannels per channel.
Inheritance
Inheritance is the method by which a parent object propagates its property values
to a child component.
Keep the following rules in mind when dealing with these properties:
v A child property might be read-only, but is not always.
v If the child property is not read-only, it can be changed to a value different
from the parent property.
v If the parent property changes, and the child and parent properties were the
same before the change, the child property is changed to reflect the new
parent property value.
Note: When performing an installation that uses non-default values, that is,
non-default usernames, passwords and locations, it is recommended that you
check both the Logical view and Physical view to ensure that they both contain the
correct values before proceeding with the installation.
Example
As an example of how a new component inherits property values:
The Disk Usage Server (DUS) is a child component of the Host object. The DUS
Remote User property inherits its value from the Host PV User Property on
creation of the DUS. The DUS property value will be taken from the Host property
value.
If we change the Host PV User Property value, it gets pushed down to the DUS
Remote User property value, updating it. The associated Default Value is also
updated.
If we change the DUS Remote User property value, that is the child value, it does
not propagate up to the host; the parent Host PV User Property value remains
unchanged.
Now the child and parent properties are out of sync, and if we change the parent
property value it is not reflected in the child property, though the default value
continues to be updated.
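A minimal sketch of these propagation rules, using the Host PV User and DUS Remote User properties from the example (the class and attribute names here are illustrative, not part of the product):

```python
class ParentProperty:
    """Models a parent property, for example the Host PV User property."""

    def __init__(self, value):
        self.value = value
        self.children = []

    def set(self, new_value):
        for child in self.children:
            # Propagate only if child and parent were in sync before the change.
            if child.value == self.value:
                child.value = new_value
            # The child's associated default value tracks the parent regardless.
            child.default = new_value
        self.value = new_value


class ChildProperty:
    """Models a child property, for example the DUS Remote User property."""

    def __init__(self, parent):
        self.value = parent.value    # inherited on creation
        self.default = parent.value
        parent.children.append(self)


host_pv_user = ParentProperty("pvuser")
dus_remote_user = ChildProperty(host_pv_user)

host_pv_user.set("pvadmin")          # in sync: the child follows the parent
print(dus_remote_user.value)         # pvadmin

dus_remote_user.value = "custom"     # child changed: no upward propagation
host_pv_user.set("pvuser2")          # out of sync: the child keeps its value...
print(dus_remote_user.value)         # custom
print(dus_remote_user.default)       # ...but the default value is still updated
```

Note how the final parent change leaves the out-of-sync child value untouched while still updating its default, matching the behavior described above.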
Server time synchronization
Server synchronization requirements.
Tivoli Netcool Performance Manager requires that the clocks on all Solaris servers
running Tivoli Netcool Performance Manager modules be synchronized. IBM
recommends using NTP (or equivalent) to keep all Tivoli Netcool Performance
Manager servers synchronized to within 500 milliseconds.
Multiple Disk Usage Servers can be configured per host, thereby allowing
multiple DataChannel directories to exist on a single host. There are two major
reasons why a user might want to configure multiple Disk Usage Servers:
Chapter 1. Introduction 5
Disk space is running low
Disk space might be impacted by the addition of a new DataChannel
component, in which case the user might want to add a new file system
managed by a new Disk Usage Server.
Separate disk quota management
The user may want to separately manage the quotas assigned to discrete
DataChannel components. For more information, see “Disk quota
management.”
The user can assign the management of a new file system to a Disk Usage Server
by editing the local_root_directory property of that Disk Usage Server using the
Topology Editor. The user can then add DataChannel components to the host, and
can assign the component to a Disk Usage Server, either in the creation wizard or
by editing the DUS_NUMBER property inside the component.
The addition of a Disk Usage Server makes the process of assigning space to a
component much easier than it was previously. A user is no longer required to
calculate the requirements of each component and assign that space individually;
components now work together to use the space under the Disk Usage Server more
effectively. The user is also relieved of working out which component needs extra
space and then changing the quota for that component: changing the quota of the
Disk Usage Server updates all components on that server, which share the space
on an as-needed basis.
Flow control
Flow Control description.
Optimized flow control further eliminates problems with component-level quotas.
Each component holds on to only five hours of input and output; once it reaches
this limit, it stops processing until the downstream component picks up some of
the data. This avoids the cascading scenario where one component stops processing
and the components feeding it stockpile files, filling the quota and causing all
components to shut down because they have run out of file space.
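The flow-control rule can be sketched as a bounded output buffer; the five-hour limit is modeled here as a simple item count, and the class name is illustrative:

```python
from collections import deque

class ChannelComponent:
    """Sketch of the flow-control rule: buffer a bounded amount of output
    and pause processing while the buffer is full."""

    def __init__(self, capacity):
        self.capacity = capacity          # stands in for "five hours of output"
        self.output = deque()

    def process(self, item):
        """Process one input item; return False if paused by flow control."""
        if len(self.output) >= self.capacity:
            return False                  # stop until downstream drains
        self.output.append(item)
        return True

    def drain(self):
        """The downstream component picks up one item of data."""
        return self.output.popleft() if self.output else None


cme = ChannelComponent(capacity=3)
for i in range(3):
    cme.process(i)
print(cme.process(99))   # False: buffer full, processing pauses
cme.drain()              # downstream picks up some data
print(cme.process(99))   # True: processing resumes
```

Because each component pauses itself instead of letting files pile up, no upstream component ever fills its quota and shuts down.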
DataChannel components can only be added to hosts that include a Disk Usage
Server.
DataLoad
The following sections provide information on the supported architecture
configurations of DataLoad.
Collectors:
Collectors description.
The DataLoad collector takes in the unrefined network data and stores it in a file
that Tivoli Netcool Performance Manager can read. This file is known as a binary
object format file (BOF).
You can install multiple SNMP DataLoad collectors on the same host.
In Tivoli Netcool Performance Manager 1.4.2, you can have multiple collectors per
host, which reduces the server resources required. Some customer sites have up to
40 SNMP collectors that previously required 40 hosts (any of which might be
virtual hosts). Allowing multiple collectors on a single host reduces the number
of servers (physical or virtualized) but does not change the disk or CPU
requirements. A maximum of 16 collectors can be configured on a single host;
however, each additional collector requires 4 GB of RAM to function efficiently.
This makes the solution cost-effective, with minimal changes to the topology,
both for service providers with their current networks and for service providers
with growing networks.
CPU improvements once came in the form of faster cores but now come as more
cores. This change in computer hardware architecture drives changes in how Tivoli
Netcool Performance Manager scales. Tivoli Netcool Performance Manager used to
scale by adding commodity hardware, so there was rarely a need to put multiple
SNMP DataLoad collectors on a single box; now, however, commodity hardware is
frequently a 4-16 core machine, and running multiple collectors per host adds
value to Tivoli Netcool Performance Manager scalability.
Installation or topology considerations:
The DataLoad modules can be loaded on lightweight servers and placed as close to
the network as possible (often inside the network firewall). Because a DataLoad
module does not contain a database, the hardware can be relatively inexpensive
and can still reliably handle high volumes of data. The number of collectors is in
turn driven by the number of required Technology Packs.
The number of collectors in your system affects the topology configuration. You
can have multiple BULK collectors, UBA or BCOL, on a single host. Starting from
Tivoli Netcool Performance Manager 1.4.2, you can add and configure multiple
SNMP collectors on the same host. The following is an example server deployment
for multiple SNMP DataLoad collectors.
Figure: Two servers with four DataChannels and four SNMP collectors topology.
DataChannel subchannels:
Subchannel support.
DataChannel remote:
This option is primarily intended for Managed Network Service Providers who
need to place the DataChannel component at an end-user site, co-located with the
data collector. The benefits of the DataChannel Remote option are:
v Allows Managed Network Service Providers to better align costs with revenues
as they scale their systems to support more end customers.
v Improves distributed Fault management
v Closer to alarm receiver and correlation
v Threshold detection resistant to disconnection
v Reduces hardware costs
v Leverages available CPU if the remote DataLoad host is underloaded (unused
CPU can be used by the DataChannel Remote CME)
For more information about this option, see “SNMP version support” on page 26.
DataChannel "Standard":
DataChannel "Standard" is usually deployed on the central site and requires the
standard DataChannel module.
For firewall configuration recommendations, see “Firewall configuration” on page
16.
In this mode, Tivoli Netcool Performance Manager allows the CME to be installed
on a remote system away from the rest of the DataChannel components (such as
customer premise equipment).
The DataChannel Remote system is composed of the following elements and obeys
the stated execution order:
1. Collector
2. File Transfer Engine
3. Complex Metric Engine
All DataChannel Remote elements must be installed on the same server and file
system.
Multiple DataChannel Remote systems can plug into the DataChannel system,
which is composed of the following elements and obeys the following order of
execution:
This section describes some basic scenarios for introducing Tivoli Netcool
Performance Manager into the Solaris 10 operating environment, and options for
deploying Tivoli Netcool Performance Manager modules within Solaris 10
containers. It is not an attempt to describe all possible deployment options.
Note: Do not install more than one DataLoad SNMP collector in any given Solaris
10 container. Installing two or more collectors in a container will cause
performance to degrade due to those collectors competing for CPU resources.
With the one exception noted above, all deployments of Tivoli Netcool
Performance Manager modules within Solaris 10 containers are possible - for
example:
v All modules can be installed within a single container on the same Solaris 10
server.
v Each module can be installed in a separate container on separate Solaris 10
servers.
v Any combination of module deployments in between the above deployments is
also possible (with the exception of the SNMP collector restriction noted above).
you to adapt the rules for Elements that do not support the expected MIB
variables (the one Tivoli Netcool Performance Manager uses to implement the
uniqueness and correlation rules).
v The Inventory GUI supports several features, typically implemented with hook
scripts. The new architecture makes the hook scripts simpler and more
maintainable.
v Log messages issued by the Discovery process now comply with the centralized
LOG mechanism (that is, support has been added for logging to syslog).
The following sections explain the supported Inventory deployments (BULK and
SNMP) and help you make deployment choices related to the SNMP Discovery
server.
For performance reasons, this configuration is unlikely, given the low impact of
the Inventory process on the overall system.
Deployment configurations.
DataChannel Remote and Bulk Collector remote:
Deployment configurations.
DataChannel and SNMP Collector remote, and SNMP inventory central:
Deployment configurations.
Figure 8: DataChannel and SNMP Collector Remote, and SNMP Inventory Central
Related information:
Firewall configuration:
Firewall Configuration.
DataChannel standard:
This section explains Tivoli Netcool Performance Manager firewall support, and
the protocols to open through them. The direction of the connection opening is
shown by the following arrows.
Note: Modules are also centrally managed, so the management protocols must also
be opened through the firewalls.
As shown in the following figure, aside from file transfer and database
connectivity, management protocols need to pass through the firewalls. DataMart,
using the internal 3002 protocol, manages SNMP Collectors. The Channel
Management modules, through CORBA, manage the Bulk Collectors and the
DataChannel modules. Additionally, these modules log through UDP to a remote
logger.
Note: You can use a port other than port 3002.
The Topology Editor can be used to statically set many of these ports.
Important: The Topology Editor can set the CORBA port for the management
components (except the log server, which does not use CORBA), but not for the
regular components (CMEs and FTEs, for example). Normally, the CORBA ports
for regular components are dynamic, because they communicate only with the
AMGR, which is on the same machine and not affected by firewalls.
Whether you are upgrading an existing installation or installing for the first
time, you can configure the static ports by using the Topology Editor.
Note: If you have an installation configuration file, you can use it when you run
the Topology Editor. Or, you can re-enter all relevant information and save the
installation configuration file and use it for the next time you upgrade.
Procedure
1. Determine which ports you can set statically. For more information, see
“Choosing static ports” on page 18. Open these ports in the appropriate
firewalls.
2. Set the ports for the following protocols in the Topology Editor:
v Log Server port
v IOR Server port
v Name service port
v Channel Manager CORBA port
v For each host: the Application Manager CORBA port
v For each Sub-Channel component: the trap port
Note: There are other protocols that you must open in the firewall that are not
configured here.
3. Install the DataChannel. Save the installer configuration file.
4. For each Bulk Collector that is running in DISCOVERY_MODE==noinventory,
manually edit the topology and add 3002_SERVICE_PORT for each Collector. For
example, BCOL.2.36.3002_SERVICE_PORT=4003
5. Restart the channel management and the channels.
DataView and DataMart also need ports opened in the firewall; see “Standard
Deployment of DataChannel” on page 19. See the DataView and DataMart
documentation for information about configuring ports, if necessary.
What to do next
When the FTE and CME components are on a remote DataChannel server, a static
port must be configured so that the FTE can communicate with the PBL from the
local DataChannel.
Note: This configured port must be allowed to access through the firewall.
You need to choose which ports to use. Ask your Network Administrator for these
ports or choose the ports yourself, based on “Standard Deployment of
DataChannel” on page 19. Some ports used by Tivoli Netcool Performance
Manager are standard ports, like FTP. Other ports you need to choose. You can use
the same port on different machines. For example, you can assign the same port to
all Application Managers.
There are several places you can look to see if a given port is already in use. First
choose the range, then check to see if it is in use.
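One quick way to check a candidate port is to try connecting to it. The sketch below uses Python's standard socket module; the port numbers shown are examples only, not mandated values:

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """Return True if something is already listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(1)
        # connect_ex returns 0 on a successful connection, an errno otherwise.
        return s.connect_ex((host, port)) == 0

# Example: candidate static ports for the Channel Manager and
# Application Manager ORBs (illustrative values).
for candidate in (9001, 9002):
    state = "in use" if port_in_use(candidate) else "apparently free"
    print(f"port {candidate}: {state}")
```

A free port today can still be taken later by a dynamically allocated service, so record your chosen static ports in the Topology Editor as described above.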
Active and Passive FTP (Alcatel-Lucent 5620 SAM technology pack only):
The Alcatel-Lucent 5620 SAM Technology Pack supports both active and passive
FTP through the firewall.
Use the following guidelines when specifying passive or active data transfer mode
in URI (Universal Resource Identifier) specifications:
v By default, the Alcatel-Lucent 5620 SAM uses active connections (traditional FTP
mode). You can, however, explicitly specify active data transfer mode by
appending ;mode=active to the end of the FTP (File Transfer Protocol) URI
(Universal Resource Identifier) as in the following example:
ftp://loginname:password@hostname//full/path/to/resource;mode=active
v To use passive data transfer mode, you must explicitly specify passive mode by
appending ;mode=passive to the end of the FTP URI, as in the following
example:
ftp://loginname:password@hostname//full/path/to/resource;mode=passive
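The URI form can be assembled mechanically. This sketch simply builds the string with the ;mode= suffix; the login, host, and path values are placeholders, and the helper function is illustrative rather than part of the pack:

```python
def ftp_uri(login, password, host, path, mode="active"):
    """Build an FTP URI with an explicit ;mode= transfer-mode suffix."""
    if mode not in ("active", "passive"):
        raise ValueError("mode must be 'active' or 'passive'")
    return f"ftp://{login}:{password}@{host}//{path};mode={mode}"

print(ftp_uri("loginname", "password", "hostname",
              "full/path/to/resource", mode="passive"))
# ftp://loginname:password@hostname//full/path/to/resource;mode=passive
```

Validating the mode up front avoids silently producing a URI that the pack would fall back to interpreting as active mode.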
The following table lists the module-to-protocol mappings for the standard
deployment of DataChannel.
Each entry lists the module pair or service, the protocol and port, the port type, the direction of the connection opening, and the property setting in the Topology Editor:

v SNMP DataLoad - DataChannel (FTP for collection): TCP 20, 21 or sFTP/TCP 22. Static, non-configurable. Direction: ◄.
v SNMP DataLoad - DataChannel (Channel Manager access to SNMP collector, for real time): TCP 3002. Static, configurable. Direction: ◄. For each SNMP Collector: PVM_SNMP_COLL_SSDPORT=3002 and PVM_SSDPORT=3002 (these are entered using SERVICE_PORT in the Topology Editor: DataChannel > Admin Components > Channel Manager and DataChannel > Admin Components > CORBA Naming Server Host).
v Bulk DataLoad - DataChannel (FTP or SFTP metric collection): TCP 20, 21; SFTP 22. Static, non-configurable. Direction: ◄. See the Technical Note Tivoli Netcool Performance Manager DataChannel Secure File Transfer Installation in the file tech note - sftp.pdf on the CD.
v LOG: UDP 25000. Static, configurable. Direction: ►. DC configuration: GLOBAL.LOG_PORT.
v CNS for acquiring the name service: TCP 45107. Static, configurable. Direction: ►. DC configuration: GLOBAL.ORB_NAMESERVICE_PORT.
v Channel Name Service ORB: TCP 9005. Static, configurable. Direction: ►. DC configuration: CNS.CORBA_PORT.
v Channel Manager ORB: TCP 9001. Static, configurable. Direction: ►. DC configuration: CMGR.CORBA_PORT.
v Application Manager ORB: TCP 9002. Static, configurable. Direction: ◄. DC configuration: AMGR.<hostname>.CORBA_PORT (AMGR.<hostname> is a property taken from the location chosen for the Disk Usage Server host).
v Trapd service: TCP 162. Static, configurable. Direction: ►. DC configuration: CME.x.y.TRAPD_PORT.
v SNMP DataLoad - DataMart (Automatic Discovery): TCP 3002. Static, configurable. Direction: ◄. On each SNMP Collector: PVM_SNMP_COLL_SSDPORT=3002 and PVM_SSDPORT=3002 (these are entered using SERVICE_PORT in the Topology Editor: DataChannel > Admin Components > Channel Manager and DataChannel > Admin Components > CORBA Naming Server Host).
v SNMP DataLoad - DataMart (TCP for configuration reload trigger): TCP 3002. Static, configurable. Direction: ◄.
v Bulk DataLoad - DataMart (FTP or SFTP inventory): TCP 20, 21; SFTP 22. Static, non-configurable. Direction: ►►. See the Technical Note Tivoli Netcool Performance Manager DataChannel Secure File Transfer Installation in the file tech note - sftp.pdf on the CD.
v Bulk DataLoad - DataMart (TCP for configuration reload trigger): TCP 3002. Static, configurable. Direction: ◄. DC configuration: BCOL.x.x.3002_SERVICE_PORT.
The following table lists the module-to-protocol mappings for integrating with the
Alcatel 5620 NM.
v 5620 - DataLoad Bulk Load Balancer (BLB): FTP and FTP Data for binary transfer during remote installation. TCP 20, 21. Static, non-configurable. Direction: ◄.
The following table lists the module-to-protocol mappings for integrating with the
Alcatel-Lucent 5620 SAM.
The following table lists the module-to-protocol mappings for integrating with the
Cisco CWM.
Radcom Probe:
Radcom Probe.
The following table lists the module-to-protocol mappings for integrating with the
Radcom Probe.
This section describes the additional ports to open for the DataChannel "Remote
Channel Option."
The following table lists the module-to-protocol mappings for the DataChannel
remote option. For more information, see “SNMP version support” on page 26.
v DCR represents the Remote server supporting DataLoad and the Sub-Channel
component of DataChannel.
v DCL represents the other modules of the DataChannel (Loader), deployed
Locally.
v Fault Management external station - This is not a Tivoli Netcool Performance
Manager module.
v There is no default line created in the configuration file by the installer,
but this setting is used if configured.
DataMart
The following sections provide information on the supported architecture
configurations of DataMart.
SNMP inventory:
This section highlights the various protocols exchanged between Tivoli Netcool
Performance Manager components. It can be used to work out the relevant
firewall configuration depending on where the various components reside (SNMP
Discovery process, DataChannel).
Note: For most deployments, where the Discovery server is on the central site,
these ports do not have to be opened, except the one described in the second row
of the table.
SNMP version support:
Tivoli Netcool Performance Manager can automatically detect the SNMP version
supported by each element on the network, between SNMP Versions 1, 2c, and 3.
Tivoli Netcool Performance Manager supports the following PDU sets for the
following SNMP versions:
v SNMP Version 1 - Get / GetNext / Set
v SNMP Version 2c - Get / GetNext / GetBulk / Set
v SNMP Version 3 - Get / GetNext / GetBulk / Set, MD5 and SHA-1
authentication (with AES and DES private encryption)
Note: For more information about SNMPv3 support, see Installing libcrypto.so in
Installing Tivoli Netcool Performance Manager - Wireline Component.
Technology packs
Technology packs description.
If you are creating a UBA collector, you must associate it with a specific technology
pack.
Note: General installation information for technology packs can be found in
Installing and Configuring Technology Packs; pack-specific installation information is
also provided. Consult both sets of documentation for important installation or
topology information.
Intermediate topology scenario
An intermediate example topology scenario.
Table 3. Tivoli Netcool Performance Manager intermediate topology scenario
Tivoli Netcool Performance
Manager Components
Server Name Hosted Notes
Table 4. Tivoli Netcool Performance Manager advanced topology scenario (continued)
Tivoli Netcool Performance
Manager Components
Server Name Hosted Notes
High Availability
High Availability description.
The following High Availability (HA) documents are available for download from
the Integrated Service Management Library (ISML),
https://www-304.ibm.com/software/brandcatalog/ismlibrary/.
Tivoli Netcool Performance Manager Wireline Cluster Operations Guide 1.4.2
Describes an example system that was configured to provide high
availability.
For more information about High Availability Manager, see Chapter 9, “Using the
High Availability Manager,” on page 201.
The HAM must be installed on the same machine as the Channel Manager.
32 IBM Tivoli Netcool Performance Manager: Installation Guide
Chapter 2. Requirements
Details of all Tivoli Netcool Performance Manager requirements.
The IBM Prerequisite Scanner can be used to check for the presence of Tivoli
Netcool Performance Manager requirements.
Product codes that are used to support Tivoli Netcool Performance Manager are as
follows:
This table shows the environment variables that must be set for Tivoli Netcool
Performance Manager after the Oracle database is installed, to verify that the
installation is successful:
To use the IBM Prerequisite Scanner on Tivoli Netcool Performance Manager 1.4.2,
you must install the 1.4.2.0-TIV-TNPM-IF0005 interim fix from IBM Fix Central.
Note: You must run the IBM Prerequisite Scanner tool as the db2 user. If you are
using a non-default DB2 user name, you must use that name to run this tool.
Pre-requisite software
Table 5. Pre-requisite software
Product: Operating system (64-bit only)
Version:
v Red Hat Enterprise Linux (RHEL) 6.x and 7.2 for a fresh installation
v Red Hat Enterprise Linux (RHEL) 6.x for a Tivoli Netcool Performance Manager upgrade
v IBM AIX 6.1 or 7.1
v Oracle Solaris 10 update 6
Note: For the required packages, see “Supported operating systems and modules”
on page 42.
Databases
Tivoli Netcool Performance Manager has the following minimum requirements for
the AIX environment:
v 3 x POWER7 (Quad Core) 3.0 GHz processors or 2 x POWER8 (Octo Core)
2.32 GHz processors
v 16 GB memory
v 2 x 146 GB HDD
Chapter 2. Requirements 35
v 4 GB RAM; 146 GB disk space.
Tivoli Netcool Performance Manager has the following minimum requirements for
the Linux environment:
v 3 x Intel Xeon 5500/5600 series processors (quad-core), 2.4 GHz or greater.
v 16 GB memory
v 2 x 300 GB HDD
When you install Oracle, the host must conform to the previously stated hardware
requirements for AIX, Solaris, and Linux. However, the database might experience
problems if sufficient swap space is not provided.
v The same amount of swap as RAM must be present on the Oracle server host in
a distributed Tivoli Netcool Performance Manager system.
v Twice as much swap as RAM must be present on the Oracle server host for a
Tivoli Netcool Performance Manager proof-of-concept installation.
Table 6. RAM to swap space
RAM                      Swap space
Between 1 GB and 2 GB    1.5 times the size of RAM
Between 2 GB and 16 GB   Equal to the size of RAM
More than 16 GB          16 GB
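The table can be expressed as a simple rule. This sketch returns the recommended swap size in GB; treating exactly 2 GB of RAM under the first row is an assumption, since the table's ranges overlap at that boundary:

```python
def recommended_swap_gb(ram_gb):
    """Recommended swap space (GB) for a given amount of RAM (GB),
    per the RAM-to-swap table."""
    if ram_gb < 1:
        raise ValueError("the table covers 1 GB of RAM and above")
    if ram_gb <= 2:
        return ram_gb * 1.5     # between 1 GB and 2 GB
    if ram_gb <= 16:
        return float(ram_gb)    # between 2 GB and 16 GB
    return 16.0                 # more than 16 GB

for ram in (1.5, 8, 32):
    print(ram, "->", recommended_swap_gb(ram))
# 1.5 -> 2.25
# 8 -> 8.0
# 32 -> 16.0
```

For an Oracle server host, remember that the distributed-system and proof-of-concept rules above (equal to RAM, or twice RAM) take precedence over this general table.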
As your business requirements change, you must reassess your physical database
design. This reassessment must include periodic revisions of the design. If
necessary, make configuration and data layout changes to meet your business
requirements.
v Minimize I/O traffic.
v Balance design features that optimize query performance concurrently with
transaction performance and maintenance operations.
v Improve the performance of administration tasks such as index creation or
backup and recovery processing.
v Reduce the time database administrators spend in regular maintenance tasks.
v Minimize backup and recovery elapsed time.
v Reassess overall database design as business requirements change.
For more information about DB2 product configurations and best practices, see
https://ibm.biz/Bdx2ew.
Ensure that an appropriate amount of disk space is available for your DB2
environment, and allocate memory accordingly.
Disk requirements
The disk space that is required for your product depends on the type of
installation you choose and the type of file system you have. The DB2 Setup
wizard provides dynamic size estimates based on the components that are selected
during a typical, compact, or custom installation.
On Linux and UNIX operating systems, 2 GB of free space in the /tmp directory is
recommended, and at least 512 MB of free space in the /tmp directory is required.
Note: On Linux and UNIX operating systems, you must install your DB2 product
in an empty directory. If the directory that you have specified as the install path
contains subdirectories or files, your DB2 installation might fail.
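A pre-installation sanity check for the empty-directory requirement might look like the following sketch; the function name and the example path are illustrative only:

```python
import os

def is_valid_db2_install_path(path):
    """True if the path does not exist yet, or exists as an empty directory,
    as required for the DB2 install path."""
    if not os.path.exists(path):
        return True
    return os.path.isdir(path) and not os.listdir(path)

# Example path only; substitute your intended install location.
print(is_valid_db2_install_path("/opt/ibm/db2/V10.1"))
```

Running such a check before launching the DB2 Setup wizard avoids a failed installation caused by leftover files in the target directory.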
Memory requirements
Memory requirements are affected by the size and complexity of your database
system, the extent of database activity, and the number of clients accessing your
system. At a minimum, a DB2 database system requires 256 MB of RAM. For a
system running just a DB2 product and the DB2 GUI tools, a minimum of 512 MB
of RAM is required. However, 1 GB of RAM is recommended for improved
performance. These requirements do not include any additional memory
requirements for other software that is running on your system. For IBM data
server client support, these memory requirements are for a base of five concurrent
client connections. For every additional five client connections, an extra 16 MB of
RAM is required.
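The client-connection sizing rule above can be sketched as a shell helper. The base figures and the rounding of partial groups of five connections up to a full group are assumptions for illustration; the rule stated above only defines whole groups of five.

```shell
# Sketch: estimate RAM (MB) for IBM data server client support.
# The base figure covers five concurrent client connections; each
# additional group of five connections adds 16 MB. Partial groups
# are rounded up to a full group (assumption).
client_ram_mb() {
    base_mb=$1    # e.g. 512 for a DB2 product plus the DB2 GUI tools
    clients=$2    # number of concurrent client connections
    extra=0
    if [ "$clients" -gt 5 ]; then
        extra=$(( ( (clients - 5 + 4) / 5 ) * 16 ))
    fi
    echo $(( base_mb + extra ))
}

client_ram_mb 512 5    # prints 512
client_ram_mb 512 20   # prints 560 (three extra groups of five)
```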
For DB2 server products, the self-tuning memory manager (STMM) simplifies the
task of memory configuration by automatically setting values for several memory
configuration parameters. When enabled, the memory tuner dynamically
distributes available memory resources among several memory consumers
including sort, the package cache, the lock list, and buffer pools.
DB2 requires paging, also called swap space, to be enabled. This configuration is
required to support various DB2 functions that monitor or depend on knowledge of
swap/paging space utilization. The actual amount of swap/paging space that is
required varies across systems and is not solely based on memory utilization by
application software.
For more information, see Prerequisites for a DB2 database server installation (Linux
and UNIX) at http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.qb.server.doc/doc/c0059823.html
The disk space and memory that the database manager allocates based on default
parameter values might be sufficient to meet your needs. In some situations,
however, you might not achieve maximum performance by using these default
values.
Configuration files contain parameters that define values such as the resources
allocated to the DB2 database products and to individual databases, and the
diagnostic level. There are two types of configuration files:
The database manager configuration file for each DB2 instance
The database manager configuration file is created when a DB2 instance is
created. The parameters that it contains affect system resources at the
instance level, independent of any one database that is part of that
instance. Values for many of these parameters can be changed from the
system default values to improve performance or increase capacity,
depending on your system's configuration.
Database manager configuration parameters are stored in a file named
db2systm. This file is created when the instance of the database manager is
created. In Linux and UNIX environments, this file can be found in the
sqllib subdirectory for the instance of the database manager.
The database configuration file for each individual database
A database configuration file is created when a database is created, and is
located where that database exists. There is one configuration file per
database. Its parameters specify, among other things, the amount of
resource to be allocated to that database. Values for many of the
parameters can be changed to improve performance or increase capacity.
Different changes might be required, depending on the type of activity in a
specific database.
All database configuration parameters are stored in a file named
SQLDBCONF. These files cannot be edited directly; they can be changed or
viewed only through a supplied API, or through a tool that calls that API.
To change database manager configuration parameters, use the UPDATE
DATABASE MANAGER CONFIGURATION command. To change database
configuration parameters, use the UPDATE DATABASE CONFIGURATION
command.
Attention: If you edit db2systm, SQLDBCON, or SQLDBCONF by using a
method other than those provided by the database manager, you might
make the database unusable. Do not change these files by using methods
other than those documented and supported by the database manager.
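As a configuration sketch (not taken from this guide), the supported route looks like the following when run from the DB2 command line processor as the instance owner. MYDB and the chosen parameter values are illustrative placeholders only.

```shell
# Sketch: change configuration through the supported DB2 CLP commands
# rather than editing db2systm or SQLDBCONF directly.

# Instance-level (database manager) parameter, e.g. the diagnostic level:
db2 "UPDATE DATABASE MANAGER CONFIGURATION USING DIAGLEVEL 3"

# Database-level parameters for one database (MYDB is a placeholder):
db2 "UPDATE DATABASE CONFIGURATION FOR MYDB USING LOGPRIMARY 50 LOGSECOND -1"

# Review the current values:
db2 "GET DATABASE MANAGER CONFIGURATION"
db2 "GET DATABASE CONFIGURATION FOR MYDB"
```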
instance_memory
The instance_memory database manager configuration parameter specifies
the maximum amount of memory that can be allocated for a database
partition if you are using DB2 database products with memory usage
restrictions, or if you set it to a specific value. In Tivoli Netcool
Performance Manager 1.4.2, set this parameter to AUTOMATIC. This setting
allows instance memory to grow as needed.
database_memory
The database_memory configuration parameter specifies the size of the
database memory set. The database memory size counts towards any
instance memory limit in effect. The setting must be large enough to
accommodate the following configurable memory pools: bufferpools, the
database heap, the locklist, the utility heap, the package cache, the catalog
cache, the shared sort heap, and an additional minimum overflow area of
5%. In Tivoli Netcool Performance Manager 1.4.2, set this parameter to
AUTOMATIC. The initial database memory size is calculated based on the
underlying configuration requirements.
logprimary
Number of primary log files configuration parameter.
This parameter allows you to specify the number of primary log files to be
preallocated. The primary log files establish a fixed amount of storage that
is allocated to the recovery log files.
logsecond
Number of secondary log files configuration parameter.
This parameter specifies the number of secondary log files that are created
and used for recovery log files. The secondary log files are created only as
needed.
The following database configuration parameter values are set in the DB2
database for IBM Tivoli Netcool Performance Manager 1.4.2:
Log file size (4 KB) (LOGFILSIZ) = 8192
This parameter defines the size of each primary and secondary log file. The
size of these log files limits the number of log records that can be written
to them before they become full and a new log file is required.
v Number of primary log files (LOGPRIMARY) = 50
v Number of secondary log files (LOGSECOND) = -1
For more information about these and other parameters, see http://www-01.ibm.com/support/knowledgecenter/SSEPGG_10.1.0/com.ibm.db2.luw.admin.config.doc/doc/c0004555.html?cp=SSEPGG_10.1.0%2F2-2-4
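As an arithmetic check of the values above (a sketch, worked in the shell): LOGFILSIZ is expressed in 4 KB pages, so each log file is 32 MB, and LOGPRIMARY preallocates 50 such files. Setting LOGSECOND to -1 configures infinite active logging, so no fixed secondary log space is reserved.

```shell
# Check the log space implied by LOGFILSIZ = 8192 and LOGPRIMARY = 50.
logfilsiz=8192    # size of each log file, in 4 KB pages
logprimary=50     # number of preallocated primary log files

file_mb=$(( logfilsiz * 4 / 1024 ))      # 32 MB per log file
primary_mb=$(( file_mb * logprimary ))   # 1600 MB of preallocated log space
echo "each log file: ${file_mb} MB, primary log space: ${primary_mb} MB"
```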
A screen resolution of 1024 x 768 pixels or higher is recommended when you run
the deployer.
Note: The minimum requirements do not account for extra functions such as Tivoli
Netcool/OMNIbus Web GUI, IBM Cognos, and MDE, each of which has extra
memory and CPU impacts.
To support:
v SNMP data only.
v All Tivoli Netcool Performance Manager components that are deployed on a
single server.
v Up to 20,000 supported resources.
v 3 SNMP Technology Packs based on MIB-II, Cisco Device, and IPSLA.
v 15-minute polling.
v Fewer than three DataView users.
To support:
v SNMP data only.
v All Tivoli Netcool Performance Manager components that are deployed on a
single server.
v Up to 20,000 supported resources.
v 3 SNMP Technology Packs based on MIB-II, Cisco Device, and IPSLA.
v 15-minute polling.
v Fewer than three DataView users.
Note: Extra features such as Tivoli Netcool/OMNIbus Web GUI and Mass Data
Extraction (MDE) are not accounted for in this spec. They have extra memory and
CPU impacts.
To support:
v SNMP data only
v All Tivoli Netcool Performance Manager components that are deployed on a
single server
v Up to 20,000 supported resources
v 3 SNMP Technology Packs based on MIB-II, Cisco Device, and IPSLA
v 15-minute polling
v Fewer than three DataView users
Screen resolution
Recommended screen resolution details.
A screen resolution of 1024 x 768 pixels or higher is recommended when you run
the deployer.
The following sections list the supported operating systems, modules, and
third-party applications for IBM Tivoli Netcool Performance Manager, Version
1.4.2.
For more information, see the Release notes - IBM Tivoli Netcool Performance Manager,
Version 1.4.2, which contains the version numbers for each Tivoli Netcool
Performance Manager module in Version 1.4.2.
Kernel Parameters
Solaris 10 uses the resource control facility to implement System V IPC.
However, Oracle recommends that you set both resource controls and /etc/system
parameters. Operating system parameters that are not replaced by resource
controls continue to affect performance and security on Solaris 10 systems.
Parameter                 Replaced by resource control           Minimum value
noexec_user_stack         NA (can be set in /etc/system only)    1
semsys:seminfo_semmni     project.max-sem-ids                    100
semsys:seminfo_semmsl     process.max-sem-nsems                  256
shmsys:shminfo_shmmax     project.max-shm-memory                 4294967295
shmsys:shminfo_shmmni     project.max-shm-ids                    100
Solaris 10 requirements
Required packages:
Before you install the Oracle server, ensure that the following Solaris packages are
installed on your system:
v SUNWarc
v SUNWbtool
v SUNWcsl
v SUNWhea
v SUNWi15cs
v SUNWi1cs
v SUNWi1of
v SUNWlibC
v SUNWlibm
v SUNWlibms
v SUNWsprot
v SUNWtoo
v SUNWxwfnt
If these packages are not on your system, see the Solaris Installation Guide for
instructions on installing supplementary package software.
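A quick way to find any missing packages is to loop over the list with the Solaris package query tool. This is a diagnostic sketch: PKG_CHECK defaults to pkginfo -q, but it is kept as a variable so the logic can be exercised on a non-Solaris host.

```shell
# Sketch: report which of the required Solaris packages are missing.
# PKG_CHECK is intentionally unquoted when invoked so that the default
# "pkginfo -q" splits into a command plus its flag.
PKG_CHECK=${PKG_CHECK:-"pkginfo -q"}

missing_packages() {
    for pkg in SUNWarc SUNWbtool SUNWcsl SUNWhea SUNWi15cs SUNWi1cs \
               SUNWi1of SUNWlibC SUNWlibm SUNWlibms SUNWsprot SUNWtoo \
               SUNWxwfnt; do
        $PKG_CHECK "$pkg" 2>/dev/null || echo "$pkg"
    done
}

missing_packages   # on Solaris, prints any packages that must be installed
```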
Required patches:
To determine the patch level on your system, enter the following command:
uname -v
Note: All Tivoli Netcool Performance Manager modules are tested to run on an
End-User distribution of Solaris 10.
If you are using Solaris 10 virtualized containers, you must create containers with
"whole root" partitions. Tivoli Netcool Performance Manager does not work in
containers with “sparse root” partitions.
Note: The Tivoli Netcool Performance Manager Self Monitoring Pack and SSM
packs do not report data correctly in virtualized environments, due to
compatibility issues of the underlying SSM agents in these configurations. The
MIB-II pack can also have difficulties discovering resources in virtualized server
environments.
DataMart
DataMart requirements if you are using Solaris 10.
DataLoad
DataLoad requirements if you are using Solaris 10.
DataChannel
DataChannel requirements if you are using Solaris 10.
Technology packs
Technology pack requirements if you are using Solaris 10.
Installation of technology packs on Solaris 10 requires JRE 1.7 (32-bit). The correct
version is installed with the Topology Editor in the following default location:
/opt/IBM/proviso/topologyEditor/jre/bin
AIX platforms
Note: Tivoli Netcool Performance Manager on AIX does not support connection to
a Solaris-based Oracle database.
Note: The Tivoli Netcool Performance Manager Self Monitoring Pack and SSM
packs do not report data correctly in virtualized environments, due to
compatibility issues of the underlying SSM agents in these configurations. The
MIB-II pack can also have difficulties discovering resources in virtualized server
environments.
Operating system
This command returns a string that represents the maintenance level for your AIX
system.
For AIX 6.1: If the operating system version is lower than AIX 6.1 Technology
Level 4 SP 1, then upgrade your operating system to this or a later level.
For AIX 7.1: If the operating system version is lower than AIX 7.1 POWER7
Technology Level 0 plus SP 3, or POWER8 Technology Level 3 plus SP 3 then
upgrade your operating system to this or a later level. AIX maintenance packages
are available from the following website: http://www-933.ibm.com/support/
fixcentral/
Files and fixes that are required by Oracle and Tivoli Netcool Performance
Manager for the AIX system.
The following operating system file sets are required for AIX 6.1:
v bos.adt.base
v bos.adt.lib
v bos.adt.libm
v bos.perf.libperfstat 6.1.2.1 or later
v bos.perf.perfstat
v bos.perf.proctools
v xlC.aix61.rte:10.1.0.0 or later
v gpfs.base 3.2.1.8 or later
If you are using the minimum operating system TL level for AIX 6L listed above,
then install all AIX 6L 6.1 Authorized Problem Analysis Reports (APARs) for AIX
6.1 TL 02 SP1, and the following AIX fixes:
v IZ41855
v IZ51456
v IZ52319
Note: See Oracle Metalink Note: 1264074.1 and Note: 1379753.1 for other AIX 6.1
patches that might be required
AIX 7.1
The following operating system file sets are required for AIX 7.1:
v bos.adt.base
v bos.adt.lib
v bos.adt.libm
v bos.perf.libperfstat
v bos.perf.perfstat
v bos.perf.proctools
v xlC.aix61.rte:10.1.0.0 or later
v xlC.rte:10.1.0.0 or later
If you are using the minimum operating system TL level for AIX 7.1 listed above,
then install all AIX 7L 7.1 Authorized Problem Analysis Reports (APARs) for AIX
7.1 TL 0 SP1, and the following AIX fixes:
v IZ87216
v IZ87564
v IZ89165
v IZ97035
Note: See Oracle Metalink Note: 1264074.1 and Note: 1379753.1 for other AIX 7.1
patches that might be required.
To determine whether the required file sets are installed and committed, enter a
command similar to the following:
# lslpp -l bos.adt.base bos.adt.lib bos.adt.libm bos.perf.perfstat \
bos.perf.libperfstat bos.perf.proctools
If an APAR is not installed, then download it from the following website and
install it: http://www-933.ibm.com/support/fixcentral/
On AIX systems, the default user process limits are not adequate for Tivoli Netcool
Performance Manager. For detailed information on setting the correct user process
limits, see set Resource Limits in the Installing Tivoli Netcool Performance Manager -
Wireline Component.
DataMart
Java Runtime Environment (JRE) 1.7 or higher (for the Database Information
module).
DataLoad
No special requirements.
DataChannel
Technology packs
Installation of technology packs on AIX 6.1 requires JRE 1.7 (32-bit). The correct
version is installed with the Topology Editor in the following default location:
/opt/IBM/proviso/topologyEditor/jre/bin
Viewing the documentation requires that you can run Adobe Acrobat Reader.
To run on AIX systems, Adobe Acrobat Reader requires GIMP Toolkit (GTK+)
Version 2.2.2 or higher. You can download the toolkit from the following URL:
http://www-03.ibm.com/servers/aix/products/aixos/linux/download.html
In addition, you must install all the dependent packages for GTK+. You can install
GTK+ and its dependent packages either before or after the installation of Acrobat
Reader.
At the time of publication, the latest version of GTK+ is gtk2-2.8.3-9, and the
latest versions of the dependent packages are as follows:
v libpng-1.2.8-5
v libtiff-3.6.1-4
v libjpeg-6b-6
v gettext-0.10.40-6
v glib2-2.8.1-3
v atk-1.10.3-2
v freetype2-2.1.7-5
v xrender-0.8.4-7
v expat-1.95.7-4
v fontconfig-2.2.2-5
v xft-2.1.6-5
v cairo-1.0.2-6
v pango-1.10.0-2
v xcursor-1.0.2-3
v gtk2-2.8.3-9
To fulfill dependency requirements, you must install these Red Hat Package
Managers (RPMs) in the order specified.
Installing an RPM:
1. To install an RPM, use the following syntax:
rpm -i rpm_filename
2. To see a list of the RPMs that are installed, enter the following command:
rpm -qa
By default, AIX systems do not have LDAP installed. If the AIX system does not
have LDAP installed and you run Acrobat Reader, a warning message is displayed.
Click OK to have Acrobat Reader proceed normally.
Linux platforms
IBM Tivoli Netcool Performance Manager, Version 1.4.2 can be installed and
operated in an environment that uses VMware partitions. The following sections
detail the Linux environment prerequisites for Tivoli Netcool Performance Manager.
Operating system
Supported operating system and kernel versions.
Table 7. Tivoli Netcool Performance Manager supports the following Linux systems.
With Oracle:
v Linux hosts running 64-bit Red Hat Enterprise Linux Version 6.x or 7.2 with
Oracle 12c for fresh install.
v RHEL 6.x for upgrade option only.
With DB2:
v Linux hosts running 64-bit Red Hat Enterprise Linux Version 6.x or 7.2 with
DB2 Version 10.1.0.5 for fresh install option.
v RHEL 6.x for upgrade option only.
In either case, to check the version of your operating system, enter:
# cat /etc/redhat-release
This command must return output similar to:
Red Hat Enterprise Linux Server release 6.x
To verify the processor type, run the following command:
uname -p
To verify the machine type, run the following command:
uname -m
To verify the hardware platform, run the following command:
uname -i
All results should contain the output:
x86_64
Database
Note: See Note 225710.1 for supported kernels and Note 265262.1 for "Things to
know about Linux": https://support.oracle.com/CSP/main/article?cmd=show&type=NOT&id=265262.1
The following is a list of packages for Red Hat Enterprise Linux 6.x:
v binutils-2.20.51.0.2-5.11.el6 (x86_64)
v compat-libcap1-1.10-1 (x86_64)
v compat-libstdc++-33-3.2.3-69.el6 (i686) - both architectures are required.
v compat-libstdc++-33-3.2.3-69.el6 (x86_64) - both architectures are required.
v glibc-2.12-1.7.el6 (i686) - both architectures are required.
v glibc-2.12-1.7.el6 (x86_64) - both architectures are required.
v glibc-common-2.5-24 (x86_64)
v ksh-*.el6 (x86_64) - any version of ksh is acceptable.
v libaio-0.3.107-10.el6 (i686) - both architectures are required.
v libaio-0.3.107-10.el6 (x86_64) - both architectures are required.
v libgcc-4.4.4-13.el6 (i686) - both architectures are required.
v libgcc-4.4.4-13.el6 (x86_64)
v libstdc++-4.4.4-13.el6 (i686)
v libstdc++-4.4.4-13.el6 (x86_64)
v libXext-* (i686) - any version.
v libX11-* (i686)
v libxcb-* (i686)
v libXau-* (i686)
v make-3.81-19.el6 (x86_64)
v libXtst-1.0.99.2-3.el6.i686
Note: These are minimum required versions. Also, for some architectures both of
the i386 and x86_64 package versions must be verified.
For example, both the i386 and the x86_64 architectures for glibc-2.5-24 must be
installed.
The following packages are required and checked for by the check_os.ini
application:
v libXp-1.0.0-i386
v libXp-1.0.0-x86_64
v libXpm-3.5.5-x86_64
v libstdc++-devel-4.1.1-x86_64
v glibc-devel-2.5-i386
v glibc-devel-2.5-x86_64
v gcc-c++-4.1.2-x86_64
v openmotif-2.3.3-4.el6.i686
The following is a list of packages for Red Hat Enterprise Linux 7.2:
v binutils.x86_64
v compat-libcap1.x86_64
v compat-libstdc++-33.i686
v compat-libstdc++-33.x86_64
v glibc.i686
v glibc.x86_64
v glibc-common.x86_64
v ksh.x86_64
v libaio.i686
v libaio.x86_64
v libaio-devel.i686
v libaio-devel.x86_64
v libgcc.i686
v libgcc.x86_64
v libstdc++.i686
v libstdc++.x86_64
v libXext.i686
v libXext.x86_64
v libX11.i686
v libX11.x86_64
v libxcb.i686
v libxcb.x86_64
v libXau.i686
v libXau.x86_64
v make.x86_64
v libXtst.i686
v libXtst.x86_64
v libXi.i686
v libXi.x86_64
v sysstat.x86_64
v unixODBC.x86_64
v unixODBC-devel.x86_64
v zlib-devel.x86_64
v zlib-devel.i686
v elfutils-libelf-devel.x86_64
v glibc-headers.x86_64
v gcc.x86_64
v libXp.i686
v libXp.x86_64
v libXpm.x86_64
v libstdc++-devel.i686
v libstdc++-devel.x86_64
v glibc-devel.i686
v glibc-devel.x86_64
v gcc-c++.x86_64
v gtk2.i686
v cairo.i686
v atk.i686
v vsftpd.x86_64
v xterm.x86_64
v motif.i686
v motif.x86_64
v openssl.x86_64
v openssl098e.i686
v libcanberra.i686
v libcanberra-devel.*
v PackageKit-gtk3-module.i686
v adwaita-gtk2-theme.i686
Note: These are minimum required versions. Also, for some architectures both of
the i686 and x86_64 package versions must be verified.
For example, both the i686 and the x86_64 architectures for glibc-2.5-24 must be
installed.
Required packages for IBM DB2 on Red Hat Enterprise Linux 6.x
The following list shows the package requirements for Red Hat Enterprise Linux
distributions:
v libpam.so.0 (32-bit) is required for DB2 database servers to run 32-bit non-SQL
routines.
v libaio.so.1 is required for DB2 database servers that are using asynchronous
I/O.
v libstdc++.so.6 is required for DB2 database servers and clients.
The following tables list the package requirements for Red Hat Enterprise Linux
distributions for DB2 partitioned database servers.
v The ksh93 Korn shell is required for RHEL5 systems. The pdksh Korn Shell
package is required for all other DB2 database systems.
v A remote shell utility is required for partitioned database systems. DB2 database
systems support the following remote shell utilities:
– rsh
– ssh
By default, DB2 database systems use rsh when you run commands on
remote DB2 nodes, for example, when you start a remote DB2 database
partition. To use the DB2 database system default, the rsh-server package
must be installed (see following table). More information about rsh and ssh is
available in the DB2 Information Center.
If you choose to use the rsh remote shell utility, inetd (or xinetd) must be
installed and running as well. If you choose to use the ssh remote shell utility,
you must set the DB2RSHCMD communication variable immediately after the
DB2 installation is complete. If this registry variable is not set, rsh is used.
v The nfs-utils Network file system support package is required for partitioned
database systems.
All required packages must be installed and configured before you continue with
the DB2 database system setup. For general Linux information, see your Linux
distribution documentation.
Table 9. Package requirements for Red Hat Enterprise Linux
pdksh or ksh93 (/System Environment/Shell)
    Korn Shell.
openssh (/Applications/Internet)
    This package contains a set of client programs that allow users to run
    commands on a remote computer through a Secure Shell. This package is
    not required if you use the default configuration of DB2 database
    systems with rsh.
openssh-server (/System Environment/Daemons)
    This package contains a set of server programs that allow users to run
    commands from a remote computer through a Secure Shell. This package is
    not required if you use the default configuration of DB2 database
    systems with rsh.
rsh-server (/System Environment/Daemons)
    This package contains a set of programs that allow users to run
    commands on a remote computer. Required for partitioned database
    environments. This package is not required if you configure DB2
    database systems to use ssh.
nfs-utils (/System Environment/Daemons)
    Network File System support package. It allows access to local files
    from remote computers.
Note: These are minimum required versions. Also, for some architectures both of
the i386 and x86_ 64 package versions must be verified. For example, both the i386
and the x86_64 architectures for glibc-2.5-24 must be installed.
v elfutils-libelf-devel-0.125-3.el5.x86_64.rpm
Requires the following interdependent packages:
– elfutils-libelf-devel
– elfutils-libelf-devel-static
Run the db2prereqcheck command to check if your system meets the prerequisites
for the installation of a specific version of DB2 for Linux, UNIX, and Windows. For
example, run the following commands:
./db2prereqcheck -v 10.1.0.5 -s
DBT3533I The db2prereqcheck utility has confirmed that all installation prerequisites
were met for DB2 database "server " "". Version: "10.1.0.5"
DBT3533I The db2prereqcheck utility has confirmed that all installation prerequisites
were met for DB2 database "server " "with DB2 pureScale feature ". Version: "10.1.0.5"
Required packages for IBM DB2 on Red Hat Enterprise Linux 7.2
The following list shows the package requirements for Red Hat Enterprise Linux
distributions:
v binutils.x86_64
v compat-libstdc++-33.x86_64
v compat-libstdc++-33.i686
v elfutils-libelf.x86_64
v glibc.x86_64
v glibc.i686
v glibc-common.x86_64
v ksh.x86_64
v libaio.x86_64
v libaio.i686
v libgcc.i686
v libgcc.x86_64
v libstdc++.i686
v make.x86_64
v elfutils-libelf-devel.x86_64
v glibc-headers.x86_64
v gcc.x86_64
v libXp.i686
v libXp.x86_64
v libXpm.x86_64
v libstdc++-devel.x86_64
v glibc-devel.i686
v glibc-devel.x86_64
v gcc-c++.x86_64
v vsftpd.x86_64
v nfs-utils.x86_64
v pam.x86_64
v motif.i686
v motif.x86_64
v openssl.x86_64
v openssl098e.i686
v libcanberra.i686
v libcanberra-devel.*
v PackageKit-gtk3-module.i686
v adwaita-gtk2-theme.i686
All required packages must be installed and configured before you continue with
the DB2 database system setup. For general Linux information, see your Linux
distribution documentation.
Note: These are minimum required versions. Also, for some architectures both of
the i686 and x86_64 package versions must be verified. For example, both the i686
and the x86_64 architectures for glibc-2.5-24 must be installed.
Run the db2prereqcheck command to check if your system meets the prerequisites
for the installation of a specific version of DB2 for Linux, UNIX, and Windows. For
example, run the following commands:
./db2prereqcheck -v 10.1.0.5 -s
DBT3533I The db2prereqcheck utility has confirmed that all installation prerequisites
were met for DB2 database "server " "". Version: "10.1.0.5"
DBT3533I The db2prereqcheck utility has confirmed that all installation prerequisites
were met for DB2 database "server " "with DB2 pureScale feature ". Version: "10.1.0.5"
DataMart
DataLoad
No special requirements.
DataChannel
No special requirements.
Two specific user names are required on any server that hosts Tivoli Netcool
Performance Manager components: pvuser, and the user for your chosen database
(oracle or db2).
pvuser
A dedicated Tivoli Netcool Performance Manager UNIX user.
oracle A dedicated Oracle user.
db2 A dedicated DB2 user.
pvuser
The pvuser user name.
The Tivoli Netcool Performance Manager UNIX user pvuser must be added to each
server that is hosting a Tivoli Netcool Performance Manager component. This user,
which is referred to as pvuser throughout the documentation, can be named by
using any string, as required by your organization's naming standards.
oracle
The Oracle user is added to each server that is hosting a Tivoli Netcool
Performance Manager component. This user is added during Oracle client or
server installation. The default username that is used is oracle.
However, this Oracle username can be any string, as required by your
organization's naming standards.
If you are installing Oracle by using a non-default username, then the non-default
user must be created with Korn shell as its login shell.
# useradd -g <group> -G <group_2> -m -d <home_dir>/<username> -k /etc/skel -s /bin/ksh <username>
Where:
v <group> is the name of the primary group to be assigned to the new user.
v <group_2> is the name of the subsequent group to be assigned to the new user.
v <home_dir> is the home directory for the new user.
v <username> is the name of the new user, which can be set to any string.
For example:
useradd -g dba -G oinstall -m -d /export/home/oracle2 -k /etc/skel -s /bin/ksh oracle2
Note: If you choose a non-default Oracle username, you must use the same name
across all instances of Oracle Client and Server throughout your Tivoli Netcool
Performance Manager system.
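If you plan to reuse an existing user rather than creating one with the useradd command above, you can confirm its login shell first. This is a sketch: getent is assumed available (as on Linux and Solaris), oracle2 follows the example above, and the exact ksh path can differ by platform.

```shell
# Sketch: read the login shell (7th field of the passwd entry) for a user.
login_shell() {
    getent passwd "$1" | cut -d: -f7
}

# oracle2 is the illustrative non-default Oracle user from the example above.
if [ "$(login_shell oracle2)" = "/bin/ksh" ]; then
    echo "oracle2 already uses the Korn shell"
fi
```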
db2
The DB2 user db2 is added to each server hosting a Tivoli Netcool Performance
Manager component. This user is added when installing either DB2 client or
server. The default username used is db2; however, this DB2 username can be
named using any string as required by your organizations naming standards.
Note: If you want to select a non-default DB2 username, you must use the same
name across all instances of DB2 client and server throughout your Tivoli Netcool
Performance Manager system.
FTP support
Tivoli Netcool Performance Manager requires FTP support.
The FTP (File Transfer Protocol) version that is used to transfer files between Tivoli
Netcool Performance Manager components is delivered with Solaris 10.
AIX also uses FTP to transfer files between Tivoli Netcool Performance Manager
components.
Tivoli Netcool Performance Manager supports the following file transport protocols
between Tivoli Netcool Performance Manager components and third-party
equipment (for example, EMS):
v FTP Solaris 10
v Microsoft Internet Information Services (IIS) FTP server
If you use the SFTP capability, you must obtain, install, generate keys for, maintain,
and support OpenSSH and any packages that are required by OpenSSH.
See Tivoli Netcool Performance Manager Technical Note: DataChannel Secure File Transfer
Installation for more information about installing and configuring OpenSSH.
AIX requirements
The following table lists additional prerequisites that must be installed on an AIX
system, and where these packages can be found:
Table 10. Additional Prerequisites
Package                          Location
openssl-0.9.7g-1.aix5.1.ppc.rpm  https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=aixtbx
openssh-4.1p1_53.tar.Z           https://sourceforge.net/projects/openssh-aix/
bos.adt.libm                     AIX installation CD.
Solaris requirements
Linux requirements
OpenSSH is required for VSFTP to work with Tivoli Netcool Performance Manager.
OpenSSH is installed by default on any RHEL system.
By default, FTP is not enabled on Linux systems. You must enable FTP on your
Linux host to carry out the installation of Tivoli Netcool Performance Manager.
To enable FTP on your Linux host, run the following command as root:
/etc/init.d/vsftpd start
On Red Hat Enterprise Linux 7, use systemctl instead:
systemctl start vsftpd
File compression
File compression support.
Archives that are delivered as part of the IBM Tivoli Netcool Performance Manager
distribution are created by using GNU TAR. This program must be used for the
decompression of archives.
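The round trip looks like the following. The archive name is a placeholder; on Linux, /bin/tar is already GNU tar, while on Solaris and AIX the GNU tar binary is commonly installed as gtar (an assumption about your host's naming).

```shell
# Sketch: create and extract an archive with GNU tar.
workdir=$(mktemp -d)
cd "$workdir"
mkdir demo
echo "payload" > demo/file.txt

tar -czf demo.tar.gz demo    # on Solaris/AIX: gtar -czf demo.tar.gz demo
rm -r demo

tar -xzf demo.tar.gz         # on Solaris/AIX: gtar -xzf demo.tar.gz
cat demo/file.txt            # prints: payload
```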
IBM Tivoli Netcool Performance Manager supports the use of an external load
balancer to optimize the use of available DataView instances.
Databases
Tivoli Netcool Performance Manager 1.4.2 supports both Oracle and IBM DB2
databases.
Oracle support
License recommendations
Oracle defines a Named User in such a way that it includes not only actual human
users, but also non-human-operated devices. In other words, you would require a
Named User Plus license for every resource that Tivoli Netcool Performance
Manager polls, which would be very expensive.
http://www.oracle.com .
Oracle server support
Jazz for Service Management
Dashboard Application Services Hub is replacing the Tivoli Integrated Portal.
Java Runtime Environment (JRE) 1.7 (32-bit) is required for all servers hosting
Tivoli Netcool Performance Manager components.
The following browsers are required to support the web client and provide access
to DataView reports:
Note: When you are using Windows Internet Explorer, IBM recommends that you
have at least 1 GB of memory available.
Table 12. UNIX Clients
AIX         Mozilla Firefox 3.6
RHEL        Mozilla Firefox ESR 24, 31, and 38
Solaris 10  Mozilla Firefox ESR 24, 31, and 38
For information about downloading and installing these browsers, see the
following web sites:
v http://www.mozilla.org/
v http://www-03.ibm.com/systems/p/os/aix/browsers/index.html
Screen resolution
Recommended screen resolution details.
A screen resolution of 1152 x 864 pixels or higher is recommended for the display
of DataView reports. Some reports may experience rendering issues at lower
resolutions.
X Emulation
Remote desktop support.
For DataMart GUI access, Tivoli Netcool Performance Manager supports the
following:
v Native X Terminals
v Exceed V 6.x
The following libraries are required for Exceed to work with Eclipse:
v libgtk 2.10.1
v libglib 2.12.1
v libfreetype 2.1.10
v libatk 1.12.1
v libcairo 1.2.6
v libxft 2.1.6
v libpango 1.14.0
v Real VNC server 4.0
IBM Tivoli Netcool/OMNIbus Web GUI integration
IBM Tivoli Netcool/OMNIbus Web GUI, Version 8.1 support.
The IBM Tivoli Netcool/OMNIbus Web GUI Integration Guide for Wireline describes
how to integrate IBM Tivoli Netcool/OMNIbus Web GUI and Jazz for Service
Management 1.1.2.1 with the wireline component of Tivoli Netcool Performance
Manager.
Tivoli Netcool Performance Manager integrates with Jazz for Service Management
1.1.2.1 and IBM Tivoli Netcool/OMNIbus Web GUI 8.1.
The web browsers supported by IBM Tivoli Netcool/OMNIbus Web GUI and
Tivoli Netcool Performance Manager are listed in the following table.
Table 13. Web client browsers supported by IBM Tivoli Netcool/OMNIbus Web GUI
Internet Explorer 11.0: Windows 7
Mozilla Firefox ESR 24, 31, and 38: Windows 7; Red Hat Enterprise Linux (RHEL)
6.x or 7.2
Note: When you are using Internet Explorer, IBM recommends that you have at
least 1 GB of memory available.
Overview
Before you begin the Tivoli Netcool Performance Manager installation, you must
install the prerequisite software that is listed in the Requirements chapter.
Oracle
When you complete the steps that are given, the Oracle server and client are
installed and running, with table spaces sized and ready to accept the installation
of a Tivoli Netcool Performance Manager DataMart database. You can
communicate with Oracle by using the SQL*Plus command-line utility.
Use IBM-provided installation scripts to install and configure the Oracle database
from the Oracle distribution. For use with Tivoli Netcool Performance Manager,
you must install Oracle as described. Do not use a separate Oracle installation
method that is provided by Oracle Corporation. You must obtain the official Oracle
distribution from your delivery site (after purchase of an Oracle license). See the
Requirements for recommendations when you purchase a license from Oracle.
Note: The Tivoli Netcool Performance Manager script that is used to install Oracle
is platform-independent and can be used to install on Solaris, AIX, or Linux,
regardless of the operating system distribution media.
For a remote server that does not host the primary deployer, you must
download and install the required JRE, and set the correct JRE path. See
Requirements for JRE download details.
Note: See Requirements for the complete list of prerequisite software and their
supported versions.
IBM DB2
For a remote server that does not host the primary deployer, you must
download and install the required JRE, and set the correct JRE path. See
the Configuration Recommendations document for JRE download details.
Note: See the Configuration Recommendations document for the complete list of
prerequisite software and their supported versions.
Supported platforms
The platforms supported by Tivoli Netcool Performance Manager.
For most installations, it does not matter whether you use a Telnet, rlogin, Xterm,
or Terminal window to get to a shell prompt.
Some installation steps must be performed from a window that supports the X
Window server protocols. This means that the steps described in later chapters
must be run from an Xterm window on a remote system or from a terminal
window on the target system's graphical display.
Note: See the Configuration Recommendations document for the list of supported X
emulators.
Command sequences in this manual do not remind you at every stage to set this
variable.
If you use the su command to become different users, be especially vigilant to set
DISPLAY before running X Window System-compliant programs.
Procedure
To make sure the DISPLAY environment variable is set, use the echo command:
$ echo $DISPLAY
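As a sketch, setting and verifying DISPLAY for a remote X server looks like the
following; the display name xserver.example.com:0.0 is a placeholder for your
own X server:

```shell
# Point X clients at a remote X server; the display name is a placeholder.
DISPLAY=xserver.example.com:0.0
export DISPLAY
# Verify that the variable is set.
echo $DISPLAY
```

In the C shell, use setenv DISPLAY xserver.example.com:0.0 instead.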
Procedure
1. Set the DISPLAY environment variable.
2. Enter the following command when logged in as root:
# /usr/openwin/bin/xhost +
Note: Disabling access control is what enables access to the current machine
from X clients on other machines.
AIX systems
Changing Ethernet characteristics on AIX.
Note: If the AIX node is a virtual partition, you must perform these steps on the
virtual I/O server (including the reboot).
Procedure
1. Using the System Management Interface Tool (SMIT), navigate to Devices >
Communication > Ethernet Adapter > Adapter > Change/Show
Characteristics of an Ethernet Adapter.
2. Select your Ethernet adapter (the default is ent0).
3. Change the setting Apply change to DATABASE only to yes.
4. Set the port on the switch or router that the AIX node is plugged into to
100_Full_Duplex.
5. Reboot your system.
Solaris systems
This section describes how to set a network interface card (NIC) and a BGE
network driver to full duplex mode.
NIC:
Procedure
1. Determine which type of adapter you have by running the following command:
ifconfig -a
2. To determine the current settings of the NIC, run the command ndd -get
/dev/hme with one of the following parameters:
For example:
ndd -get /dev/hme link_status
In these commands, /dev/hme is your NIC; you might need to substitute your
own /dev/xxx.
3. To set your NIC to 100Mb/s with full duplex for the current session, run the
following commands:
ndd -set /dev/hme adv_100hdx_cap 0
ndd -set /dev/hme adv_100fdx_cap 1
ndd -set /dev/hme adv_autoneg_cap 0
However, these commands change the NIC settings for the current session only.
If you reboot, the settings will be lost. To make the settings permanent, edit the
/etc/system file and add the following entries:
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_100fdx_cap=1
4. Verify that your NIC is functioning as required by rerunning the commands
listed in Step 2.
Procedure
1. To determine the link speed and current duplex setting, run the following
command:
% kstat bge:0 | egrep 'speed|duplex'
The output is similar to the following:
duplex full
ifspeed 100000000
link_duplex 2
link_speed 100
The parameters are as follows:
Parameter Description
Linux systems
Enabling 100 full duplex mode on Linux systems.
Use your primary network interface to enable 100 full duplex mode.
Procedure
1. Enter the following command:
# dmesg | grep -i duplex
This should result in output similar to the following:
eth0: link up, 100Mbps, full-duplex, lpa 0x45E1
2. Confirm that the output contains full-duplex. If it does not, you must
enable full duplex mode.
The example output from the command in step 1 indicates that the primary
network interface is eth0.
The following procedure assumes that your primary network interface is eth0.
Procedure
1. Open the file ifcfg-eth0, which is contained in:
/etc/sysconfig/network-scripts/
2. Add the ETHTOOL_OPTS setting by adding the following text:
ETHTOOL_OPTS="speed 100 duplex full autoneg off"
Note: The ETHTOOL_OPTS speed setting can be set to either 100 or 1000,
depending on the speed of the available connection: 100 Mbit/s or 1000 Mbit/s
(1 Gbit/s).
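For reference, a minimal ifcfg-eth0 with the setting added might look as
follows. The DEVICE and ONBOOT lines are illustrative assumptions; keep any
existing entries in your file as they are:

```
DEVICE=eth0
ONBOOT=yes
ETHTOOL_OPTS="speed 100 duplex full autoneg off"
```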
The required user can be given any name of your choosing. However, for the
remainder of this document this user is referred to as "pvuser".
Decide in advance where to place the home directory of the pvuser login
username. Use a standard home directory mounted on /home or /export/home, as
available.
Note: Do not place the home directory in the same location as the Tivoli Netcool
Performance Manager program files. That is, do not use /opt/proviso or any other
directory in /opt for the home directory.
Add the pvuser login name to every system on which you install a Tivoli Netcool
Performance Manager component, including the system hosting the Oracle or DB2
server.
These steps add the login name only to the local system files on each computer
(that is, to the local /etc/passwd and /etc/shadow files). If your network uses a
network-wide database of login names such as Yellow Pages or Network
Information Services (NIS or NIS+), see “Adding pvuser on an NIS-managed
network” on page 75.
To add pvuser:
Procedure
1. Log in as root.
2. Set and export the DISPLAY environment variable (see "Setting up a remote X
Window display" on page 70).
3. If one does not already exist, create a group to which you can add pvuser. You
can create a group with the name of your choice using the following command:
groupadd <group>
where:
v <group> is the name of the new group, for example, staff.
4. At a shell prompt, run the following command:
# useradd -g <group> -m -d <home_dir>/<username> -k /etc/skel -s /bin/ksh <username>
Where:
v <group> is the name of the group to which you want to add pvuser.
v <home_dir> is the home directory for the new user, for example,
/export/home.
v <username> is the name of the new user. This can be set to any string.
Note: For the remainder of this information this user will be referred to as
pvuser.
Attribute Value
login name pvuser
Note: The pvuser account must have write access to the /tmp directory.
When you create the first pvuser login name, log in as pvuser and run the id
command. The system responds with the user name and user ID number (and the
group name and group ID number). For example:
$ id
uid=1001(pvuser) gid=10(staff)
When you create the pvuser login name on the next computer, add the -u option to
the useradd command to specify the same user ID number:
# useradd -g <group> -m -d <home_dir>/pvuser -k /etc/skel -s /bin/ksh -u 1001 pvuser
Where:
v <group> is the name of the group to which you want to add pvuser.
v <home_dir> is the home directory for the new user, for example,
/export/home.
If your site's network uses NIS or NIS+ to manage a distributed set of login names,
see your network administrator to determine whether pvuser must be added to
each Tivoli Netcool Performance Manager computer's local setup files, or to the
network login name database.
If the default user process limits are not adequate for Tivoli Netcool Performance
Manager, do the following.
Procedure
1. Log in as root.
2. Change your working directory to /etc/security by entering the following
command:
# cd /etc/security
3. Make a backup copy of the limits file by entering the following command:
# cp limits limits.ORIG
4. Using a text editor, open the limits file and set the following values:
default:
    fsize = -1
    core = -1
    cpu = -1
    data = -1
    rss = 65536
    stack = 65536
    nofiles = 2000
    totalProcesses = 800
Note: Apply these settings to every AIX system running a Tivoli Netcool
Performance Manager program: the database server, DataLoad servers,
DataChannel servers, and DataMart servers.
5. Write and quit the file.
6. After modifying the settings, log off every Tivoli Netcool Performance Manager
user and then log in again for the changes to take effect.
On Linux systems, it is possible that the default user process limits are not
adequate for Tivoli Netcool Performance Manager. To improve the performance of
the software, you must increase the shell limits for the Oracle user.
Procedure
Note: Use tabs between the fields instead of spaces for the settings to work
effectively.
Important: The default Oracle username is oracle. If you are installing Oracle
by using a non-default Oracle username, use that username in the
/etc/security/limits.conf file.
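As a sketch, the corresponding entries in /etc/security/limits.conf might look
as follows. These values follow Oracle's commonly published preinstallation
guidance and are assumptions here; use the limits recommended for your Oracle
release, and separate the fields with tabs as noted above:

```
oracle	soft	nproc	2047
oracle	hard	nproc	16384
oracle	soft	nofile	1024
oracle	hard	nofile	65536
```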
On Solaris systems, it is possible that the default user process limits are not
adequate for Tivoli Netcool Performance Manager. To improve the performance of
the software, you must increase the shell limits for the oracle user.
Procedure
1. Run the following command:
# plimit -n soft_limit,hard_limit $$
For example:
# plimit -n 2048,65536 $$
2. To verify that the limits were updated correctly, run the following command:
# prctl -n process.max-file-descriptor $$
Before you install the Oracle server, you must set the Solaris shared memory and
semaphore parameters.
When you install Tivoli Netcool Performance Manager, you specify the size of the
deployment - small, medium, or large. The value you select affects the Oracle
PROCESSES parameter. You must set the appropriate kernel parameter level in
order for the deployment to work properly.
Note: These entries are only for the system running the Oracle server, not the
Oracle client.
Procedure
1. Set the noexec_user_stack parameter in the system file:
a. Log in as root.
b. Change to the /etc directory:
# cd /etc
c. Create a backup of the file named system, and open the file with a text
editor.
d. Set the parameter noexec_user_stack to 1 by adding the following line at
the bottom of the file:
set noexec_user_stack=1
e. Save and exit the system file.
2. Set resource controls correctly.
Procedure
1. Log in as root:
2. Change to the following directory:
# /etc/init.d
3. Run the following command:
# ./vsftpd start
Procedure
1. Log in as root.
2. Open the SELinux config file in a text editor:
# vi /etc/selinux/config
3. Change the line in the file.
SELINUX=enforcing
To:
SELINUX=disabled
Note: You can also set the SELINUX setting to permissive. Setting SELINUX to
permissive results in a number of warnings at installation time, but it allows
the installation code to run.
4. To effect the changes, reboot the system with the following command:
# reboot
The following steps are taken from Metalink Note 421308, which is available
from the Oracle website.
Procedure
1. Add the following lines to the file /etc/sysctl.conf:
v kernel.shmall = physical RAM size / page size. For most systems, this is
the value 2097152.
See Note 301830.1, which is available from the Oracle website, for more
information.
v kernel.shmmax = 1/2 of physical RAM, but not greater than 4 GB.
This is the value 2147483648 for a system with 4 GB of physical RAM.
v kernel.shmmni = 4096
v kernel.sem = 250 32000 100 128
v fs.file-max = 6815744
v net.ipv4.ip_local_port_range = 9000 65500
v net.core.rmem_default = 262144
v net.core.rmem_max = 4194304
v net.core.wmem_default = 262144
v net.core.wmem_max = 1048576
v fs.aio-max-nr = 1048576
2. To effect these changes, execute the command:
# sysctl -p
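The kernel.shmall arithmetic above (physical RAM size / page size) can be
sketched as a quick calculation; this assumes a Linux system where
/proc/meminfo reports MemTotal in kilobytes:

```shell
# Compute physical RAM in bytes divided by the page size, giving pages.
page_size=$(getconf PAGE_SIZE)
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
shmall=$(( mem_kb * 1024 / page_size ))
echo "kernel.shmall = $shmall"
```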
Procedure
To update kernel parameters on Red Hat and SUSE Linux, follow these steps:
1. Run the ipcs -l command to list the current kernel parameter settings.
2. Analyze the command output to determine whether you have to change kernel
settings or not by comparing the current values with the enforced minimum
settings at http://www-01.ibm.com/support/knowledgecenter/
SSEPGG_10.1.0/com.ibm.db2.luw.qb.server.doc/doc/
c0057140.html?cp=SSEPGG_10.1.0%2F2-0-1-2-2-0-10-1&lang=en. The following
text is an example of the ipcs command output with comments added after //
to show what the parameter names are:
v Beginning with the first section on Shared Memory Limits, the SHMMAX
limit is the maximum size of a shared memory segment on a Linux system.
The SHMALL limit is the maximum allocation of shared memory pages on a
system.
– It is recommended to set the SHMMAX value to be equal to the amount
of physical memory on your system. However, the minimum required on
x86 systems is 268435456 (256 MB) and for 64-bit systems, it is 1073741824
(1 GB).
– The SHMALL parameter is set to 8 GB by default (8388608 KB = 8 GB). If
you have more physical memory than 8 GB, and it is to be used for DB2,
then this parameter increases to approximately 90% of your computer's
physical memory. For instance, if you have a computer system with 16 GB
of memory to be used primarily for DB2, then SHMALL should be set to
3774873 (90% of 16 GB is 14.4 GB; 14.4 GB is then divided by 4 KB, which
is the base page size). The ipcs output converted SHMALL into kilobytes.
The kernel requires this value as a number of pages. If you are upgrading
to DB2 Version 10.1 and you are not using the default SHMALL setting,
you must increase the SHMALL setting by an additional 4 GB. This
increase in memory is required by the fast communication manager (FCM)
for additional buffers or channels.
v The next section covers the amount of semaphores available to the operating
system. The kernel parameter sem consists of four tokens, SEMMSL,
SEMMNS, SEMOPM and SEMMNI. SEMMNS is the result of SEMMSL
multiplied by SEMMNI. The database manager requires that the number of
arrays (SEMMNI) be increased as necessary. Typically, SEMMNI should be
twice the maximum number of agents expected on the system multiplied by
the number of logical partitions on the database server computer plus the
number of local application connections on the database server computer.
v The third section covers messages on the system.
– The MSGMNI parameter affects the number of agents that can be started; the
MSGMAX parameter affects the size of the message that can be sent in a
queue, and the MSGMNB parameter affects the size of the queue.
– The MSGMAX parameter should be changed to 64 KB (that is, 65536 bytes),
and the MSGMNB parameter should be increased to 65536.
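The SHMALL example above (a 16 GB system, 90% of memory for DB2, 4 KB base
page size) can be checked with shell arithmetic:

```shell
# 16 GB of physical memory, in bytes.
mem_bytes=$(( 16 * 1024 * 1024 * 1024 ))
# 90% of physical memory reserved for DB2.
db2_share=$(( mem_bytes * 90 / 100 ))
# Convert bytes to 4 KB pages, the unit the kernel expects for SHMALL.
shmall_pages=$(( db2_share / 4096 ))
echo $shmall_pages   # → 3774873
```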
3. Modify the kernel parameters that you have to adjust by editing the
/etc/sysctl.conf file. If this file does not exist, create it. The following lines are
examples of what should be placed into the file:
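As a hedged sketch only, drawing on the values discussed above (the 1 GB
64-bit SHMMAX minimum, the 16 GB / 90% SHMALL example, and the 65536-byte
message limits), the file might contain lines such as the following; adapt
every value to your own system's memory:

```
kernel.shmmax = 1073741824
kernel.shmall = 3774873
kernel.msgmax = 65536
kernel.msgmnb = 65536
```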
4. Run sysctl with -p parameter to load in sysctl settings from the default file
/etc/sysctl.conf:
sysctl -p
5. Optional: Have the changes persist after every reboot:
v (SUSE Linux) Make boot.sysctl active.
v (Red Hat) The rc.sysinit initialization script reads the /etc/sysctl.conf file
automatically.
Related information:
IBM DB2 10.1 for Linux, UNIX, and Windows documentation on IBM
Knowledge Center
Check the version of tar command you have at present by entering the following
command:
# tar --version
For a GNU tar utility the output would conform to the following:
tar (GNU tar) 1.14
Copyright (C) 2004 Free Software Foundation, Inc.
This program comes with NO WARRANTY, to the extent permitted by law.
You may redistribute it under the terms of the GNU General Public License;
see the file named COPYING for details.
Written by John Gilmore and Jay Fenlason.
If the resulting output does not indicate a GNU tar utility, perform the
following steps.
Procedure
1. Find the native tar location:
# which tar
/usr/bin/tar
2. Move the native binary tar command:
# cd /usr/bin
# mv tar tar_
3. Install GNU tar, which can be obtained from the toolbox site. For example,
for AIX, see http://www-03.ibm.com/systems/power/software/aix/linux/toolbox/download.html.
Set the install location to something similar to /opt/freeware/bin/tar.
4. Create a GNU tar soft link:
Installing libcrypto.so
To start the DataLoad process correctly after installation and for full SNMPv3
support, SNMP DataLoad must have access to libcrypto.so.
Note: If you are running on a Linux platform, start at step 3. The
libcrypto.so file is delivered as standard on Linux platforms, so steps 1 and
2 are not required.
For each new and existing SNMP DataLoad, you must perform the following steps.
Procedure
1. Install the OpenSSL package. This package can be downloaded from
http://www.openssl.org/.
2. As root, extract and install the libcrypto.so file by using the following code:
# cd /usr/lib
# ar -xv ./libcrypto.a
# ln -s libcrypto.so.0.9.8 libcrypto.so
3. Remove the existing link for libcrypto.so if any by using the following code:
# cd /usr/lib
# rm -rf libcrypto.so
4. Create a link for libcrypto.so which links to libcrypto.so.0.9.8e using the
following code:
# cd /usr/lib
# ln -s libcrypto.so.0.9.8e libcrypto.so
5. Update the dataload.env file so that the LD_LIBRARY_PATH (on Solaris & Linux)
or LIBPATH (on AIX) environment variables include the path:
/usr/lib
What to do next
Check that the variable is set by doing the following steps:
1. Open a fresh shell.
2. Verify that the environment variables are set correctly in dataload.env file.
3. Source the dataload.env file.
4. Bounce the SNMP DataLoad process.
Upon DataLoad process startup with a valid library, the collector logs the
following log messages:
INFO:CRYPTOLIB_LOADED Library 'libcrypto.so' (OpenSSL 0.9.8e-fips-rhel5 01 Jul 2008, 0x90802f)
has been loaded.
INFO:SNMPV3_SUPPORT_OK Full SNMPv3 support Auth(None,MD5,SHA-1) x Priv(None,DES,AES) is available.
The Deployer checks for the items described under the following headings.
Ensure that all elements are installed before running the Deployer.
The Deployer checks the operating system version and verifies that the
minimum required packages are installed.
For more information on the complete set of requirements for installation on Linux,
AIX and Solaris, consult the Requirements chapter.
Note: pvuser is the required Tivoli Netcool Performance Manager Unix user.
Adding this user to your system is described in “Adding the pvuser login name”
on page 74.
If you install the product from an electronic image, be sure to keep a copy of the
distribution image in a well-known directory because you will need this image in
the future to make changes to the environment, including uninstalling Tivoli
Netcool Performance Manager.
Whether you install the product from physical media or from an electronic
image, you must copy the distribution to a writable location on the local file
system before beginning the installation.
Procedure
1. On the target host, log in as the Tivoli Netcool Performance Manager user, such
as pvuser.
2. Create a directory to hold the contents of your Tivoli Netcool Performance
Manager distribution. For example:
$ mkdir /var/tmp/cdproviso
Note: Any further references to this directory within the install are made by
using the token <DIST_DIR>.
You can run a variety of scripts and programs from directories residing in the
directory created on the hard drive, including:
v Oracle server configuration script
v Pre-installation script
v Installation script
v Tivoli Netcool Performance Manager setup program
3. Download the Tivoli Netcool Performance Manager distribution to the host
directory created in the previous step and expand the contents of the
distribution package.
Note: For a basic overview of the minimum CPU speed, memory size, and disk
configuration requirements for your Tivoli Netcool Performance Manager
installation, see the Requirements chapter. For more detailed information,
contact IBM Professional Services.
v The current version of Tivoli Netcool Performance Manager software.
v The downloaded files for the database installation.
v If you are installing the database on an AIX system, follow the instructions
in "Asynchronous I/O Support" before installing the database.
Before installing Oracle on AIX 5.3 systems, you must set up asynchronous I/O
(AIO), or the installation might fail.
Note: Asynchronous I/O (AIO) is not required for AIX 6.1 or later.
# smit chaio
v Reboot the system.
v Re-enter the command to check the current status of AIO support.
Procedure
Provide a basename, which the installation retains as the variable
DB_USER_ROOT.
Note: This is not an operating system environment variable, but a variable used
internally by the installer.
The default DB_USER_ROOT value is PV. IBM recommends that you retain the
default value.
Results
In addition, separate login names are generated for each Tivoli Netcool
Performance Manager DataChannel and subsystem, identified by an appended
channel number, as in the following examples:
v PV_CHANNEL_01
v PV_CHANNEL_02
v PV_LDR_01
v PV_LDR_02
Procedure
You can retain the default password, or enter passwords of your own according to
your site password standards.
You should use the same password for all Tivoli Netcool Performance Manager
subsystem login names. If you use different passwords for each login name, keep a
record of the passwords you assign to each login name.
Important: Avoid using any special characters (for example, @, \, /) in any
password field while installing Tivoli Netcool Performance Manager with DB2.
Results
The Tivoli Netcool Performance Manager installer uses PV for three default values,
as described in Table 5.
What to do next
Note: If you use a non-default value, you must remember to use the same value in
all installation stages. For example, in Oracle, if you set your TNS name to PROV
instead of PV, you must override the default PV entry in all subsequent steps that
call for the TNS name.
Server program files installed in ORACLE_BASE or DB2_BASE:
v Oracle: /opt/oracle
v DB2: /opt/db2
Note: The DB2_BASE directory should be the same as the home directory of the
DB2 instance user.
Important: If you are using a non-default path for the DB2 installation
directory, you must also modify the DB2_BASE parameter in the Topology Editor.
Operating system login name for the database user:
v Oracle: oracle
v DB2: db2
Note: The default names created are oracle and db2 for the Oracle and IBM DB2
databases respectively. However, you can set another name for the DB2 or
Oracle user.
Password for the oracle and db2 operating system users:
Important: Avoid using any special characters (for example, @, \, /) in any
password field while installing Tivoli Netcool Performance Manager with DB2.
TNS name for the Tivoli Netcool Performance Manager database instance: PV
ORACLE_HOME or DATABASE_HOME installed in:
v Oracle: /opt/oracle/product/12.1.0
v DB2: /opt/db2/product/10.1.0
Note: The value of ORACLE_HOME or DATABASE_HOME cannot contain soft links to
other directories or file systems. Be sure to specify the entire absolute path
to Oracle or DB2. Tivoli Netcool Performance Manager expects an Optimal
Flexible Architecture (OFA) structure where DATABASE_HOME or ORACLE_HOME is a
subdirectory of DB2_BASE or ORACLE_BASE.
DB_USER_ROOT: PV
Path for database data, mount point 1:
v Oracle: /raid_2/oradata
v DB2: /raid_2/db2data
Path for database data, mount point 2:
v Oracle: /raid_3/oradata
v DB2: /raid_3/db2data
Note: If your site has established naming or password conventions, you can
substitute site-specific values for these settings. However, IBM strongly
recommends using the default values the first time you install Tivoli Netcool
Performance Manager. See “Specifying a basename for DB_USER_ROOT” on page
89 for more information.
You can use the DB2 Setup wizard to create the users and groups during the
installation process. If you want, you can create them ahead of time.
To perform this task, you must have root user authority to create users and
groups.
The user and group names that are used in the following instructions are
documented in the following table. You can specify your own user and group
names if they adhere to system naming rules and DB2 naming rules.
The user IDs you create are required to complete subsequent setup tasks.
User ID restrictions
Procedure
To create the required groups and user IDs for DB2 database systems, follow these
steps:
1. Log in as root user.
2. To create groups on Linux operating systems, enter the following commands:
Note: These command line examples do not contain passwords. You can use
the passwd username command from the command line to set the password.
groupadd db2iadm
groupadd db2fadm
3. Create users for each group by using the following commands:
useradd -g db2iadm -m -d /opt/db2 db2
useradd -g db2fadm -m -d /home/db2fenc db2fenc
4. Set the initial password by using the following commands:
passwd db2
Changing password for user db2.
New UNIX password: db2
BAD PASSWORD: it is WAY too short
Retype new UNIX password: db2
passwd: all authentication tokens updated successfully.
passwd db2fenc
Changing password for user db2fenc.
New UNIX password: db2fenc
BAD PASSWORD: it is based on a dictionary
Retype new UNIX password: db2fenc
passwd: all authentication tokens updated successfully.
5. Relax the permissions on the home directory of the db2 user, because DB2 is
installed inside it, by using the following command:
chmod 707 /opt/db2
Creating group and user IDs for Data Server Client installation
Procedure
To create the required groups and user IDs for Data Server Client installation,
follow these steps:
1. Log in as a user with root user authority.
2. To create groups on Linux operating systems, enter the following commands:
Note: These command line examples do not contain passwords. They are
examples only. You can use the passwd username command from the
command line to set the password.
groupadd db2iadm
3. Create users for each group by using the following commands:
useradd -g db2iadm -m -d /opt/db2 db2
4. Set the initial password by using the following commands:
passwd db2
Changing password for user db2.
New UNIX password: db2
These factors determine the installation scenario that you use to install Jazz
for Service Management. You can also use the decision maps. See Installation
decision maps.
Important:
v Do not install Jazz for Service Management 1.1.2.1 on a Solaris machine in a
distributed or a stand-alone environment.
v For minimal installation of Tivoli Netcool Performance Manager, use only
smadmin as the administration user name and smadmin1 as the administration
password for Jazz for Service Management. The default administration user
name is smadmin.
v To start or stop the server, you must log in as the same user that you used
to install Jazz for Service Management. If Jazz for Service Management is
installed as a non-root user, never restart the Jazz for Service Management
application servers as the root user; otherwise, the Dashboard Application
Services Hub becomes unusable.
Related information:
Important: You must add the following list of Linux libraries, which are
prerequisites for installing Jazz for Service Management. If you do not
include these libraries, the Prerequisite Scanner or the Jazz for Service
Management installation can fail.
Procedure
1. Plan and install Jazz for Service Management.
Plan your installation: You can perform a full or custom installation of Jazz
for Service Management based on the integration services to install, business
and security policies, your target environments, and user types.
Related information:
Before you install Jazz for Service Management, refer to the technical notes for Jazz
for Service Management. The technical notes provide information about
late-breaking issues, limitations, and workarounds.
v Technotes documenting issues in Jazz for Service Management Version 1.1.2.1
v Technotes documenting issues in Jazz for Service Management Version 1.1.2.0
Note: Some antivirus software can interfere with the Jazz for Service Management
installation process. Before you install Jazz for Service Management, ensure that
you disable the antivirus software and restart the target machine.
If your target system does not have a browser that is installed to support
installation by using the launchpad, use the Installation Manager GUI.
Procedure
1. Ensure that your target environment meets the hardware and software
requirements for Jazz for Service Management and its integration services.
See Hardware and software requirements.
2. Set up a local file system.
See Setting up a local file system for a full installation.
3. If you want to use an alternative temporary directory for installing the
integration services, see Specifying an alternative temporary directory for
installation.
4. If you do not intend to install Jazz for Service Management to the default
installation directory or use the default temporary directory, you must modify
the Prerequisite Scanner configuration files to check whether these non-default
directories have the available disk space. See Editing default configuration files
for non-default installation locations.
5. Perform the full installation.
See “Performing a fresh installation” on page 106.
Related information:
Note: Some antivirus software can interfere with the Jazz for Service Management
installation process. Before you install Jazz for Service Management, ensure that
you disable the antivirus software and restart the target machine.
If your target system does not have a browser that is installed to support
installation by using the launchpad, use the Installation Manager GUI.
Procedure
1. Choose your preferred topology.
See Custom installation scenario.
2. Ensure that your target environment meets the hardware and software
requirements for Jazz for Service Management and its integration services.
See Hardware and software requirements.
3. Set up a local file system.
See Setting up a local file system for a custom installation.
4. If you want to use an alternative temporary directory for installing the
integration services, see Specifying an alternative temporary directory for
installation.
5. If you do not want to install Jazz for Service Management and its supporting
middleware to the default installation locations, you must modify the
Prerequisite Scanner configuration files to check whether these non-default
locations have sufficient available disk space.
See Editing default configuration files for non-default installation locations.
6. On each target Jazz for Service Management server, run Prerequisite Scanner to
scan and verify that your target environment meets the hardware and software
requirements for Jazz for Service Management.
See Running Prerequisite Scanner by using convenience scripts.
Note: Jazz for Service Management Version 1.1.2.1 is a full refresh of Jazz for
Service Management Version 1.1 Base with Modification 2, Fix Pack 1.
You can download Jazz for Service Management, Version 1.1.2.1 from IBM Fix
Central.
Attention: Ensure that the package that you have downloaded is the latest
refresh of Jazz for Service Management, Version 1.1.2.1.
1. Check that your environment meets the current requirements by running IBM
Prerequisite Scanner. Specify the update parameter when you run the relevant
Prerequisite Scanner convenience script.
See Running Prerequisite Scanner by running the convenience scripts
See Running Prerequisite Scanner manually
2. Before you update Jazz for Service Management, see the late-breaking issues,
limitations, and workarounds from here:
http://www.ibm.com/support/search.wss?q=jazzsm1121relnotes
3. Update your existing Installation Manager to Version 1.8.1 or later, before you
update Jazz for Service Management.
4. Tivoli Common Reporting 3.1.2.1 supports rolling back and upgrading, but if a
rollback fails in its early stages, you must roll it back manually.
See Restoring Cognos if Tivoli Common Reporting update fails on multiple
platforms.
Back up your jazzSM_Home/reporting/cognos directory before you begin the
update.
Note: You must update the installed integration services in the same application
server profile to the same fix pack level.
./IBMIM
4. Set up the fix pack repository preference.
a. Select File > Preferences.
b. In the Preferences > Repositories pane, click Add Repository.
c. Click Browse and browse to the following location of the file:
<JazzSM_FP_Home>/1.1.2-TIV-JazzSM-multi-FP001/JazzSMFPRepository/
disk1/diskTag.inf
d. Click Apply.
e. Click OK.
f. Click OK to close the Repositories pane.
5. On the Installation Manager home page, click Update. The Update Packages
window opens.
6. Select the Jazz for Service Management software package group in which the
integration services are installed, and then click Next.
7. Select each check box that is associated with each installed component that
you want to update, and click Next.
The Licenses pane opens.
8. Review the license agreement for the software packages, accept the terms,
and click Next.
The Features pane opens.
9. Select the features that you want to update, and click Next.
10. In the Common Configurations tab, enter the password for the smadmin user.
11. Click Validate.
12. After the validation completes successfully, click Next.
13. Continue with the installation and specify the configuration details for the
integration service that you want to update.
For more information, see Integration services installation overview.
14. In the Summary pane, review the software packages that you want to install
and click Update. After Installation Manager updates the fix pack, it displays
a message.
15. Click Finish.
What to do next
v Apply the 1.1.2.1-TIV-JazzSM-DASH-Cumulative-Patch-0005 interim fix of Jazz
for Service Management.
Procedure
1. Download the 1.1.2-TIV-JazzSM-multi-FP001.zip file from Fix Central and
extract it to a local directory, for example, <JazzSM_FP_Home>.
Restriction: Ensure that the path to the <JazzSM_FP_Home> directory does not
contain any spaces or special characters, otherwise the launchpad does not
start.
Important:
v It is recommended that you have only one instance of the launchpad open
at a time.
v If DB2, Tivoli Common Reporting, or WebSphere Application Server
repository is available on a shared network drive, ensure that you run the
launchpad from the local file system to access the repository on the shared
drive, and install Jazz for Service Management.
4. Click Full.
The Full Installation window opens.
5. Review the instructions in the Full Installation window, and click Next. The
Full Installation > Software License Agreement window opens.
CAUTION:
You can continue with the full installation without taking
appropriate action, but it might fail or install with issues.
Pass
If the target environment meets all prerequisite checks, Prerequisite
Scanner returns an overall PASS result for the environment.
If Prerequisite Scanner returns this result, you can install Jazz for
Service Management.
8. Click Next.
The Basic Settings window opens.
9. Verify the default values, or change them as needed:
Option Description
User name The administration user ID for the database,
application servers, and Jazz for Service
Management integration services. The
default value is smadmin.
Restriction: On Linux systems only: The
length of the user ID must be a maximum of
8 characters; otherwise, the installation
program cannot create the DB2 instance.
Password and Confirm password The password that is associated with any
users created by the full installation. The
password must have a minimum of 8
alphanumeric characters and must not
contain special characters or space.
Local host name The fully qualified name or IP address of the
local server on which you install the
software. The default value is the fully
qualified host name that the launchpad
retrieves from the local server. If it is not a
valid value, you can change the value.
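The password rule in the table above (a minimum of 8 alphanumeric characters, no special characters or spaces) can be checked up front with a small shell test. This is only a sketch of the stated rule, not part of the installation program:

```shell
# Sketch of the stated password rule: minimum 8 characters, alphanumeric
# only (no special characters or spaces). POSIX sh.
valid_pw() {
  case "$1" in
    *[!A-Za-z0-9]*) return 1 ;;  # rejects special characters and spaces
  esac
  [ "${#1}" -ge 8 ]
}
valid_pw 'smadmin99' && echo "password ok"
```

Running the check before you start the launchpad avoids a validation failure in the Basic Settings window.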
Results
The generic and offering specific log files that are generated during the full
installation are saved in the following locations:
What to do next
v Apply the 1.1.2.1-TIV-JazzSM-DASH-Cumulative-Patch-0005 interim fix of Jazz
for Service Management.
v Verify the installation of the integration services.
Related tasks:
“Performing an upgrade installation” on page 104
Use this installation procedure if you already have an earlier version of Jazz for
Service Management on your system. For example, 1.1.1.0.
Procedure
Use IBM Installation Manager in GUI or silent modes to uninstall Jazz for Service
Management Version 1.1.2.1.
See Uninstalling fix packs by using Installation Manager GUI mode.
See Uninstalling fix packs by using Installation Manager silent mode.
Important: When you revert to the previous version of Jazz for Service
Management, Installation Manager does not automatically account for interim
fixes. You must manually install interim fixes after you roll back.
Upgrading Java
Ensure that you configure IBM WebSphere Application Server to run with Java
7 after you successfully install Jazz for Service Management 1.1.2.1.
Important: The IBM WebSphere Application Server must be running with Java 7
before you proceed to upgrade Tivoli Netcool Performance Manager DataView
component.
Procedure
1. On the relevant Jazz for Service Management server, open a command window.
2. Run the following command to configure IBM WebSphere Application Server to
run with Java 7:
/opt/IBM/WebSphere/AppServer/bin/managesdk.sh -enableProfile -profileName
JazzSMProfile -sdkname 1.7_64 -enableServers
You must restart the Jazz for Service Management 1.1.2.1 application server
after you complete the Java upgrade step.
Instructions on how to install the Oracle and DB2 databases for Tivoli Netcool
Performance Manager.
Note: You must purge the metric data regularly, either manually or by using a
cron job. For more information, see the Purging metric data topic in the
Administering Database guide.
To install Oracle 12.1.0.2 server (64-bit), you can use the scripts provided as part of
Tivoli Netcool Performance Manager.
Note: Oracle 12.1.0.2 client (32-bit) must be installed into a new ORACLE_HOME.
Procedure
1. Log in as root.
2. Set the DISPLAY environment variable.
3. Create a directory to hold the contents of the Oracle distribution. For example:
# mkdir /var/tmp/oracle12102
4. Download the Oracle files to the /var/tmp/oracle12102 directory.
5. Extract the Oracle distribution files that now exist in the /var/tmp/oracle12102
directory.
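Steps 3 through 5 can be sketched as follows. The unzip archive names are placeholders, not the real Oracle file names; keep the names of the files you actually downloaded:

```shell
# Sketch of steps 3-5: stage the Oracle 12.1.0.2 distribution.
# The commented archive names are placeholders only.
ORA_DIST_DIR=/var/tmp/oracle12102
mkdir -p "$ORA_DIST_DIR"
cd "$ORA_DIST_DIR"
# unzip -q database_12102_1of2.zip   # placeholder archive name
# unzip -q database_12102_2of2.zip   # placeholder archive name
echo "distribution staged in $ORA_DIST_DIR"
```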
Results
The directory that you created and downloaded the Oracle 12.1.0.2 distribution
into is referred to as <ORA_DIST_DIR> from now on.
What to do next
Before you proceed to the next step, make sure that you obtain the upgrade
instructions provided by Oracle. The instructions contain information on
performing steps that are required for the upgrade that are not documented in this
guide.
Procedure
1. Make sure all the required Solaris packages and patches are installed on your
system. All required packages and patches are specified in the Configuration
Recommendations.
2. If these packages are not on your system, see the relevant operating system
Installation Guide for instructions on installing supplementary package
software.
Note: You must use the same Oracle username across all instances of Oracle
Client and Server throughout your Tivoli Netcool Performance Manager system.
v Creates the Oracle directory structure
v Creates startup and shutdown scripts for Oracle server processes
Note: If the oracle user is not created, the script creates this user for you, and
ORACLE_BASE is set as the user home directory. If you would prefer to use a
different home directory for the oracle user, create the oracle user before you run
the script. The script does not create an oracle user if one exists.
Note: It is likely that you already created the dba and oinstall groups and the
oracle user. However, you must still run this script to create the required
Oracle directory structure.
Procedure
1. As root, set the ORACLE_BASE and ORACLE_HOME environment variables.
For example:
# export ORACLE_BASE=/opt/oracle/
# export ORACLE_HOME=/opt/oracle/product/12.1.0
Note: The script places this variable into the oracle login account's .profile
file.
To check that the variable is set correctly, enter the following command:
# env | grep ORA
2. Change to the following directory:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL<version_num>/oracle/instance
where:
<DIST_DIR> is the directory on the hard disk drive where you copied the
contents of the Tivoli Netcool Performance Manager distribution in “Download
the Oracle distribution to disk” on page 127.
3. Run the Oracle configuration script by entering the following command:
# ./configure_ora
Menu :
1. Modify Oracle software owner.
2. Next supported release.
3. Check environment.
0. Exit
Choice :
Note: You must set a password for the oracle login name.
The following example shows the directory structure that is created for Oracle,
where ORACLE_BASE was set to /opt/oracle:
/opt/oracle/product
12.1.0.2
12.1.0.2/dbs
/opt/oracle/admin
/opt/oracle/admin/skeleton
/opt/oracle/admin/skeleton/bin
/opt/oracle/local
Specific files:
v /etc/init.d/dbora, which starts the Oracle Listener and database server
automatically on each system restart
v Symbolic links to /etc/init.d/dbora in /etc/rc0.d, /etc/rc1.d, and /etc/rc2.d
v Oracle configuration files /var/opt/oracle/oratab and lsnrtab.
Specific files:
v /etc/inittab is modified to contain the dbstart and lsnrctl start calls.
v /etc/rc.shutdown is modified to contain the dbshut and lsnrctl stop
commands.
v Oracle configuration files /etc/oratab and /etc/lsnrtab.
Common files:
Note:
v The value of ORACLE_HOME cannot contain soft links to other directories or
file systems. Be sure to specify the entire absolute path to Oracle.
v You must add the ORACLE_SID variable to this file later, in “Set the
ORACLE_SID variable” on page 118.
To set a password:
Procedure
1. Log in as root.
2. Enter the following command:
# passwd oracle
3. Enter and reenter the password (oracle, by default) as prompted. The
password is set.
Procedure
1. As root, change directory by using the following command:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance/ora_installer
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance/ora_installer
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL<version_num>/oracle/instance/ora_installer
2. Set the ORACLE_BASE environment variable. For example:
# ORACLE_BASE=/opt/oracle
# export ORACLE_BASE
Note: You must use the same ORACLE_BASE setting that you specified in
"Run the Oracle server configuration script".
3. Enter the following command:
# ./pre_install_as_root
The following messages indicate success:
Checking that you are logged in as root --> Ok.
Checking ORACLE_BASE --> Ok.
Checking oraInst.loc file --> Ok.
If the script shows an error, correct the situation causing the error before
proceeding to the next step.
Procedure
1. Log in as root or as a superuser.
2. Set the DISPLAY environment variable.
3. Change to the directory <ORA_DIST_DIR>/database.
Note: For more information on this Oracle error, see Oracle Metalink Article
282036.1.
Procedure
1. Log in as oracle. Set and export the DISPLAY environment variable.
If you are using the su command to become oracle, use a hyphen as the second
argument so the oracle user login environment is loaded:
# su - oracle
2. Verify that the environment variable ORACLE_BASE has been set by entering
the following command:
$ env | grep ORA
If the response does not include ORACLE_BASE=/opt/oracle, stop and make sure
the .profile file was set for the oracle user, as described in “Run the Oracle
client configuration script” on page 127.
3. To verify the path, enter the following command:
$ echo $PATH
The output must show that /usr/ccs/bin is part of the search path. For
example:
/usr/bin:/opt/oracle/product/12.1.0/bin:/usr/ccs/bin
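The PATH check in step 3 can be scripted rather than inspected by eye. A small sketch in POSIX sh, not part of the product scripts:

```shell
# Sketch: scriptable form of the PATH check above (POSIX sh)
path_has() {
  case ":$PATH:" in
    *:"$1":*) return 0 ;;
    *) return 1 ;;
  esac
}
path_has /usr/ccs/bin && echo "PATH ok" || echo "add /usr/ccs/bin to PATH"
```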
Procedure
1. Log in as oracle.
2. Open the .profile file with a text editor.
3. Add the following line anywhere between the Begin and End Oracle Settings
comment lines:
ORACLE_SID=PV; export ORACLE_SID
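Step 3 can also be done non-interactively. The sketch below edits a scratch copy of .profile so it runs anywhere; apply the same sed edit to the oracle user's .profile for real. The marker comments match the Begin and End Oracle Settings lines shown later in this guide, and the sed -i syntax is GNU sed:

```shell
# Sketch: insert the ORACLE_SID line between the Oracle settings markers.
# Uses a scratch copy of .profile; GNU sed syntax.
profile=/tmp/profile.sketch
cat > "$profile" <<'EOF'
# -- Begin Oracle Settings --
ORACLE_BASE=/opt/oracle
# -- End Oracle Settings --
EOF
sed -i '/^# -- End Oracle Settings --/i\ORACLE_SID=PV; export ORACLE_SID' "$profile"
grep ORACLE_SID "$profile"
```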
Note: The Oracle installation script that is provided by IBM is used to install
the Oracle server or Oracle client, and to apply upgrade patches to an existing
Oracle server or client installation.
Procedure
1. As the oracle user, change directory by using the following command:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance/ora_installer
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance/ora_installer
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL<version_num>/oracle/instance/ora_installer
where:
<DIST_DIR> is the directory on the hard disk drive where you copied the
contents of the Tivoli Netcool Performance Manager distribution in
“Download the Oracle distribution to disk” on page 127.
2. Enter the following command to start the installer:
$ ./perform_oracle_inst
The installation menu is displayed:
--------------------------------------------------
perform_oracle_inst
Installation of oracle binaries
<Current Date>
--------------------------------------------------
OS ........... : [ Linux 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 ]
Host ......... : [ tnpmuslnx0110.myhost.example.com ]
Logname ...... : [ oracle ]
Install Oracle release .... : [ 12.1.0 ]
Installation type.......... : [ Server ]
Enter the appropriate letter to modify the entries below:
a) ORACLE_BASE .. : [ /opt/oracle ]
b) ORACLE_HOME .. : [ /opt/oracle/product/12.1.0 ]
c) DBA group ..................... : [ dba ]
d) OUI Inventory group ........... : [ oinstall ]
e) Oracle Software owner ......... : [ oracle ]
f) Directory where CDs were copied:
[ <ORA_DIST_DIR> ]
Menu :
1. Next supported release
2. Set install type to: Client
3. Perform install
0. Exit
Choice :
Note: You can safely ignore any font.properties not found messages in the
output.
When the installation reaches the In Summary Page stage, the installation
slows down significantly while Oracle files are copied and linked.
13. The following message is displayed when the installation completes:
In End of Installation Page
The installation of Oracle12c Database was successful.
Check /opt/oracle/oraInventory/logs/silentInstall2011-09-28_04-23-53PM.log
for more details.
The Oracle installation has completed. Check the
messages above to determine if the install completed
successfully. If you do not see successful completion
messages, consult the install log at:
/opt/oracle/oraInventory/logs
Press C to continue...
Note: For any installation error, write down the log file location to aid in
troubleshooting.
14. Type C and press Enter to return to the installation menu.
15. Type 0 and press Enter to exit the installation menu.
What to do next
This step is also required after an Oracle patch installation. See “Verify the required
operating system packages” on page 112.
Procedure
1. Log in as root or become superuser. Set the DISPLAY environment variable.
2. Change to the directory where Oracle server files were installed. (This is the
directory as set in the ORACLE_HOME environment variable.) For example:
# cd /opt/oracle/product/12.1.0
3. Run the following command:
./root.sh
Messages like the following are displayed:
File contents:
Performing root user operation for Oracle 12c
Procedure
1. Log in as oracle.
2. Depending on your operating system, change to the following directory:
On Solaris systems:
$ cd /var/opt/oracle
On AIX systems:
$ cd /etc
On Linux systems:
$ cd /etc
3. Edit the oratab file with a text editor. The last line of this file looks like this
example:
*:/opt/oracle/product/12.1.0:N
4. Make the following edits to this line:
v Replace * with your ORACLE_SID value (PV by default).
v Replace N with Y.
The last line must now be:
PV:/opt/oracle/product/12.1.0:Y
5. Save and close the oratab file.
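Steps 3 and 4 can be collapsed into one sed edit. The sketch below works on a scratch copy of oratab so it is safe to run anywhere; point it at the real oratab file (whose location depends on your operating system, as in step 2) when you apply it. GNU sed syntax:

```shell
# Sketch of steps 3-4 as a one-line edit, shown on a scratch copy of
# oratab. Edit the real oratab file for your system.
oratab=/tmp/oratab.sketch
echo '*:/opt/oracle/product/12.1.0:N' > "$oratab"
sed -i 's|^\*:\(.*\):N$|PV:\1:Y|' "$oratab"
cat "$oratab"
```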
Note: Instead of creating the listener.ora file manually, as described in the steps
that follow, you can create it by running the Oracle Net Configuration Assistant
utility. See the Oracle Corporation documentation for information about Net
Configuration Assistant.
The Oracle Listener process manages database connection requests from Oracle
clients to an Oracle server.
Procedure
1. Log in as oracle.
2. Change to one of the following directories:
$ cd $TNS_ADMIN
or
$ cd /opt/oracle/product/12.1.0/network/admin
3. Copy the sample listener.ora contained in the /opt/oracle/admin/skeleton/
bin directory:
$ cp /opt/oracle/admin/skeleton/bin/template.example_tnpm.listener.ora listener.ora
Note: By Oracle convention, the keywords in this file are in uppercase but
uppercase is not required.
# listener.ora network configuration file in directory
# /opt/oracle/product/12.1.0/network/admin
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP) (HOST = {HOST}) (PORT = 1521))
)
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = IPC) (KEY = EXTPROC))
)
)
)
SID_LIST_LISTENER =
(SID_LIST =
(SID_DESC =
(SID_NAME = PLSExtProc)
(ORACLE_HOME = /opt/oracle/product/12.1.0)
(PROGRAM = extproc)
)
(SID_DESC =
(GLOBAL_DBNAME = PV.WORLD)
(SID_NAME = PV)
(ORACLE_HOME = /opt/oracle/product/12.1.0)
)
)
4. Using a text editor, make the following change:
a. Replace the {HOST} placeholder in the line (HOST = {HOST}) with the name
of your Oracle server.
5. Depending on your operating system, change to the directory that contains
the lsnrtab file:
On AIX systems:
$ cd /etc
On Linux systems:
$ cd /etc
6. Edit the lsnrtab file and add a line in the following format to the end of the
file (after the initial comments):
LISTENER:value_of_ORACLE_HOME:Y
For example:
LISTENER:/opt/oracle/product/12.1.0:Y
In this syntax, LISTENER is the name of the listener process.
7. Save and close the file.
8. Test that the listener process works correctly by starting it manually using the
following command:
lsnrctl start
(The lsnrctl command also accepts the stop and status arguments.) Look for
a successful completion message.
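The placeholder substitution in step 4 can be scripted. The sketch below uses a scratch copy of the line in question, with db1.example.com standing in for your Oracle server name; the placeholder is written {HOST} in the template, although older copies may show yourhost. GNU sed syntax:

```shell
# Sketch: substitute the listener.ora host placeholder on a scratch copy.
# db1.example.com is a stand-in for your Oracle server name.
f=/tmp/listener.ora.sketch
echo '(ADDRESS = (PROTOCOL = TCP) (HOST = {HOST}) (PORT = 1521))' > "$f"
sed -i 's/{HOST}/db1.example.com/' "$f"
cat "$f"
```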
Procedure
1. Log in as oracle.
2. Change to one of the following directories:
$ cd $TNS_ADMIN
or
$ cd /opt/oracle/product/12.1.0/network/admin
3. Create the sqlnet.ora file, which will manage Oracle network operations. You
must create an sqlnet.ora file for both Oracle server and Oracle client
installations. Follow these steps:
a. Copy the sample sqlnet.ora file, template.example_tnpm.sqlnet.ora,
contained in the /opt/oracle/admin/skeleton/bin/ directory:
$ cp /opt/oracle/admin/skeleton/bin/template.example_tnpm.sqlnet.ora sqlnet.ora
b. Add the required lines to this file.
c. Restart the listener process:
lsnrctl stop
lsnrctl start
Look for a successful completion message.
4. Create the tnsnames.ora file, which maintains the relationships between logical
node names and physical locations of Oracle Servers in the network. You can
do this by copying the existing sample file:
cp /opt/oracle/admin/skeleton/bin/template.example_tnpm.tnsnames.ora tnsnames.ora
Follow these steps:
a. Enter lines similar to the following example, using the actual name of your
Oracle server in the HOST=delphi line and replacing {SID} with PV or your
Oracle SID.
# tnsnames.ora network configuration file in
# /opt/oracle/product/12.1.0/network/admin
#
# The EXTPROC entry only needs to exist in the
# tnsnames.ora file on the Oracle server.
# For Oracle client installations, tnsnames.ora
# only needs the PV.WORLD entry.
EXTPROC_CONNECTION_DATA.WORLD =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = IPC)
(KEY = EXTPROC)
)
)
(CONNECT_DATA = (SID = PLSExtProc)
(PRESENTATION = RO)
)
)
PV.WORLD =
(DESCRIPTION =
(ENABLE=BROKEN)
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = delphi)
(PORT = 1521)
)
)
(CONNECT_DATA =
Note: If either test is not successful, check your configuration and retest.
The deployer and Topology Editor use the Oracle 12.1.0.2 client (32-bit).
Because the deployer must run on the database host server, the Oracle 12.1.0.2
client (32-bit) must be installed in the Tivoli Netcool Performance Manager
environment.
You must specify a different directory path for the Oracle 12.1.0.2 client (32-bit)
from the directory specified when installing the Oracle 12.1.0.2 server. For example,
assuming that Oracle 12.1.0.2 server is installed in /opt/oracle/product/12.1.0
then you must install the Oracle 12.1.0.2 client (32-bit) in /opt/oracle/product/
12.1.0-client32.
You must install the 32-bit Oracle client on all machines except the server
that hosts the Tivoli Netcool Performance Manager database server. If another
Tivoli Netcool Performance Manager component runs on the server that hosts the
Tivoli Netcool Performance Manager database server, the client must also be
installed on that server, with separate ORACLE_HOME directories for the Oracle
client and server.
Note: Oracle 12.1.0.2 client (32-bit) must be installed into a new ORACLE_HOME.
Procedure
1. Log in as root.
2. Set the DISPLAY environment variable.
3. Create a directory to hold the contents of the Oracle distribution. For example:
# mkdir /var/tmp/oracle12102
4. Download the Oracle files to the /var/tmp/oracle12102 directory.
5. Extract the Oracle distribution files that now exist in the /var/tmp/oracle12102
directory.
6. Create a soft link for the client32 folder from the
Oracle distribution files directory:
# cd /var/tmp/oracle12102
# ln -s client32 client
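Step 6 can be written as a self-contained sketch; the mkdir below stands in for the extracted distribution, which already contains client32, and readlink confirms that the soft link resolves correctly:

```shell
# Sketch of step 6, made self-contained (the mkdir stands in for the
# extracted distribution, which already contains client32)
ORA_DIST_DIR=/var/tmp/oracle12102
mkdir -p "$ORA_DIST_DIR/client32"
cd "$ORA_DIST_DIR"
ln -sf client32 client
readlink client
```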
Results
The directory that you created and downloaded the Oracle 12.1.0.2 distribution
into is referred to as <ORA_DIST_DIR> from now on.
What to do next
Before you proceed to the next step, make sure that you obtain the upgrade
instructions provided by Oracle. The instructions contain information on
performing steps that are required for the upgrade that are not documented in this
guide.
This script is named configure_client and is located with the Tivoli Netcool
Performance Manager files that are downloaded as part of the Tivoli Netcool
Performance Manager distribution.
The client configuration script makes the following changes to the local system:
v Adds the dba and oinstall groups to /etc/group.
v Adds the login name oracle, whose primary group membership is dba, and
whose secondary group membership is oinstall.
Note: You must use the same Oracle username across all instances of Oracle
Client and Server throughout your Tivoli Netcool Performance Manager system.
v Creates the Oracle client directory structure. When you create the environment
for Oracle 12.1.0.2, the default location for this directory structure is
/opt/oracle/product/12.1.0-client32. You specify this directory as the target
location when you install the Oracle client.
Note: None of these changes are made if they are already in place.
Procedure
1. Log in as root.
2. Set the ORACLE_BASE environment variable to point to the top-level directory
where you want the Oracle client files installed. The default installation
directory is /opt/oracle.
For example:
# ORACLE_BASE=/opt/oracle
# export ORACLE_BASE
Note: The configure_client script places this variable into the oracle login
name's .profile file.
To check that the variable is set correctly, enter the following command:
# env | grep ORA
3. Set the ORACLE_HOME environment variable. For example:
# ORACLE_HOME=/opt/oracle/product/12.1.0-client32
# export ORACLE_HOME
Note: The value defined in the configure_client script for ORACLE_HOME is the
value needed in the Topology Editor for Oracle Home on the host level.
4. Change to the following directory:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL<version_num>/oracle/instance
where:
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution in “Download the Oracle
distribution to disk” on page 127.
5. Run the Oracle client configuration script by entering the following
command:
# ./configure_client
ORACLE_BASE .. : [ /opt/oracle ]
ORACLE_HOME .. : [ /opt/oracle/product/12.1.0-client32]
DBA group ................. : [ dba ]
OUI Inventory group ....... : [ oinstall ]
Oracle Software owner ..... : [ oracle ]
Menu :
1. Modify Oracle software owner.
2. Next supported release.
3. Check environment.
0. Exit
Choice :
The configure_client script that you ran in the previous section creates the
oracle login name. You must assign a password to the oracle login name to
maintain system security, and because subsequent installation steps expect the
password to be set.
To set a password:
Procedure
1. Log in as root.
2. Enter the following command:
# passwd oracle
3. Enter and reenter the password (oracle, by default) as prompted. The
password is set.
Procedure
1. As root, change directory by using the following command:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance/ora_installer
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance/ora_installer
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL<version_num>/oracle/instance/ora_installer
where:
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution in “Download the Oracle
distribution to disk” on page 127.
2. Set the ORACLE_BASE environment variable. For example:
# ORACLE_BASE=/opt/oracle
# export ORACLE_BASE
If the script shows an error, correct the situation causing the error before
proceeding to the next step.
Procedure
1. Log in as oracle. Set and export the DISPLAY environment variable.
If you are using the su command to become oracle, use a hyphen as the second
argument so the oracle user login environment is loaded:
# su - oracle
2. Verify that the environment variable ORACLE_BASE has been set by entering
the following command:
$ env | grep ORA
If the response does not include ORACLE_BASE=/opt/oracle, stop and make sure
the .profile file was set for the oracle user, as described in “Run the Oracle
client configuration script” on page 127.
3. To verify the path, enter the following command:
$ echo $PATH
The output must show that /usr/ccs/bin is part of the search path. For
example:
/usr/bin:/opt/oracle/product/12.1.0/bin:/usr/ccs/bin
An Oracle client installation is not usable until the following Net configuration
files are configured and installed:
v tnsnames.ora
v sqlnet.ora
Procedure
1. Log in as oracle.
2. Change to the following directory:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/DataBase/SOL10/oracle/instance/ora_installer
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/DataBase/AIX<version_num>/oracle/instance/ora_installer
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL<version_num>/oracle/instance/ora_installer
Where:
v <DIST_DIR> is the directory on the hard disk drive where you copied the
contents of the Tivoli Netcool Performance Manager distribution in
“Download the Oracle distribution to disk” on page 127.
3. Enter the following command to start the installer:
$ ./perform_oracle_inst
a)ORACLE_BASE .. : [ /opt/oracle ]
b)ORACLE_HOME .. : [ /opt/oracle/product/12.1.0-client32 ]
c)DBA group ..................... : [ dba ]
d)OUI Inventory group ........... : [ oinstall ]
e)Oracle Software owner ......... : [ oracle ]
f)Directory where CDs were copied:
[ ]
Menu :
1. Next supported release
2. Set install type to: Client
3. Perform install
0. Exit
Choice :
Note:
a. The ORACLE_HOME must match the ORACLE_HOME set in the deployer for the
server on which the client is being installed.
b. Make sure the listener process is down before you start installing Oracle
12.1.0.2 client (32-bit).
4. Enter f at the Choice prompt and press Enter.
5. Enter the full path to the <ORA_DIST_DIR>, as created in “Download the
Oracle distribution to disk” on page 127. For example:
Choice: f
Enter new value for CD directory: /var/tmp/oracle12102
6. Edit any other menu settings as necessary. Make sure that the values for
ORACLE_BASE and ORACLE_HOME correspond to the locations you specified when
you ran the Oracle client configuration script.
7. To start the Oracle installation, type 3 at the Choice prompt and press Enter.
8. The installation script checks the environment, then asks whether you want to
perform the installation. Type Y at the Choice prompt and press Enter. The
installation script starts installing Oracle and displays a series of status
messages.
Note: You can safely ignore any font.properties not found messages in the
output.
When the installation reaches the In Summary Page stage, the installation
slows down significantly while Oracle files are copied and linked.
This step is also required after an Oracle patch installation. See “Verify the required
operating system packages” on page 112.
Procedure
1. Log in as root or become superuser. Set the DISPLAY environment variable.
2. Change to the directory where Oracle client files were installed. (This is the
directory as set in the ORACLE_HOME environment variable.) For example:
# cd /opt/oracle/product/12.1.0-client32
3. Run the following command:
./root.sh
Messages like the following are displayed:
Running Oracle12c root.sh script...
# ./root.sh
Check /opt/oracle/product/12.1.0-client32/install/root_
<server_hostname>_2014-03-18_13-45-04.log for the output of root script
File contents:
Performing root user operation for Oracle 12c
Procedure
1. On the primary host server that hosts the Tivoli Netcool Performance Manager
database server, make sure that ORACLE_HOME points to $ORACLE_BASE/product/
12.1.0.
2. On the secondary host servers that host any of the Tivoli Netcool
Performance Manager components, make sure that ORACLE_HOME points to
$ORACLE_BASE/product/12.1.0-client32.
3. If there is not already an entry for TNS_ADMIN, add one.
TNS_ADMIN=$ORACLE_HOME/network/admin
When complete, the .profile must look similar to:
# -- Begin Oracle Settings --
umask 022
ORACLE_BASE=/opt/oracle
ORACLE_HOME=$ORACLE_BASE/product/12.1.0
NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1
ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
LD_LIBRARY_PATH=$ORACLE_HOME/lib
TNS_ADMIN=$ORACLE_HOME/network/admin
PATH=$PATH:$ORACLE_HOME/bin:/usr/ccs/bin:/usr/local/bin
EXTPROC_DLLS=ONLY:${LD_LIBRARY_PATH}/libpvmextc.so
ORACLE_SID=PV
export ORACLE_SID
4. Source the .profile file to apply the changes by using the following commands:
$ cd /opt/oracle
$ . ./.profile
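To sanity-check that the settings block behaves as expected when sourced, you can exercise a scratch copy of it. This is a minimal sketch: /tmp/oracle_profile_demo is a hypothetical scratch file, and the paths are the defaults shown in the .profile example above.

```shell
# Write a scratch copy of the Oracle settings block (quoted heredoc, so the
# variables expand only when the file is sourced, not when it is written).
cat > /tmp/oracle_profile_demo <<'EOF'
ORACLE_BASE=/opt/oracle
ORACLE_HOME=$ORACLE_BASE/product/12.1.0
TNS_ADMIN=$ORACLE_HOME/network/admin
ORACLE_SID=PV
export ORACLE_BASE ORACLE_HOME TNS_ADMIN ORACLE_SID
EOF

# Source the file in the current shell and confirm the derived values.
. /tmp/oracle_profile_demo
echo "$ORACLE_HOME"   # -> /opt/oracle/product/12.1.0
echo "$TNS_ADMIN"     # -> /opt/oracle/product/12.1.0/network/admin
```

Note that the file must be sourced with `.` (or `source`) rather than executed, so that the exported variables persist in the login shell.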
Next, you configure the Oracle Net client by setting up the TNS (Transparent
Network Substrate) service names for your Tivoli Netcool Performance Manager
database instance. You must perform this step for each instance of the Oracle client
software that you installed on the system.
Procedure
v You must configure sqlnet.ora and tnsnames.ora files for both Oracle server
and Oracle client installations. However, the tnsnames.ora file for client
installations must not have the EXTPROC_CONNECTION_DATA section.
v If you are installing DataView and one or more other Tivoli Netcool
Performance Manager components on the same system, you must make sure
that the tnsnames.ora and sqlnet.ora files for each set of client software are
kept consistent.
Procedure
1. Log in as oracle.
2. Change to the Oracle 12.1.0.2 client (32-bit) directory:
$ cd /opt/oracle/product/12.1.0-client32/network/admin
3. Create the sqlnet.ora file, which manages Oracle network operations. You
must create an sqlnet.ora file for both Oracle server and Oracle client
installations. Follow these steps:
a. Copy the sqlnet.ora file that was created earlier during the configuration
of the Oracle server Net client from the /opt/oracle/product/12.1.0/network/admin
directory:
$ cp /opt/oracle/product/12.1.0/network/admin/sqlnet.ora .
b. Add the following lines to this file:
NAMES.DIRECTORY_PATH=(TNSNAMES)
NAMES.DEFAULT_DOMAIN=WORLD
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
For example:
# sqlnet.ora network configuration file in
# /opt/oracle/product/12.1.0/network/admin
NAMES.DIRECTORY_PATH=(TNSNAMES)
NAMES.DEFAULT_DOMAIN=WORLD
SQLNET.ALLOWED_LOGON_VERSION_CLIENT=8
SQLNET.ALLOWED_LOGON_VERSION_SERVER=8
Note: If you do not use WORLD as the DEFAULT_DOMAIN value, make sure that
you enter the same value for DEFAULT_DOMAIN in both sqlnet.ora and
tnsnames.ora.
c. Write and quit the sqlnet.ora file.
d. Restart the listener process by using the following commands:
lsnrctl stop
lsnrctl start
Look for a successful completion message.
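Because the note above requires the DEFAULT_DOMAIN value to agree between sqlnet.ora and tnsnames.ora, a quick consistency check can be scripted. This is a hedged sketch that works on scratch copies under /tmp/net_admin_demo; on a real system, point ADMIN at $ORACLE_HOME/network/admin instead.

```shell
# Scratch copies standing in for the real network/admin files.
ADMIN=/tmp/net_admin_demo
mkdir -p "$ADMIN"
printf 'NAMES.DIRECTORY_PATH=(TNSNAMES)\nNAMES.DEFAULT_DOMAIN=WORLD\n' > "$ADMIN/sqlnet.ora"
printf 'PV.WORLD =\n  (DESCRIPTION = ...)\n' > "$ADMIN/tnsnames.ora"

# Extract the domain declared in sqlnet.ora, then confirm that tnsnames.ora
# uses the same suffix in its service names.
dom=$(sed -n 's/^NAMES\.DEFAULT_DOMAIN=//p' "$ADMIN/sqlnet.ora")
grep -q "\.${dom}" "$ADMIN/tnsnames.ora" && echo "DEFAULT_DOMAIN ${dom} is consistent"
```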
You can create a new tnsnames.ora file, or FTP the file from your Oracle server.
Procedure
1. FTP the following file from your Oracle server:
/opt/oracle/admin/skeleton/bin/template.example_tnpm.tnsnames.ora
2. Add the following lines:
# tnsnames.ora network configuration file in
# /opt/oracle/product/12.1.0/network/admin
#
# For Oracle client installations, tnsnames.ora
# only needs the PV.WORLD entry.
PV.WORLD =
(DESCRIPTION =
(ENABLE=BROKEN)
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = yourhost)
(PORT = 1521)
)
)
(CONNECT_DATA =
(SERVICE_NAME = PV.WORLD)
(INSTANCE_NAME = PV)
)
)
PVR.WORLD =
(DESCRIPTION =
(ENABLE=BROKEN)
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST = yourhost)
(PORT = 1521)
)
)
(CONNECT_DATA =
(SERVICE_NAME = PVR.WORLD)
(INSTANCE_NAME = PVR)
)
)
7. Write and quit the file.
Procedure
1. Log in as oracle.
2. Enter a command with the following syntax:
tnsping Net_service_name 10
For example: tnsping PV.WORLD 10
3. Test again, using the same Net instance name without the domain suffix:
tnsping PV 10
Look for successful completion messages (OK).
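A scripted check for the OK marker can save scanning the tnsping output by eye. This sketch scans a saved transcript; the transcript text below is illustrative only, and on a real system you would capture it with something like `tnsping PV.WORLD 10 > /tmp/tnsping_out.txt`.

```shell
# Illustrative transcript standing in for real tnsping output.
cat > /tmp/tnsping_out.txt <<'EOF'
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS_LIST = ...))
OK (10 msec)
EOF

# tnsping prints a line starting with "OK" for each successful round trip.
grep -q '^OK' /tmp/tnsping_out.txt && echo "net service name resolves"
```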
Procedure
1. Log in as root.
2. Set the ORACLE_HOME environment variable:
# ORACLE_HOME=/opt/oracle/product/12.1.0-client32; export ORACLE_HOME
3. Run SQL*Plus by using the sqlplus binary from the Oracle 32-bit client. For
example:
The DB2 Server 10.1.0.5 (64-bit) version provides both 64-bit and 32-bit libraries.
Therefore, you do not need to install the DB2 client on the Tivoli Netcool
Performance Manager database server. However, the DB2 client is required on
every other host where you want to install Tivoli Netcool Performance Manager
components.
Important: In a stand-alone environment, you do not need to install DB2 Server
10.1.0.5 separately. You can use the DB2 server that you installed with Jazz for
Service Management.
Related information:
DB2 Version 10.1 Fix Pack 5 for Linux, UNIX, and Windows
Procedure
1. Log in as root.
2. Set the DISPLAY environment variable.
3. Create a directory to hold the contents of the DB2 distribution. For example:
# mkdir /var/tmp/db2setup1010
4. Download the DB2 files to the /var/tmp/db2setup1010 directory.
5. Extract the DB2 distribution files that now exist in the /var/tmp/db2setup1010
directory.
What to do next
Procedure
1. Make sure all the required Linux packages are installed on your system. All
packages and patches are specified in the Configuration Recommendations.
2. If these packages are not on your system, see the relevant operating system
Installation Guide for instructions on installing supplementary package
software.
Procedure
1. Change to the directory where the DB2 database product distribution is copied
by entering the following command:
cd <DB2_DIST_DIR>
2. If you have downloaded the DB2 database product image, extract the product
file by using the following commands:
gunzip <product>.tar.gz
tar -xvf <product>.tar
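The extract step can be rehearsed end to end with a scratch archive. This is a minimal sketch in which /tmp/db2_extract_demo and a one-file archive stand in for the real <DB2_DIST_DIR> and <product>.tar.gz.

```shell
# Build a scratch archive that stands in for the product distribution.
WORK=/tmp/db2_extract_demo
rm -rf "$WORK" && mkdir -p "$WORK/product" && cd "$WORK"
echo "installer" > product/db2setup
tar -czf product.tar.gz product && rm -r product

# The two-step extract from the procedure above:
gunzip product.tar.gz        # leaves product.tar
tar -xvf product.tar         # restores product/db2setup
ls product/db2setup
```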
***********************************************************
Install into default directory (/opt/ibm/db2/V10.1) ? [yes/no]
no
Enter the full path of the base installation directory:
------------------------------------------------
/opt/db2/product/10.1.0
AESE
ESE
CONSV
WSE
EXP
CLIENT
RTCL
***********************************************************
ESE
******************************************************
Do you want to install the DB2 pureScale Feature? [yes/no]
no
Task #1 start
Description: Checking license agreement acceptance
Estimated time 1 second(s)
Task #1 end
Task #2 start
Description: Base Client Support for installation with root privileges
Estimated time 3 second(s)
Task #2 end
Task #3 start
Description: Product Messages - English
Estimated time 13 second(s)
Task #3 end
Task #4 start
Description: Base client support
Estimated time 235 second(s)
Task #4 end
Task #5 start
Description: Java Runtime Support
Estimated time 153 second(s)
Task #5 end
Task #6 start
Description: Java Help (HTML) - English
Estimated time 7 second(s)
Task #6 end
.
.
.
.
.
.
The execution completed successfully.
Note: The output above shows that the total number of tasks to be performed
is 46, but the installation log shows 48 tasks. This is a known limitation.
Results
Procedure
Use the db2icrt command to create an instance by using the following steps:
Procedure
Ensure that the DB2_db2 service-name is using port 60000. If not, update the
following line in /etc/services file:
DB2_db2 60000/tcp
Note: Here the DB2 port is hardcoded to 60000 because the default DB2 port
number, 50000, is already in use by the DB2 instance that is installed with Jazz
for Service Management.
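The /etc/services check described above can be automated. This hedged sketch works against a scratch copy of the file; on a real host you would run the same awk test against /etc/services itself.

```shell
# Scratch copy standing in for /etc/services.
SVC=/tmp/services_demo
printf 'DB2_db2 60000/tcp\n' > "$SVC"

# Succeed only if the DB2_db2 service name maps to port 60000/tcp.
awk '$1 == "DB2_db2" && $2 == "60000/tcp" { found = 1 } END { exit !found }' "$SVC" \
  && echo "DB2_db2 is bound to 60000"
```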
Results
If the instance is not running, the command starts it. If it is already running, you
might receive the following message:
The database manager is already active
In a Tivoli Netcool Performance Manager system, you must install the Data Server
Client (64-bit) in a distributed environment only. It is not required in a stand-alone
environment. In a distributed environment, install the DB2 client software on each
server where you plan to install a Tivoli Netcool Performance Manager component,
except for the system where you installed the DB2 server.
Instructions on how to install the IBM Data Server Client 10.1.0.5 (64-bit).
Before you begin this task, make sure that you have:
v Downloaded the Tivoli Netcool Performance Manager distribution to disk.
The directory to which the Tivoli Netcool Performance Manager distribution is
downloaded is referred to as <DIST_DIR>.
Procedure
1. Log in as root.
2. Create a directory to hold the contents of the Data Server Client distribution.
For example:
# mkdir /var/tmp/db2setup1010
3. Download the Data Server Client files to the /var/tmp/db2setup1010 directory.
4. Extract the Data Server Client distribution files that now exist in the
/var/tmp/db2setup1010 directory. The directory to which the IBM Data Server
Client 10.1.0.5 distribution is downloaded is referred to as <DB2_DIST_DIR>.
Procedure
1. Change to the directory where the DB2 database product distribution is copied
by entering the following command:
cd <DB2_DIST_DIR>
2. If you have downloaded the DB2 database product image, extract the product
file by using the following commands:
gunzip <product>.tar.gz
tar -xvf <product>.tar
***********************************************************
Install into default directory (/opt/ibm/db2/V10.1) ? [yes/no]
no
Enter the full path of the base installation directory:
------------------------------------------------
/opt/db2/product/10.1.0
AESE
ESE
CONSV
WSE
EXP
CLIENT
RTCL
***********************************************************
CLIENT
***********************************************************
Task #1 start
Description: Checking license agreement acceptance
Estimated time 1 second(s)
Task #1 end
Task #2 start
Description: Base Client Support for installation with root privileges
Estimated time 3 second(s)
Task #2 end
Task #3 start
Description: Product Messages - English
Estimated time 13 second(s)
Task #3 end.
.
.
.
.
.
The execution completed successfully.
Results
Use the db2icrt command to create an instance by using the following steps:
These steps must be performed on every host that has the IBM Data Server Client
installed.
Procedure
1. Log in as the instance user, db2.
2. Run the following commands:
db2 catalog tcpip node <node_name> remote <DB2_Server_Host> server 60000
db2 catalog db <DB_NAME> at node <node_name>
Results
When you complete the catalog settings, you might receive the following warning
message:
DB21056W Directory changes may not be effective until the directory cache is refreshed
Procedure
1. Log in as the root user and run the following commands,
where <DIST_DIR> is the directory path where you downloaded the Tivoli
Netcool Performance Manager distribution. For example, /var/tmp/cdproviso/.
Next steps
The steps that follow the installation of the prerequisite software.
Once you have installed the prerequisite software, you are ready to begin the
actual installation of Tivoli Netcool Performance Manager. Depending on the type
of installation you require, follow the directions in the appropriate chapter:
v If you are planning to install Tivoli Netcool Performance Manager as a
distributed environment that uses clustering for high availability, review the
Tivoli Netcool Performance Manager High Availability documentation, which is
available for download by going to https://www-304.ibm.com/software/
brandcatalog/ismlibrary/details?catalog.label=1TW10NP54 and searching for
"Netcool Proviso High Availability Documentation".
This section describes how to install Tivoli Netcool Performance Manager for the
first time in a fresh, distributed environment.
Note:
v Before installing Tivoli Netcool Performance Manager, ensure that JRE 1.7 is
installed. If this prerequisite is not installed, the DataView installation fails,
but the Install DataView step incorrectly shows as passed.
Important: If you are using a Solaris platform for installing Tivoli Netcool
Performance Manager, ensure that Jazz™ for Service Management is installed on
either a Linux or an AIX system. Jazz™ for Service Management installation is not
supported on the Solaris platform.
In addition, you must have decided how you want to configure your system. Refer
to the following sections:
v “Co-location rules” on page 3
v “Typical installation topology” on page 26
v Appendix A, “Remote installation issues,” on page 235
Note: Before you start the installation, verify that all the database tests have been
performed. Otherwise, the installation might fail. See Chapter 3, “Installing and
configuring the prerequisite software,” on page 67 for information about tnsping.
Procedure
1. Log in as root.
2. Set and export the DISPLAY variable.
See “Setting up a remote X Window display” on page 70.
3. Set and export the BROWSER variable to point to your Web browser. For
example:
On Solaris systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
On AIX systems:
# BROWSER=/usr/mozilla/firefox/firefox
# export BROWSER
On Linux systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
Note: The BROWSER assignment cannot include any spaces around the equal
sign.
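The whitespace rule in the note above can be demonstrated directly: with spaces around `=`, the shell would try to run a command named BROWSER instead of assigning the variable. The path is the example value from this guide.

```shell
# Correct: no whitespace around '='.
BROWSER=/usr/bin/firefox
export BROWSER
echo "$BROWSER"   # -> /usr/bin/firefox

# Incorrect (do not use): BROWSER = /usr/bin/firefox
# The shell would interpret "BROWSER" as a command name and fail.
```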
4. Change directory to the directory where the launchpad resides.
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS
On AIX systems:
# cd <DIST_DIR>/proviso/AIX
Only one instance of the Topology Editor can exist in the Tivoli Netcool
Performance Manager environment. Install the Topology Editor on the same
system that hosts the database server.
You can install the Topology Editor from the launchpad or from the command line.
Procedure
1. You can begin the Topology Editor installation procedure from the command
line or from the Launchpad.
From the launchpad:
a. On the launchpad, click the Install Topology Editor option in the list of
tasks.
b. On the Install Topology Editor page, click the Install Topology Editor link.
From the command line:
a. Log in as root.
b. Change directory to the directory that contains the Topology Editor
installation script:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/topologyEditor/Disk1/InstData/VM
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/topologyEditor/Disk1/InstData/VM
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/topologyEditor/Disk1/InstData/VM
<DIST_DIR> is the directory on the hard disk where you copied the
contents of the Tivoli Netcool Performance Manager distribution.
For more information, see “Downloading the Tivoli Netcool Performance
Manager distribution to disk” on page 87.
c. Enter the following command:
# ./installer.bin
2. The installation wizard opens in a separate window, displaying a welcome
page. Click Next.
v /opt/oracle/product/12.1.0-client32/jdbc/lib
v /opt/db2/product/10.1.0/java
8. Click Next to continue.
9. Review the installation information, then click Install.
10. When the installation is complete, click Done to close the wizard.
The installation wizard installs the Topology Editor and an instance of the
deployer in the following directories:
Interface Directory
Topology Editor install_dir/topologyEditor
For example:
/opt/IBM/proviso/topologyEditor
Deployer install_dir/deployer
For example:
/opt/IBM/proviso/deployer
Results
The combination of the Topology Editor and the deployer is referred to as the
primary deployer.
Note: To uninstall the Topology Editor, follow the instructions in “Uninstalling the
Topology Editor” on page 230. Do not delete the /opt/IBM directory. Doing so can
cause problems when you try to reinstall the Topology Editor.
Once you have installed the Topology Editor, you need to extract the
<DIST_DIR>/license.tar file to <install_dir>/license manually.
For example:
$ cp <DIST_DIR>/license.tar /opt/IBM/proviso/license
$ tar -xvf license.tar
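The manual license step can be rehearsed with scratch paths. In this sketch, /tmp/dist_demo and /tmp/proviso_demo are hypothetical stand-ins for <DIST_DIR> and /opt/IBM/proviso, and the archive contents are illustrative.

```shell
# Build a scratch license.tar standing in for <DIST_DIR>/license.tar.
DIST=/tmp/dist_demo; INSTALL=/tmp/proviso_demo
rm -rf "$DIST" "$INSTALL"
mkdir -p "$DIST/stage" "$INSTALL/license"
echo "license text" > "$DIST/stage/LICENSE.txt"
tar -cf "$DIST/license.tar" -C "$DIST/stage" LICENSE.txt

# The copy-then-extract step from the procedure above:
cp "$DIST/license.tar" "$INSTALL/license"
cd "$INSTALL/license" && tar -xvf license.tar
cat LICENSE.txt
```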
Procedure
v To start the Topology Editor from the launchpad:
1. If the Install Topology Editor page is not already open, click the Install
Topology Editor option in the list of tasks to open it.
2. On the Install Topology Editor page, click the Start Topology Editor link.
Note: For a non-default installation, you are prompted to enter the path to the
location where the Topology Editor is installed.
v To start the Topology Editor from the command line:
1. Log in as root.
2. Change directory to the directory in which you installed the Topology Editor.
For example:
# cd /opt/IBM/proviso/topologyEditor
3. Enter the following command:
# ./topologyEditor
Note: If your DISPLAY environment variable is not set, the Topology Editor
fails with a Java assertion message (core memory dump).
If you are running the Topology Editor for an AIX 6.1 or AIX 7.1
environment, use the command:
# ./topologyEditor -vm /opt/IBM/proviso/topologyEditor/jre/bin/java
Procedure
1. In the Topology Editor, select Topology > Create new topology.
The New Topology window is displayed.
2. Enter the Number of resources to be managed by Tivoli Netcool Performance
Manager.
Note: When you install with non-default values, that is, non-default user names,
passwords, and locations, it is advised that you check both the Logical view and
Physical view to ensure that they both contain the correct values before you
proceed with the installation.
Each host that you define has an associated property named PV User. PV User is
the default operating system user for all Tivoli Netcool Performance Manager
components.
You can override this setting in the Advanced Properties tab when you set the
deployment properties for individual components (for example, DataMart and
DataView). You can install and run different components on the same system as
different users.
Note: DataChannel components always use the default user that is associated with
the host.
The user account that is used to transfer files by FTP or SCP/SFTP during
installation is always the PV User that is defined at the host level, rather than at
the component level.
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Host from the
menu. The Add Host window opens.
2. Specify the details for the host computer.
The fields are as follows:
v HOSTNAME - Enter the name of the host (for example, delphi).
v Operating System - Specifies the operating system (for example, SOLARIS).
This field is completed for you.
v Database Home - Specifies the default Oracle or DB2 database home
directory for all Tivoli Netcool Performance Manager components that are
installed on the system:
– The default directory for ORACLE_HOME is /opt/oracle/product/12.1.0-
client32
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Multiple Host
from the menu. The Add Multiple Hosts window opens.
2. Add new hosts by typing their names into the Host Name field as a
comma-separated list.
3. Click Next.
4. Configure all added hosts.
With the Configure hosts dialog, you can enter configuration settings and apply
them to one or more of the specified hosts.
To apply configuration settings to one or more of the specified hosts:
a. Enter the appropriate host configuration values. All configuration options
are described in Steps 2, and 3 of the previous process, “Adding the hosts”
on page 155.
b. Select the check box opposite each of the hosts to which you want to apply
the entered values.
c. Click Next. The hosts for which all configuration settings are specified
disappear from the set of selectable hosts.
d. Repeat steps a, b, and c until all hosts are configured.
5. Click Finish.
You define the parameters once, and their values are propagated as needed to the
underlying installation scripts.
Procedure
1. In the Logical view, right-click the Tivoli Netcool Performance Manager
Topology component and select Add Database Configurations from the menu.
The host selection window opens.
2. You must add the Database Configuration component to the same server that
hosts the database server (for example, delphi). Select the appropriate host
using the drop-down list.
3. Click Next to configure the mount points for the database.
4. Add the correct number of mount points.
To add a new mount point, click Add Mount Point. A new, blank row is added
to the window. Fill in the fields as appropriate for the new mount point.
5. Enter the required configuration information for each mount point.
a. Enter the mount point location:
v Mount Point Directory Name, for example:
– /raid_2/oradata
– /raid_2/db2data
Note: The mount point directories can be named using any string, as
required by your organization's naming standards.
v Used for Metadata Tablespaces? (A check mark indicates True.)
v Used for Temporary Tablespaces? (A check mark indicates True.)
v Used for Metric Tablespaces? (A check mark indicates True.)
v Used for System Tablespaces and Redo? (A check mark indicates True.)
b. Click Back to return to the original page.
c. Click Finish to create the component.
The Topology Editor adds the new Database Configurations component to the
Logical view.
6. Highlight the Database Configurations component to display its properties.
Review the property values to make sure they are valid. For the complete list
of properties for this component, see the Properties Reference.
The Database Configurations component has the following subelements:
v Channel tablespace configurations
v Database Channels
v Database Clients configurations
v Tablespace configurations
v Temporary tablespace configurations
Note: Before you install Tivoli Netcool Performance Manager, verify that the
following directory structures are created:
Adding a DataMart
The steps required to add a DataMart component to your topology.
Procedure
1. In the Logical view, right-click the DataMarts folder and select Add DataMart
from the menu. The host selection host window is displayed.
2. Using the drop-down list of available hosts, select the machine on which
DataMart must be installed (for example, delphi).
The scripts are called as needed by the tablespace size-checking routines in Oracle
or DB2 and in Tivoli Netcool Performance Manager, if either routine detects low
disk space conditions on a disk partition that hosts a portion of the Tivoli Netcool
Performance Manager database. By default, both scripts send their notifications by
e-mail to a local login name.
The two files and their installation locations for the two databases are as follows:
Either file, for both databases, can be customized to send its warnings to a
different e-mail address on the local machine, to an SMTP server for transmission
to a remote machine, or to the local network's SNMP fault management system
(that is, to an SNMP trap manager). You can modify either script to send
notifications to an SNMP trap, instead of, or in addition to, its default e-mail
notification.
You can add a discovery server for each DataMart defined in the topology.
Procedure
In the Logical view, right-click the DataMart x folder and select Add Discovery
server from the menu.
The Topology Editor displays the new Discovery Server under the DataMart x
folder in the Logical view.
The inventory files used by the Discovery Server are configuration files named
inventory_elements.txt and inventory_subelements.txt. These files are located in
the $PVMHOME/conf directory of the system where you install the DataMart
component. Some technology packs provide custom sub-elements inventory files
with names different from inventory_subelements.txt that are also used by the
Discovery Server.
Procedure
v Install the primary instance of DataMart and the Discovery Server on one target
host system.
v Install and configure any required technology packs on the primary host. You
modify the contents of the inventory files during this step.
v Install secondary instances of DataMart and the Discovery Server on
corresponding target host systems.
v Replicate the inventory files from the system where the primary instance of
DataMart is running to the $PVMHOME/conf directory on the secondary hosts. You
must also replicate the InventoryHook.sh script that is located in the
$PVMHOME/bin directory and any other files that this script requires.
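The replication step above can be sketched as a small script. This is a hedged illustration: /tmp/dm_primary and /tmp/dm_secondary are local stand-ins for the $PVMHOME directories on the two hosts, and in practice the copy would go over SCP or SFTP rather than a local cp.

```shell
# Local directories standing in for $PVMHOME on the primary and a secondary host.
PRIMARY=/tmp/dm_primary; SECONDARY=/tmp/dm_secondary
mkdir -p "$PRIMARY/conf" "$PRIMARY/bin" "$SECONDARY/conf" "$SECONDARY/bin"
touch "$PRIMARY/conf/inventory_elements.txt" "$PRIMARY/conf/inventory_subelements.txt"
touch "$PRIMARY/bin/InventoryHook.sh"

# Replicate the inventory files and the hook script to the secondary host.
cp "$PRIMARY"/conf/inventory_*.txt "$SECONDARY/conf/"
cp "$PRIMARY/bin/InventoryHook.sh" "$SECONDARY/bin/"
ls "$SECONDARY/conf"
```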
This step runs an asynchronous check for existing Dashboard Application Services
Hub on each selected DataView host. If a Dashboard Application Services Hub is
discovered on a host, the discovered Dashboard Application Services Hub detail is
added to the topology.
Procedure
1. In the Physical view, right-click the Hosts folder and select Add Host from the
menu. Add the host that has an existing Dashboard Application Services Hub
you want to discover.
2. Go to the Logical view, right-click on the DASHs folder and select Import
existing Dashboard Application Services Hubs from host from the menu. The
Run Dashboard Application Services Hub Discovery Wizard Page is
displayed.
3. Select the check box for each host on which you want the Dashboard
Application Services Hub discovery.
4. Enter the value for DASH Installation User and DASH Installation Password.
You can modify this value if you want to do the Dashboard Application
Services Hub installation with non-root user. For example,
DASH_INSTALLATION_USER = ncadmin. For more information, see Installing
as a root user or non-root user.
5. Click Import DASH.
If the discovered Dashboard Application Services Hub is an old version, it is
flagged within the topology for upgrade.
The deployer takes the appropriate action when run. The discovered DASH
status is shown as "[DASH found on: <host name>]".
6. Click Next.
7. Configure Dashboard Application Services Hub properties.
a. Enter the appropriate host configuration values.
TCR_INSTALLATION_DIRECTORY
The directory in which Tivoli Common Reporting is installed.
DASH_INSTALLATION_DIRECTORY
The directory in which DASH is installed.
Note: The Tivoli Netcool Performance Manager user or Proviso user is, for
example, pvuser. In a non-root DASH or Tivoli Common Reporting and
DataView installation, the value of the Tivoli Netcool Performance Manager user
or Proviso user must be the non-root user (for example, ncadmin). For more
information about the Technology Pack/App Pack, refer to Installing and
Configuring Technology Packs.
Adding a DataView
The steps that are required to add DataView components.
Note: To display DataView real-time charts, you must have the Java runtime
environment (JRE) installed on the browser where the charts are to be displayed.
You can download the JRE from the Oracle download page
at http://www.oracle.com/technetwork/java/javase/downloads/index.html
Note: If you are reusing an existing DASH that was installed by a user other than
root, the default deployment of DataView will encounter problems. To avoid these
problems, you must remove the offending DASH from your topology and add
both the DASH and DataView as a separate post deployment step.
Procedure
In the Logical view, right-click on a DASH and select Add DataView from the
menu. The DataView is automatically added inheriting its properties from the
DASH instance.
Note:
v When you install non-root DataView on a remote host, you are asked for the
password of the non-root user.
v Tivoli Netcool Performance Manager Data Provider is installed as part of Tivoli
Netcool Performance Manager 1.4.2 DataView component installation. After you
complete the installation of Tivoli Netcool Performance Manager, see Using Tivoli
Netcool Performance Manager Data Provider 1.4.2 guide to start using the Data
Provider.
Procedure
1. In the Logical view, right-click the DataChannels folder and select Add
Administrative Components from the menu. The host selection window opens.
2. Using the drop-down list of available hosts, select the machine that you want
to be the Channel Manager host for your DataChannel configuration (for
example, corinth).
3. Click Finish.
The Topology Editor adds a set of new components to the Logical view:
Channel Manager
Enables you to start and stop individual DataChannels and monitor the
state of various DataChannel programs. There is one Channel Manager
for the entire DataChannel configuration. The Channel Manager
components are installed on the first host you specify.
CORBA Naming Server
Provides near real-time data to DataView.
High Availability Managers
This is mainly used for large installations that want to use redundant
SNMP collection paths. The HAM constantly monitors the availability
of one or more SNMP collection hosts, and switches collection to a
backup host (called a spare) if a primary host becomes unavailable.
Log Server
Used to store user, debug, and error information.
Plan Builder
Creates the metric data routing and processing plan for the other
components in the DataChannel.
Custom DataChannel properties
These are the custom property values that apply to all DataChannel
components.
Global DataChannel properties
These are the global property values that apply to all DataChannel
components.
Adding a DataChannel
A DataChannel is a software module that receives and processes network statistical
information from both SNMP and non-SNMP (BULK) sources.
This statistical information is then loaded into a database where it can be queried
by SQL applications and captured as raw data or displayed on a portal in a variety
of reports.
Typically, collectors are associated with technology packs, a suite of Tivoli Netcool
Performance Manager programs specific to a particular network device or
technology. A technology pack tells the collector what kind of data to collect on
target devices and how to process that data. See the Technology Pack Installation
Guide for detailed information about technology packs.
Procedure
1. In the Logical view, right-click the DataChannels folder and select Add
DataChannel from the menu. The Configure the DataChannel window is
displayed.
2. Using the drop-down list of available hosts, select the machine that will host
the DataChannel (for example, corinth).
3. Accept the default channel number (for example, 1).
4. Click Finish.
The Topology Editor adds the new DataChannel (for example, DataChannel 1)
to the Logical view.
5. Highlight the DataChannel to display its properties. Note that the DataChannel
always installs and runs as the default user for the host (the Tivoli Netcool
Performance Manager UNIX username, pvuser). Review the other property
values to make sure they are valid. For the complete list of properties for this
component, see the Properties Reference Guide.
The DataChannel has the following subelements:
Daily Loader x
Processes 24 hours of raw data every day, merges it together, then loads
it into the database. The loader process provides statistics on metric
channel tables and metric tablespaces.
Hourly Loader x
Reads files output by the Complex Metric Engine (CME) and loads the
data into the database every hour. The loader process provides statistics
on metric channel tables and metric tablespaces.
The Topology Editor includes the channel number in the element names. For
example, DataChannel 1 would have Daily Loader 1 and File Transfer Engine 1.
Note: When you add DataChannel x, the Problems view shows that the
Input_Components property for the Hourly Loader is blank. This missing value
will automatically be filled in when you add a DataLoad collector (as described
in the next section) and the error will be resolved.
Note: Separating the data and executable directories is only possible during the
first installation activity. After the installation, you cannot modify the topology to
separate the data and the executable directories.
Do the following, if you want to separate the data and executable directories for
your DataChannel:
Procedure
1. Create two directories on the DataChannel host, for example, DATA_DIR to hold
the data and EXE_DIR to hold the executable.
2. Change the LOCAL_ROOT_DIRECTORY value on that host's Disk Usage Server to
the data root folder, DATA_DIR.
The following task assumes that the LDR and DLDR are placed on the current
host (for example, hostname1), and that the subchannel components, the CME and
FTE, are placed on another host (for example, hostname2).
Procedure
1. If it is not already open, open the Topology Editor (see “Starting the Topology
Editor” on page 154).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, select From database and click Next.
4. In the Physical view, add a new host, hostname2, to the downloaded topology.
5. In the Logical view, right-click the DataChannels folder and select Add
DataChannel from the menu. The Configure the DataChannel window is
displayed.
6. Using the drop-down list of available hosts, select the computer that hosts the
DataChannel, hostname1.
7. Accept the default channel number (for example, 1).
8. Click Finish.
The Topology Editor adds the new DataChannel (for example, DataChannel2)
to the Logical view.
9. Right-click on the new DataChannel, DataChannel2, and select Add SNMP
Collector.
10. Select server hostname2 as the host.
The Collector 2.2 is added.
11. Right-click on the Complex Metric Engine 2.2 and choose Change Host.
12. Select server hostname2 as the host.
The File Transfer Engine 2.2 is added to hostname2.
This configuration places the FTE and CME on one server and the LDR and DLDR
on another server.
Adding a Collector
Collectors collect and process raw statistical data about network devices obtained
from various network resources.
The collectors send the received data through a DataChannel for loading into the
Tivoli Netcool Performance Manager database. Note that collectors do not need to
be on the same machine as the database server and DataMart.
Collector types
The collector types and their descriptions, plus the steps that are required
to associate a collector with a technology pack.
Procedure
1. Install Tivoli Netcool Performance Manager, without creating the UBA collector.
2. Download and install the technology pack.
3. Open the deployed topology file to load the technology pack and add the UBA
collector for it.
Note: For detailed information about UBA technology packs and the
installation process, see the Installing and Configuring Technology Packs.
Configure the installed pack by following the instructions in the pack-specific
user's guide.
Restrictions
There are a number of collector restrictions that must be noted.
Procedure
1. Launch DataMart.
2. Click Collector Information from the Monitor tab. This window lists all the
collectors that are loaded from the database.
3. Select a collector from the list, and then click the other tabs to review specific
collector information. The following collector information is available:
Table 17. List of collectors
Column Description
Number Indicates the collector identifier number.
Status Indicates the collector status. For example,
"Running" or "Not Running."
Server Indicates the host name or IP address of the
system on which the collector is installed.
Port Indicates the communication port number
for the collector.
Procedure
1. In the Logical view, right-click the DataChannel x folder.
The pop-up menu lists the following options:
v Add Collector SNMP - Creates an SNMP collector.
v Add Collector UBA - Creates a UBA collector.
v Add Collector BCOL - Creates a BCOL collector. This collector type is used in
custom technology packs. DataMart must be added to the topology before a
BCOL collector can be added.
Select Add Collector SNMP. The Configure Collector window is displayed.
2. Using the drop-down list of available hosts on the Configure Collector window,
select the computer that hosts the collector (for example, corinth).
3. Accept the default collector number (for example, 1).
The default collector number is shown in the collector number property.
However, you do not have to accept the default; you can edit the collector
number and port number as required. The collector number ranges from 1 to
999, of which 500 and 501 are reserved for other purposes. The port number
ranges from 3002 to 3999; by default, port 3003 is used for the watched
process.
4. Repeat steps 2 through 4 for each additional SNMP collector that you want to
add. Ensure that the collector number and port number are unique for each
SNMP collector.
5. Click Finish.
The maximum number of SNMP components (SNMP Collector and SNMP
Spare) on a host is 16.
The Topology Editor displays the new collector under the DataChannel x folder
in the Logical view.
Note: The port number property is not editable in the Topology Editor after it
is set in the Configure Collector window. If you want to change the port
number, you must remove the collector from the topology by right-clicking
it and choosing Remove. Then add the collector back and edit the port in the
Configure Collector window.
Important: Irrespective of the number of collectors that you add, the directory
is created only in /opt/dataload.
Note: For information about the core parameters, see Properties Reference.
Results
The FTE writes data to the file /var/adm/wtmpx on each system that hosts a
collector. As part of routine maintenance, check the size of this file to prevent it
from growing too large.
Note: Your Solaris version can be configured with strict access default settings for
secure environments. Strict FTP access settings might interfere with automatic
transfers between a DataChannel subchannel and the DataLoad server. Check for
FTP lockouts in /etc/ftpd/ftpusers, and check for strict FTP rules in
/etc/ftpd/ftpaccess.
Note: The Topology Editor includes the channel and collector numbers in the
element names. For example, DataChannel 1 could have Collector SNMP 1.1, with
Complex Metric Engine 1.1 and File Transfer Engine 1.1.
If the collector is running, you will see output similar to the following:
pvuser 27118 1 15 10:03:27 pts/4 0:06 /opt/dataload/bin/pvmd -nologo -noherald
/opt/dataload/bin/dc.im -headless -a S
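One hedged way to perform this check from the shell (the pvmd process name is taken from the sample output above; the fallback message is illustrative):

```shell
# Check from the shell whether a DataLoad collector process (pvmd) is running.
# The [p] bracket pattern stops grep from matching its own process entry.
ps -ef | grep '[p]vmd' || echo "collector not running"
```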
Procedure
1. Log in to the server that is running Tivoli Netcool Performance Manager SNMP
DataLoad by entering the username and password that you specified when
installing SNMP DataLoad.
Note: If DataLoad shares the same server as DataMart, make sure you unset
the environment variable by issuing the following command from a BASH shell
command line:
unset PV_PRODUCT
3. Change to the DataLoad bin directory by entering the following command:
cd $PVMHOME/bin
4. Start a DataLoad SNMP collector using the following command:
pvmdmgr start -i <instance_no>
To stop the DataLoad SNMP collector, use the following command:
pvmdmgr stop -i <instance_no>
Note: You will not be able to start or stop the remote collector by using the
PVM user interface. You can use the command line interface instead.
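Because remote collectors can only be managed from the command line, the two pvmdmgr commands above can be wrapped in a small helper. This is an illustrative sketch, not part of the product; the function name is invented:

```shell
# Illustrative helper: restart a DataLoad SNMP collector instance from the
# CLI, since remote collectors cannot be managed from the PVM user interface.
restart_collector() {
    inst="$1"
    pvmdmgr stop -i "$inst"
    pvmdmgr start -i "$inst"
}
```

Usage would be, for example, `restart_collector 2` to bounce instance 2.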
Procedure
1. In the Logical view, right-click the Cross Collector CME folder and select Add
Cross Collector CME from the menu. The Specify the Cross Collector CME
details window is displayed.
2. Using the drop-down list of available hosts, select the machine that hosts the
Cross-Collector CME (for example, corinth).
3. Select the desired Disk Usage Server on the selected host.
4. Select the desired channel number (for example, 1).
5. Click Finish.
The Topology Editor adds the new Cross-Collector CME (for example,
Cross-Collector CME 2000) to the Logical view.
6. Highlight the Cross-Collector CME to display its properties.
Note: The Cross-Collector CME always installs and runs as the default user for
the host (the Tivoli Netcool Performance Manager UNIX username, pvuser).
7. Review the other property values to make sure they are valid. For the complete
list of properties for this component, see the Properties Reference Guide.
8. After running the deployer to install the Cross-Collector CME, you must
restart the CMGR process.
Note: At this point, dccmd start all does not start the Cross-Collector
CME.
9. You must first deploy a formula against the Cross-Collector CME using the
DataChannel frmi tool.
Run the frmi tool. The following is an example command:
frmi ecma_formula.js -labels formula_labels.txt
Where:
v The format of formula_labels.txt is 2 columns separated by an "=" sign.
v The first column is the full path to the formula.
Procedure
1. In the Logical view, right-click the Cross Collector CME folder and select Add
multiple Cross Collectors from the menu. The Add Cross Collector CME
window is displayed.
2. (Optional) Click Add Hosts to add to the set of Cross Collector hosts. Only
hosts that have a DUS can be added.
Note: It is recommended that you have 20 Cross Collector CMEs spread across
the set of topology hosts.
3. Set the number of Cross Collector CMEs for the set of hosts. There are two
ways to do this:
v Click Calculate Defaults to use the wizard to calculate the recommended
spread across the added hosts. This sets the number of Cross Collector CMEs
to the default value.
v To manually set the number of Cross Collector CMEs for each host, use the
drop-down menu opposite each host name.
4. Click Finish.
Procedure
1. In the Topology Editor, select Topology then either Save Topology As or Save
Topology.
Click Browse to navigate to the directory in which to save the file. By default,
the topology is saved as topology.xml in the topologyEditor directory.
2. Accept the default value or choose another name or location, then click OK to
close the file browser window.
3. The file name and path is displayed in the original window. Click Finish to
save the file and close the window.
Note: Until you actually deploy the topology file, you can continue making
changes to it as needed by following the directions in “Opening an existing
topology file.”
See Chapter 8, “Modifying the current deployment,” on page 187 for more
information about making changes to a deployed topology file.
Note: Only when you begin the process of deploying a topology is it saved to
the database. For more information, see “Deploying the topology” on page 174.
To open a topology file that exists but that has not yet been deployed:
Procedure
1. If it is not already open, open the Topology Editor (see “Starting the Topology
Editor” on page 154).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, click local then use Browse to navigate to the correct
directory and file. Once you have selected the file, click OK. The selected file is
displayed in the Open Topology window.
Click Finish.
The topology is displayed in the Topology Editor.
4. Change the topology as needed.
See “Resuming a partially successful first-time installation” on page 177 for more
information about the difference between primary and secondary deployers.
Note: Before you start the deployer, verify that all the database tests have been
performed. Otherwise, the installation might fail. See Chapter 3, “Installing and
configuring the prerequisite software,” on page 67 for more information.
Procedure
Note: When you use the Run menu options (install or uninstall), the deployer uses
the last saved topology file, not the current one. Be sure to save the topology file
before you use a Run command.
Secondary Deployers
A secondary deployer is only required if remote installation using the primary
deployer is not possible.
For more information on why you may need to use a secondary deployer, see
Appendix A, “Remote installation issues,” on page 235.
Procedure
v To run a secondary deployer from the launchpad:
1. On the launchpad, click Start the Deployer.
2. On the Start Deployer page, click the Start Deployer link.
v To run a secondary deployer from the command line:
1. Log in as root.
2. Change to the directory containing the deployer within the downloaded
Tivoli Netcool Performance Manager distribution:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/deployer/
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/deployer/
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/deployer/
<DIST_DIR> is the directory on the hard drive where you copied the contents
of the Tivoli Netcool Performance Manager distribution in “Downloading the
Tivoli Netcool Performance Manager distribution to disk” on page 87.
3. Enter the following command:
# ./deployer.bin
Note: See Appendix D, “Deployer CLI options,” on page 259 for the list of
supported command-line options.
The Deployer checks the operating system version and verifies that the
minimum required packages are installed. The Deployer checks for the files
listed in the relevant check_os.ini file.
Procedure
v To check if the required packages are installed:
1. Click Run > Run Deployer for Installation to start the Deployer.
2. Select the Check prerequisites check box.
3. Click Next.
The check returns a failure if any of the required files are missing.
v To repair a failure:
1. Log in as root.
2. Install the packages listed as missing.
3. (Linux only) If any motif package is listed as missing:
Install the missing motif package and update the package DB using the
command:
# updatedb
4. Rerun the check prerequisites step.
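The deployer's file check can be approximated by hand. The sketch below reports any required files that are missing; the file list here is a placeholder, not the real check_os.ini contents:

```shell
# Sketch: report any required files that are missing, mimicking the deployer's
# prerequisite check. Replace the list with the entries from your check_os.ini.
required="/bin/sh /usr/bin/env"
status=0
for f in $required; do
    if [ ! -e "$f" ]; then
        echo "missing: $f"
        status=1
    fi
done
[ "$status" -eq 0 ] && echo "all prerequisite files present"
```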
The deployer displays a series of pages to guide you through the Tivoli Netcool
Performance Manager installation. The installation steps are displayed in a
table; you can run each step individually or run all the steps at once. For more
information about the deployer interface, see “Primary Deployer” on page 173.
Procedure
1. The deployer opens, displaying a welcome page. Click Next to continue.
2. If you started the deployer from the launchpad or from the command line,
enter the full path to your topology file, or click Choose to navigate to the
correct location. Click Next to continue.
Note: If you start the deployer from within the Topology Editor, this step is
skipped.
The database access window prompts for the security credentials.
3. Enter the host name (for example, delphi) and database administrator
password (for example, PV), and verify the other values (port number, SID, and
user name). If the database does not yet exist, these parameters must match the
values you specified when you created the database configuration component
(see “Adding a database configurations component” on page 157). Click Next
to continue.
4. The node selection window shows the target systems and how the files are
transferred (see “Secondary Deployers” on page 173 for an explanation of this
window). The table has one row for each computer where at least one Tivoli
Netcool Performance Manager component is installed.
The default settings are as follows:
v The Enable check box is selected. If this option is not selected, no actions are
performed on that computer.
v The Check prerequisites check box is not selected. If it is selected, scripts
are run to verify that the prerequisite software is installed.
v Remote execution is enabled, by using both RSH and SSH.
If remote execution cannot be enabled due to a particular customer's
security protocols, see Appendix A, “Remote installation issues,” on page 235
and “Resuming a partially successful first-time installation” on page 177.
v File transfer by using FTP is enabled.
If needed, reset the values as appropriate for your deployment.
Click Next to continue.
5. Provide media location details.
The Tivoli Netcool Performance Manager Media Location for components
window is displayed, listing component and component platform.
a. Click Choose the Proviso Media. You are asked to provide the location of
the media for each component.
b. Enter the base directory in which your media is located. If any of the
component media is not within the specified directory, you are asked to
provide the media location for that component.
6. Click Run All to run all the steps in sequence.
7. The deployer prompts you for the location of the setup files. Use the file
selection window to navigate to the top-level directory for your operating
system to avoid further prompts.
Note: This assumes that the Tivoli Netcool Performance Manager distribution
was downloaded to the folder /var/tmp/cdproviso as per the instructions in
“Downloading the Tivoli Netcool Performance Manager distribution to disk” on
page 87.
If DataView is configured to install on a remote host, the Run Remote DASH
Install step is included. This step prompts you to enter the root password.
The deployer requires this information to run as root on the remote host and
perform the DataView installation.
8. When all the steps are completed successfully, click Done to close the wizard.
Next steps
The steps to perform after deployment.
The next step is to install the technology packs, as described in Installing and
configuring a technology pack.
Once you have created the topology and installed Tivoli Netcool Performance
Manager, it is very easy to make changes to the environment. Simply open the
deployed topology file (loading it from the database), make your changes, and run
the deployer with the updated topology file as input. For more information about
performing incremental installations, see Chapter 8, “Modifying the current
deployment,” on page 187.
Note: After your initial deployment, always load the topology file from the
database to make any additional changes (such as adding or removing a
component), because it reflects the current status of your environment. Once you
have made your changes, you must deploy the updated topology so that it is
propagated to the database. To make any subsequent changes following this
deployment, you must load the topology file from the database again.
The following example shows a cron entry that checks statistics every hour at 30
minutes past the hour. The ForceCollection option is set to N, so that
statistics are calculated only when the internal calendar determines that it is
necessary, and not every hour:
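The cron entry itself did not survive in this copy of the guide. The sketch below shows what such an entry might look like; the dbMgr install path and argument syntax are assumptions, so take the exact invocation from the Database Administration Guide:

```shell
# Hypothetical crontab entry (install path and option syntax are assumptions):
# runs at 30 minutes past every hour; ForceCollection=N defers to the internal
# calendar rather than recalculating statistics on every run.
30 * * * * /opt/datamart/bin/dbMgr analyzeMetaDataTables ForceCollection=N
```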
For more information on dbMgr and the analyzeMetaDataTables command, see the
IBM Tivoli Netcool Performance Manager: Database Administration Guide.
For each new SNMP DataLoad, change the env file of the Tivoli Netcool
Performance Manager user to add the directory with the openssh libcrypto so to
the LD_LIBRARY_PATH (or LIBPATH).
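For example, the addition to the env file might look like the following sketch, where /opt/openssh/lib is a placeholder for the directory that actually contains libcrypto.so on your system:

```shell
# Placeholder path: substitute the directory that contains the openssh
# libcrypto.so on your system. On AIX, set LIBPATH instead of LD_LIBRARY_PATH.
LD_LIBRARY_PATH=/opt/openssh/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
export LD_LIBRARY_PATH
```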
In this scenario, you try deploying a Tivoli Netcool Performance Manager topology
for the first time. You define the topology and start the installation. Although some
of the components of the Tivoli Netcool Performance Manager topology are
installed successfully, the overall installation does not complete successfully.
For example, suppose that during the first installation the database is not
running, so the database check fails. Stop the installation, start the
database, and then resume the installation.
Procedure
1. After you have corrected the problem, restart the deployer by using the
following command:
a. Log in as root.
b. Change directory to the directory that contains the deployer. For example:
# cd /opt/IBM/proviso/deployer
c. Enter the following command:
# ./deployer.bin -Daction=resume
If you use the resume option, you can resume the installation exactly where you
left off.
Note: If you are asked to select a topology file to resume your installation,
select the topology file that you saved before you began the installation.
2. The deployer opens, displaying a welcome page. Click Next to continue.
3. Accept the default location of the base installation directory of the
database JDBC driver:
Oracle /opt/oracle/product/12.1.0-client32/jdbc/lib
DB2 /opt/db2/product/10.1.0/java
Overview
A minimal deployment installation is used primarily for demonstration or
evaluation purposes, and installs the product on the smallest number of machines
possible, with minimal user input.
For detailed information, see Chapter 3, “Installing and configuring the prerequisite
software,” on page 67.
Note: Before you start the installation, verify that all the database tests have been
performed. Otherwise, the installation might fail. See Chapter 3, “Installing and
configuring the prerequisite software,” on page 67 for information about tnsping.
If you wish to specify a different day, you must change the FIRST_WEEK_DAY
parameter in the Database Registry using the dbRegEdit utility. This parameter can
only be changed when you first deploy the topology that installs your Tivoli
Netcool Performance Manager environment, and it must be changed BEFORE the
Database Channel is installed. For more information, see the Tivoli Netcool
Performance Manager Database Administration Guide.
Procedure
1. Log in as root.
2. Set and export the DISPLAY variable.
See “Setting up a remote X Window display” on page 70.
3. Set and export the BROWSER variable to point to your Web browser. For
example:
On Solaris systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
On AIX systems:
# BROWSER=/usr/mozilla/firefox/firefox
# export BROWSER
On Linux systems:
# BROWSER=/usr/bin/firefox
# export BROWSER
Note: The BROWSER command cannot include any spaces around the equal
sign.
4. Change directory to the directory where the launchpad resides.
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS
On AIX systems:
# cd <DIST_DIR>/proviso/AIX
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL
<DIST_DIR> is the directory on the hard drive where you copied the contents of
the Tivoli Netcool Performance Manager distribution.
For more information see “Downloading the Tivoli Netcool Performance
Manager distribution to disk” on page 87.
5. Enter the following command to start the Launchpad:
# ./launchpad.sh
Procedure
1. On the launchpad, click the Install Tivoli Netcool Performance Manager for
Minimal Deployment option in the list of tasks, then click the Install Tivoli
Netcool Performance Manager 1.4.2 for Minimal Deployment link to start the
deployer.
Alternatively, you can start the deployer from the command line, as follows:
a. Log in as root.
b. Set and export your DISPLAY variable (see “Setting up a remote X
Window display” on page 70).
c. Change directory to the directory that contains the deployer:
On Solaris systems:
# cd <DIST_DIR>/proviso/SOLARIS/Install/SOL10/deployer
On AIX systems:
# cd <DIST_DIR>/proviso/AIX/Install/deployer
On Linux systems:
# cd <DIST_DIR>/proviso/RHEL/Install/deployer
d. Enter the following command:
# ./deployer.bin -Daction=poc -DPrimary=true
2. The deployer opens, displaying a welcome page. Click Next to continue.
3. Accept the terms of the license agreement, then click Next.
4. Accept the default location of the base installation directory of the JDBC
driver:
Oracle /opt/oracle/product/12.1.0-client32/jdbc/lib
DB2 /opt/db2/product/10.1.0/java
Base
The base directory for the Oracle installation (for example,
/opt/oracle). Accept the provided path or click Choose to navigate to
another directory.
Database Home
The root directory of the database (for example,
/opt/oracle/product/12.1.0-client32).
Procedure
1. Starts the DataChannel.
2. Starts the DataLoad SNMP Collector, if it is not already running.
3. Creates a DataView user named tnpm.
4. Gives the poc user permission to view reports under the NOC Reporting group,
with the default password of tnpm.
Results
Next steps
The steps to be performed following the deployment of your system.
When the installation is complete, you are ready to perform the final configuration
tasks that enable you to view reports on the health of your network. These steps
are documented in detail in the Tivoli Netcool Performance Manager
documentation set.
For each new SNMP DataLoad, change the env file of the Tivoli Netcool
Performance Manager user to add the directory with the openssh libcrypto.so to
the LD_LIBRARY_PATH (or LIBPATH).
Before beginning the installation, you must download both the Technology Pack
Installer and the MIB-II jar files.
Procedure
v The product distribution site: https://www-112.ibm.com/software/howtobuy/
softwareandservices
Located on the product distribution site are the ProvisoPackInstaller.jar file,
the bundled jar file, and individual stand-alone technology pack jar files.
v (Optional) The Tivoli Netcool Performance Manager CD distribution, which
contains the ProvisoPackInstaller.jar file and the jar files for the Starter Kit
components.
See your IBM customer representative for more information about obtaining
software.
Note: You must run the updated topology through the deployer in order for your
changes to take effect.
The only supported migration path is from Solaris to AIX. There is no support for
any other platform migration scenario.
You retrieve the topology, modify it, then pass the updated data to the deployer.
When the installation is complete, the deployer stores the revised topology data in
the database.
Procedure
1. If it is not already open, open the Topology Editor (see “Starting the Topology
Editor” on page 154).
2. In the Topology Editor, select Topology > Open existing topology. The Open
Topology window is displayed.
3. For the topology source, select From database and click Next.
4. Verify that all of the fields for the database connection are filled in with the
correct values:
Database hostname
The name of the database host. The default value is localhost.
Port
The port number used for communication with the database. The
default value is 1521 for Oracle and 60000 for DB2.
Database user
The user name used to access the database. The default value is
PV_INSTALL.
Database Password
The password for the database user account. For example, PV.
DB Name
Results
The topology is retrieved from the database and is displayed in the Topology
Editor.
Note: If you add a collector to a topology that has already been
deployed, you must manually bounce the DataChannel management
components (cnsw, logw, cmgrw, amgrw). For more information, see “Manually
starting the Channel Manager programs” on page 244.
v “Creating and adding multiple SNMP collectors” on page 168
v “Adding a Discovery Server” on page 160
6. The new component is displayed in the Logical view of the Topology Editor.
7. Save the updated topology. You must save the topology after you add the
component and before you run the deployer. This step is not optional.
8. Run the deployer (see “Starting the Deployer” on page 172), passing the
updated topology as input.
The deployer can determine that most of the components described in the
topology are already installed, and installs only the new component.
9. When the installation ends successfully, the deployer uploads the updated
topology into the database.
For information about removing a component from the Tivoli Netcool
Performance Manager environment, see “Removing a component from the
topology” on page 225.
Example
In this example, you update the installed version of Tivoli Netcool Performance
Manager to add a new DataChannel and two SNMP DataLoaders to the existing
system.
You set the configuration information using the Topology Editor. As with the other
components, if you make changes to the configuration values, you must pass the
updated topology data to the deployer to have the changes propagated to both the
environment and the database.
Note: After the updated configuration has been stored in the database, you must
manually start, stop, or bounce the affected DataChannel component to have your
changes take effect.
You can move all components between hosts when they have not yet been
installed and are in the configured state. You can move SNMP and UBA collectors
when they are in the configured state or after they have been deployed and are in
the installed state.
If the component in the topology has not yet been deployed and is in the
configured state, the Topology Editor provides a Change Host option in the
pop-up menu when you click the component name in the Logical view. This
option allows you to change the host associated with the component prior to
deployment.
If the component is an SNMP or UBA collector that was previously deployed and
is in the installed state, the Topology Editor provides a Migrate option in the
pop-up menu.
For instructions on moving deployed SNMP and UBA collectors after deployment,
see “Moving a deployed collector to a different host.” For instructions on moving
components that have not yet been deployed, see the information below.
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
that includes the component's current host (see “Starting the Topology Editor”
on page 154 and “Opening a deployed topology” on page 187).
2. In the Logical view, navigate to the name of the component to move.
3. Right-click the component name, then click Change Host from the pop-up
menu.
The Migrate Component dialog appears, containing a drop-down list of hosts
where you can move the component.
4. Select the name of the new host from the list, then click Finish.
The name of the new host appears in the Properties tab.
After you move a collector to a new host, it may take up to an hour for the change
to be registered in the database.
Note: To avoid the loss of collected data, leave the collector running on the
original host until you complete Step 7 on page 108.
To move a deployed SNMP collector to a different host, follow these steps:
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
that includes the collector's current host (see “Starting the Topology Editor” on
page 154 and “Opening a deployed topology” on page 187).
2. In the Logical view, navigate to the name of the collector to move. For
example, if you are moving collector SNMP 1.1, navigate as follows:
DataChannels > DataChannel 1 > Collector 1.1 > Collector SNMP.1.1
3. Right-click the collector name (for example, Collector SNMP 1.1), then click
Migrate from the pop-up menu.
Note: If you are moving a collector that has not been deployed, select Change
host from the pop-up menu (Migrate is grayed out). After the Migrate Collector
dialog appears, continue with the steps below.
4. Select the name of the new host from the list, then click Finish.
In the Physical view, the status of the collector on the new host is Configured.
The status of the collector on the original host is To be uninstalled. You will
remove the collector from the original host in Step 9.
Note: If you are migrating a collector that has not been deployed, the name of
the original host is automatically removed from the Physical view.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input. For more information on running the deployer, see
“Starting the Deployer” on page 172.
The deployer installs the collector on the new host and starts it.
Note: Both collectors are now collecting data - the original collector on the
original host, and the new collector on the new host.
7. Before continuing with the steps below, note the current time, and wait until a
time period equivalent to two of the collector's collection periods elapses.
Doing so guards against data loss between collections on the original host and
the start of collections on the new host.
Because data collection on the new host is likely to begin sometime after the
first collection period begins, the data collected during the first collection
period will likely be incomplete. By waiting for two collection time periods to
elapse, you can be confident that data for one full collection period will be
collected.
The default collection period is 15 to 30 minutes. You can find the collection
period for the sub-element, sub-element group, or collection formula associated
with the collector in the DataMart Request Editor. For information on viewing
and setting a collection period, see the Configuring and Operating DataMart.
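The waiting rule amounts to simple arithmetic; for example, with a 30-minute collection period (an example value, since the period is configurable):

```shell
# Two full collection periods must elapse before you uninstall the original
# collector; with a 30-minute period (example value) that is a 60-minute wait.
period_min=30
wait_min=$((2 * period_min))
echo "wait at least ${wait_min} minutes before removing the original collector"
```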
8. Bounce the FTE for the collector on the collector's new host, as in the following
example:
./dccmd bounce FTE.1.1
The FTE now recognizes the collector's configuration on the new host, and will
begin retrieving data from the collector's output directory on the new host.
9. In the current Topology Editor session, remove the collector from the
original host, save the topology, and then click Run > Run Deployer for
Uninstallation.
Note: This step is not necessary if you are moving a collector that has not been
deployed.
To move a deployed SNMP collector to a different host, follow these steps:
Note: If you are moving a collector that has not been deployed, select Change
host from the pop-up menu (Migrate is grayed out). After the Migrate Collector
dialog appears, continue with the steps below.
4. Select the name of the new host from the list, and then click Finish. In the
Physical view, the status of the collector on the new host is Configured.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input. For more information on running the deployer, see
“Starting the Deployer” on page 172.
The deployer installs the collector on the new host and starts it.
Note: Both collectors are now collecting data - the original collector on the
original host, and the new collector on the new host.
7. Bounce the FTE for the collector on the collector's new host, as in the following
example:
./dccmd bounce FTE.1.1
The FTE now recognizes the collector's configuration on the new host, and will
begin retrieving data from the collector's output directory on the new host.
Note: There is no need to click Run Deployer for Uninstallation in this case.
Procedure
1. Move the collector as described in “Moving a deployed SNMP collector
(scenario 1)” on page 191.
Note: If you are moving a spare collector out of the HAM environment, the
navigation path is different than the path shown in Step 2 of the above
instructions. For example, suppose you have a single HAM environment with a
cluster MyCluster on host MyHost, and you are moving the second SNMP
spare out of the HAM. The navigation path to the spare would be as follows:
Note: You cannot move BCOL collectors, or UBA collectors that have a BLB or
QCIF subcomponent. If you want to move a UBA collector that has these
subcomponents, you must manually remove it from the old host in the topology
and then add it to the new host.
Procedure
1. Log in as pvuser to the DataChannel host where the UBA collector is running.
2. Change to the directory where DataChannel is installed. For example:
cd /opt/datachannel
3. Source the DataChannel environment:
. dataChannel.env
4. Stop the collector's UBA and FTE components. For example, to stop these
components for UBA collector 1.1, run the following commands:
dccmd stop UBA.1.1
and...
dccmd stop FTE.1.1
Note: This step is not necessary if the collector's current host and the new
host share a file system.
5. Archive the collector's UBA directory, as in the following example:
tar -cvf UBA_1_1.tar ./UBA.1.1/*
Note: If the UBA collector was the only DataChannel component on the
original host, the collector will be listed under that host,
and its status will be "To be uninstalled." You can remove the DataChannel
installation from the original host after you finish the steps below. For
information on removing DataChannel from the host, see “Removing a
component from the topology” on page 225.
10. Click Topology > Save Topology to save the topology.
11. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input. For more information on running the deployer, see
“Starting the Deployer” on page 172.
If DataChannel is not already installed on the new host, this step installs it.
12. Click Run > Run Deployer for Uninstallation to remove the collector from
the original host, passing the updated topology as input. For more
information, see “Removing a component from the topology” on page 225.
13. Copy any directory you tarred in Step 5 and the associated JavaScript files to
the new host.
Note: This step is not necessary if the collector's original host and the new
host share a file system.
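Where the hosts do not share a file system, the copy in Step 13 is typically an scp transfer followed by a tar extraction. The sketch below simulates the round trip locally under /tmp; all paths are illustrative, and the scp line is shown only as a comment.

```shell
# Simulate archiving on the original host (as in Step 5) and
# extracting on the new host. All /tmp paths are illustrative.
mkdir -p /tmp/uba_demo/UBA.1.1
echo "sample state" > /tmp/uba_demo/UBA.1.1/state.dat
cd /tmp/uba_demo
tar -cf UBA_1_1.tar ./UBA.1.1
# scp UBA_1_1.tar pvuser@newhost:/opt/datachannel   # real transfer step
mkdir -p /tmp/uba_demo/newhost
tar -xf UBA_1_1.tar -C /tmp/uba_demo/newhost
ls /tmp/uba_demo/newhost/UBA.1.1
```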
Note: If AMGR is not running on the new host, do not continue. Verify that
you have performed the preceding steps correctly.
18. Start the collector's UBA and FTE components on the new host. For example,
to start these components for collector 1.1, run the following commands:
./dccmd start UBA.1.1
and...
./dccmd start FTE.1.1
Note: If any pack-specific components were shut down on the old host (see
Step 4), you must also start those components on the new host.
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
(see “Starting the Topology Editor” on page 154 and “Opening a deployed
topology” on page 187).
2. In the Logical view, navigate to the collector.
3. Highlight the collector to view its properties.
The Topology Editor displays both the collector core parameters and the
technology pack-specific parameters.
4. Edit the port parameter, SERVICE_PORT, in the list, and then click Finish.
5. Click Topology > Save Topology to save the topology data.
6. Click Run > Run Deployer for Installation to run the deployer, passing the
updated topology as input.
7. When deployment is complete, log onto the server hosting the collector.
8. Log in as the Tivoli Netcool Performance Manager UNIX user, pvuser, on the
collector's host.
9. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
10. Source the DataLoad environment:
. ./dataLoad.env
11. Stop the SNMP collector:
pvmdmgr stop
12. Edit the file dataLoad.env and set the field DL_ADMIN_TCP_PORT.
For example:
DL_ADMIN_TCP_PORT=8800
13. Source the DataLoad environment again:
. ./dataLoad.env
14. Start the SNMP collector:
pvmdmgr start
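The edit in Step 12 can also be scripted. The following hedged sketch updates DL_ADMIN_TCP_PORT in a throwaway copy of dataLoad.env under /tmp; on a real system you would edit the installed dataLoad.env (for example, /opt/dataload/dataLoad.env) between pvmdmgr stop and pvmdmgr start. The existing value 3002 is an assumption.

```shell
# Work on a scratch copy; the original value 3002 is an assumption.
cat > /tmp/dataLoad_port.env <<'EOF'
DL_ADMIN_TCP_PORT=3002
EOF
# Set the new administrative TCP port (as in Step 12).
sed -i 's/^DL_ADMIN_TCP_PORT=.*/DL_ADMIN_TCP_PORT=8800/' /tmp/dataLoad_port.env
grep '^DL_ADMIN_TCP_PORT=' /tmp/dataLoad_port.env
```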
The Dashboard Application Services Hub-specific ports that are defined and used
to build the topology.xml file are as follows:
WAS_WC_defaulthost 16310
COGNOS_CONTENT_DATABASE_PORT 1557
IAGLOBAL_LDAP_PORT 389
Procedure
1. Create a properties file containing values, such as the host name, that match
your environment. The example properties file below uses default values;
modify them to match your environment. Save the file in any location.
WAS_HOME=C:/IBM/JazzSM
was.install.root=C:/IBM/JazzSM
profileName=DASHProfile
profilePath=C:/IBM/JazzSM/profiles
templatePath=C:/IBM/JazzSM/profileTemplates/default
nodeName=DASHNode
cellName=DASHCell
hostName=your_JazzSM_host
portsFile=C:/IBM/JazzSM/properties/DASHPortDef.properties
2. Edit the JazzSM_install_dir/properties/DASHPortDef.properties file to
contain the desired port numbers.
3. Stop the Jazz for Service Management server by navigating to the directory in
the command-line interface. For more information, see
Stopping Jazz for Service Management application servers.
Important: To stop the server, you must log in with the same user that you
used to install Tivoli Common Reporting.
4. In the command-line interface, navigate to the TCR_install_dir/bin directory.
5. Run the following command:
ws_ant.bat -propertyfile C:/temp/tcrwas.props
-file "C:/IBM/JazzSM/profileTemplates/default/actions/updatePorts.ant"
a. Open IBM Cognos Configuration by running the appropriate script for your
platform:
TCR_component_dir\cognos\bin\tcr_cogconfig.bat
TCR_install_dir/cognos/bin/tcr_cogconfig.sh
b. In the Environment section, change the port numbers to the desired values,
as in Step 2.
c. Save your settings and close IBM Cognos Configuration.
Important: To start the server, you must log in with the same user that you
used to install Jazz for Service Management.
Port assignments
The application server requires a set of sequentially numbered ports.
The sequence of ports is supplied during installation in the response file. The
installer checks that the number of required ports (starting with the initial port
value) are available before assigning them. If one of the ports in the sequence is
already in use, the installer automatically terminates the installation process and
you must specify a different range of ports in the response file.
The profile of the application server is available as a text file on the computer
where it is installed.
Procedure
1. Locate the /opt/IBM/JazzSM/profile/logs directory.
2. Open AboutThisProfile.txt in a text editor.
Example
Overview
The High Availability Manager (HAM) is an optional component for large
installations that want to use redundant SNMP collection paths.
The HAM constantly monitors the availability of one or more SNMP collection
hosts, and switches collection to a backup host (called a spare) if a primary host
becomes unavailable.
The following figure shows a simple HAM configuration with one primary host
and one spare. In the panel on the left, the primary host is operating normally.
SNMP data is being collected from the network and channeled to the primary host.
In the panel on the right, the HAM has detected that the primary host is
unavailable, so it dynamically unbinds the collection path from the primary host
and binds it to the spare.
A collector has two basic parts: the collector process running on the host computer,
and the collector profile that defines the collector's properties.
A collector that is not part of a HAM environment is static - that is, the collector
process and the collector profile are inseparable. But in a HAM environment, the
collector process and collector profile are managed as separate entities. This means
that if a collector process is unavailable (due to a collector process crash or a host
machine outage), the HAM can dynamically reconfigure the collector, allowing
data collection to continue. The HAM does so by unbinding the collector profile
from the unavailable collector process on the primary host, and then binding the
collector profile to a collector process on a backup (spare) host.
Note: It may take several minutes for the HAM to reconfigure a collector,
depending on the amount of data being collected.
When you set up a HAM configuration in the Topology Editor, you manage the
two parts of a collector - the collector process and the collector profile - through
the following folders in the Logical view:
Collector Processes
A collector process is a UNIX process representing a runtime instance of a
collector. A collector process is identified by the name of the host where
the process is running and by the collector process port (typically, 3002). A
host can have just one SNMP collector process.
Managed Definitions
A managed definition identifies a collector profile through the unique
collector number defined in the profile.
Every managed definition has a default binding to a host and to the
collector process on that host. The default host and collector process are
called the managed definition's primary host and collector process. A host
that you designate as a spare host has a collector process but no default
managed definition.
The following figure shows the parts of a collector that you manage through the
Collector Process and Managed Definition folders. In the figure, the HAM
dynamically unbinds the collector profile from the collector process on the primary
host, and then binds the profile to the collector process on the spare. This dynamic
re-binding of the collector is accomplished when the HAM binds the managed
definition - in this case, represented by the unique collector ID, Collector 1 - to the
collector process on the spare.
A cluster is a logical grouping of hosts and collector processes that are managed by
a HAM.
The use of multiple clusters is optional. Whether you use multiple clusters or just
one has no effect on the operation of the HAM. Clusters simply give you a way to
separate one group of collectors from another, so that you can better deploy and
manage your primary and spare collectors in a way that is appropriate for your
needs.
Multiple clusters may be useful if you have a large number of SNMP collector
hosts to manage, or if the hosts are located in various geographic areas.
The clusters in a given HAM environment are distinct from one another. In other
words, the HAM cannot bind a managed definition in one cluster to a collector
process in another.
The cluster can have as few as two hosts - one primary and one spare. Or, it can
have multiple primary hosts with one or more spares ready to replace primary
hosts that become unavailable.
When the HAM binds a managed definition to a spare (either a designated spare
or a floating spare), the spare becomes an active component of the collector. It
remains so unless you explicitly reassign the managed definition back to its
primary host or to another available host in the HAM cluster. This is an important
fact to consider when you plan the hosts to include in a HAM cluster.
Note: IBM recommends that all the primaries in a cluster be of the same
type - either all floating spares or no floating spares.
1 + 1, fixed spare
A fixed spare cluster with one primary host and one designated spare.
The figure below shows a fixed spare cluster with one primary host and one
designated spare:
v In the panel on the left, Primary1 is functioning normally. The designated spare
is idle.
v In the panel on the right, Primary1 experiences an outage. The HAM unbinds
the collector from Primary1 and binds it to the designated spare.
v With the spare in use and no other spares in the HAM cluster, failover can no
longer occur - even after Primary1 returns to service. For failover to be possible
again, you must reassign Collector 1 to Primary1. This idles the collector process
on the spare, making it available for the next failover operation if Primary1 fails
again.
Note: When a designated spare serves as the only spare for a single primary, as in
a 1+1 fixed spare cluster, the HAM pre-loads the primary's collector definition on
the spare. This results in a fast failover with a likely loss of no more than one
collection cycle.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Designated spare
The figure below shows a fixed spare cluster with two primary hosts and one
designated spare:
v In the panel on the left, Primary1 and Primary2 are functioning normally. The
designated spare is idle.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds
the collector from Primary2 and binds it to the designated spare.
v With the spare in use and no other spares in the HAM cluster, failover can no
longer occur - even after Primary2 returns to service. For failover to be possible
again, you must reassign Collector 2 to Primary2. This idles the collector process
on the spare, making it available for the next failover operation.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Designated spare
Collector 2 Primary2 (default binding) Primary1
Designated spare
Note: Because of multi-collector functionality, you can also add Collector 1 and
Collector 2 to the same primary host.
The figure below shows a floating spare cluster with two primary hosts and one
designated spare, with each primary configured as a floating spare:
v In the panel on the left, Primary1 and Primary2 are functioning normally. The
designated spare is idle.
v In the panel on the right, Primary2 experiences an outage. The HAM unbinds
the collector from Primary2 and binds it to the designated spare.
v When Primary2 returns to service, it will assume the role of spare, meaning its
collector process remains idle. The host originally defined as the dedicated spare
continues as the active platform for Collector 2.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Primary2
Designated spare
Collector 2 Primary1 -
Designated spare
3+2, fixed spares
A fixed spare cluster with three primary hosts and two designated spares.
The figure below shows a fixed spare cluster with three primary hosts and two
designated spares:
v In the panel on the left, all three primaries are functioning normally. The
designated spares are idle.
Note: Each managed definition sets its own failover priority. Failover priority
can be defined differently in different managed definitions.
v With one spare in use and one other spare available (Designated Spare 1),
failover is now limited to the one available spare - even after Primary3 returns
to service. For dual failover to be possible again, you must reassign Collector 3
to Primary3.
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Designated Spare 2
Collector 2 Primary2 (default binding) Primary1
Designated Spare 2
Collector 3 Primary3 (default binding) Primary1
Designated Spare 2
The following table shows the bindings that the HAM can and cannot make in this
cluster:
Primary2
Primary3
Designated Spare 1
Designated Spare 2
Collector 2 Primary1 -
Primary3
Designated Spare 1
Designated Spare 2
Primary2
Designated Spare 1
Designated Spare 2
The SNMP collector is state-based and designed both to perform initialization and
termination actions, and to "change state" in response to events generated by the
HAM or as a result of internally-generated events (like a timeout, for example).
The following table lists the events that the SNMP collector understands and
indicates whether they can be generated by the HAM.
The SNMP collector can reside in one of the following states, as shown in the
following table:
The following state diagram shows how the SNMP collector transitions through its
various states depending upon events or time-outs:
How failover works with the HAM and the SNMP collector
The following tables illustrate how the HAM communicates with the SNMP
collectors during failover for a 1+1 cluster and a 2+1 cluster.
Because more than one physical system may produce SNMP collections, the File
Transfer Engine (FTE) must check every capable system for a specific profile. The
FTE retrieves all output for the specific profile. Any duplicated collections are
reconciled by the Complex Metrics Engine (CME).
To obtain status on the SNMP collectors managed by the HAM, enter the following
command on the command line:
$ dccmd status HAM.<hostname>.1
For a 1-to-1 failover configuration, the dccmd command might return output like
the following:
$ dccmd status HAM.SERVER.1
COMPONENT APPLICATION HOST STATUS ES DURATION EXTENDED STATUS
HAM.SERVER.1 HAM SERVER running 10010 1.1 Ok: (box1:3002 ->
Running 1.1 for 5h2m26s); 1 avail spare: (box2:3002 -> Ready 1.1)
This preceding output shows that Collector 1.1 is in a Running state on Box1, and
that the Collector on Box2 is in a Ready state, with the profile for Collector 1.1
loaded.
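A quick scripted check of this status output is sketched below. The status line here is a saved sample mirroring the output above; in practice you would capture STATUS from the dccmd status command instead.

```shell
# Sample line copied from the output above; on a live system capture it with:
# STATUS=$(dccmd status HAM.SERVER.1 | tail -1)
STATUS='HAM.SERVER.1 HAM SERVER running 10010 1.1 Ok: (box1:3002 -> Running 1.1 for 5h2m26s)'
case "$STATUS" in
  *" running "*) echo "HAM is running" ;;
  *)             echo "HAM is NOT running" ;;
esac
```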
This is just one of the many variations a HAM environment can have. The
procedures described in the following sections indicate the specific steps where
you can vary the configuration.
Procedure
1. Install all collectors.
2. Configure and start the HAM.
3. Install all technology packs.
4. Perform the discovery.
A 3+1 HAM cluster requires that you have a topology with the following
minimum components:
v Three hosts, each bound to an SNMP collector. These act as the primary hosts.
You can create a managed definition for each of the primary hosts.
v One additional host that is not bound to an SNMP collector. This acts as the
designated spare.
Procedures
The general procedures for creating a single-cluster HAM with one designated
spare and three floating spares.
Procedure
1. Start the Topology Editor (if it is not already running) and open the topology
where you want to add the HAM (see “Starting the Topology Editor” on page
154 and “Opening a deployed topology” on page 187).
2. In the Logical view, right-click High Availability Managers, located at
DataChannels > Administrative Components.
3. Select Add High Availability Manager from the pop-up menu.
The Add High Availability Manager Wizard appears.
4. In the Available hosts field, select the host where you want to add the HAM.
Note: You can install the HAM on a host where a collector process is installed,
but you cannot install more than one HAM on a host.
5. In the Identifier field, accept the default identifier.
The identifier has the following format:
HAM.<HostName>.<n>
where HostName is the name of the host you selected in Step 4, and n is a
HAM-assigned sequential number, beginning with 1, that uniquely identifies
this HAM from others that may be defined on other hosts.
6. Click Finish.
The HAM identifier appears under the High Availability Managers folder.
7. Right-click the identifier of the HAM you just created.
8. Select Add Cluster from the pop-up menu.
The Add Cluster Monitor Wizard appears.
9. In the Identifier field, type a name for the cluster and click Finish.
The cluster name appears under the HAM identifier folder you added in
Step 6. The following folders appear under the cluster name:
v Collector Processes
v Managed Definitions
Procedure
1. In the Logical view, right-click the Collector Processes folder that you created
in Step 9 of the previous section, “Create the HAM and a HAM cluster” on
page 215.
2. Select Add Collection Process SNMP Spare from the menu.
The Add Collection Process SNMP Spare - Configure Collector Process SNMP
Spare dialog is displayed.
3. In the Available hosts field, select the host that you want to make the
designated spare.
4. The Port field specifies the next available port number for the spare's
collector process. You can modify the port number, and then click Finish.
Under the cluster's Collector Processes folder, the entry Collection Process
SNMP Spare <n> appears, where n is a HAM-assigned sequential number,
beginning with 1, that uniquely identifies this designated spare from others
that may be defined in this cluster.
Note: Repeat Step 1 through Step 4 to add another designated spare to the
cluster.
What to do next
If you make changes to an existing configuration, ensure that the
dataLoad.env file contains the correct settings:
1. Change to the directory where DataLoad is installed. For example:
cd /opt/dataload
2. Source the DataLoad environment:
. ./dataLoad.env
3. Make sure that the DL_HA_MODE field in the dataLoad.env file is set to
DL_HA_MODE=true.
4. Make sure that the DL_HA_MODE_1 field in the dataLoad.env file is set to
DL_HA_MODE_1=true.
5. Source the DataLoad environment again:
. ./dataLoad.env
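Steps 3 and 4 can be verified with grep. The sketch below runs against a scratch copy of dataLoad.env under /tmp with illustrative contents; on a real system, point it at the installed file (for example, /opt/dataload/dataLoad.env).

```shell
# Scratch copy with the expected HA settings (contents illustrative).
cat > /tmp/dataLoad_ha.env <<'EOF'
DL_HA_MODE=true
DL_HA_MODE_1=true
EOF
# Confirm each field is present and set to true.
for field in DL_HA_MODE DL_HA_MODE_1; do
  if grep -q "^${field}=true$" /tmp/dataLoad_ha.env; then
    echo "${field} is set correctly"
  else
    echo "${field} is missing or not true"
  fi
done
```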
Note: When you add a managed definition to a HAM cluster, the associated
collector process is automatically added to the cluster's Collector Processes folder.
Procedure
1. In the Logical view, right-click the Managed Definitions folder that you
created in “Create the HAM and a HAM cluster” on page 215.
2. Select Add Managed Definition from the pop-up menu.
The Add Managed Definition - Choose Managed Definition dialog appears.
3. In the Collector number field, select the unique collector number to associate
with this managed definition.
4. Click Finish.
The following entries now appear for the cluster:
v Under the cluster's Managed Definitions folder, the entry Managed
Definition <n> appears, where n is the collector number you selected in
Step 3.
v Under the cluster's Collector Processes folder, the entry Collector Process
[HostName] appears, where HostName is the host that will be bound to the
SNMP collector you selected in Step 3. This host is the managed definition's
primary host.
Note: Repeat Step 1 through Step 4 to add another managed definition to the
cluster.
Example
When you finish adding managed definitions for a 3+1 HAM cluster, the Logical
and Physical views might look like the following:
When you create a managed definition, the managed definition's primary host is
the only host in its resource pool. To enable the HAM to bind a managed
definition to other hosts, you must add more hosts to the managed definition's
resource pool.
Procedure
1. Right-click a managed definition in the cluster's Managed Definitions folder.
2. Select Configure Managed Definition from the pop-up menu.
The Configure Managed Definition - Collector Process Selection dialog appears,
as shown below. In this example, the resource pool being configured is for
Managed Definition 1 (that is, the managed definition associated with Collector
1).
Note: You must add at least one of the hosts in the Additional Collector
Processes list to the resource pool.
Since the goal in this example is to configure all primaries as floating spares,
the designated spare and the two primaries (docserver1 and dcsol1a) will be
added to the resource pool.
4. When you are finished checking the hosts to add to the resource pool, click Next.
Note: If you add just one host to the resource pool, the Next button is not
enabled. Click Finish to complete the definition of this resource pool. Return to
Step 1 to define a resource pool for the next managed definition in the cluster,
or skip to “Save and start the HAM” on page 220 if you are finished defining
resource pools.
The Configure Managed Definition - Collector Process Order dialog appears, as
shown below:
Procedure
1. Click Topology > Save Topology to save the topology file containing the HAM
configuration.
2. Run the deployer (see “Starting the Deployer” on page 172), passing the
updated topology file as input.
3. Open a terminal window on the DataChannel host.
4. Log in as pvuser.
5. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default), as follows:
cd /opt/datachannel/bin
6. Bounce (stop and restart) the Channel Manager. For instructions, see Step 15 on
page 111.
7. Run the following command:
dccmd start ham
Typically, one HAM is sufficient to manage all the collectors you require in your
HAM environment. But for performance reasons, very large Tivoli Netcool
Performance Manager deployments involving dozens or hundreds of collector
processes might benefit from more than one HAM environment.
HAM environments are completely separate from one another. A host in one HAM
environment cannot fail over to a host in another HAM environment.
You can also modify the configuration parameters of the HAM components that
are writable. For information on modifying configuration parameters, see
“Changing configuration parameters of existing Tivoli Netcool Performance
Manager components” on page 190.
You can remove HAM components from the environment by right-clicking the
component name and selecting Remove from the pop-up menu. The selected
component and any subcomponents are removed. When you remove an SNMP
spare from a host, ensure that it is an SNMP component and check whether it is
the last SNMP or SNMP Spare collector on the host.
If the collector is the last SNMP collector on that host:
1. Click Remove in the Logical View and save the topology.
2. Click Run > Run Deployer for Uninstallation.
If the collector is not the last SNMP collector on the original host:
1. Click Remove in the Logical View and save the topology.
2. Click Run > Run Deployer for Installation.
3. Set the DL_HA_MODE field in the dataLoad.env file to false.
4. Source the DataLoad environment again:
. ./dataLoad.env
Before you can remove a designated spare (Collection Process SNMP Spare), you
must remove the spare from any resource pools it may belong to. To remove a
designated spare from a resource pool, open the managed definition that contains
the resource pool, and clear the check box next to the name of the designated spare
to remove. For information about managing resource pools, see “Define the
resource pools” on page 218.
If you change the configuration of a HAM or any HAM components, or if you add
or remove an existing collector to or from a HAM environment, you must bounce
(stop and restart) the Tivoli Netcool Performance Manager components that you
changed. This is generally true for all Tivoli Netcool Performance Manager
components that you change, not just the HAM.
To bounce a component:
Procedure
1. Open a terminal window on the DataChannel host.
2. Log in as pvuser.
3. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default), as follows:
cd /opt/datachannel/bin
4. Run the bounce command in the following format:
dccmd bounce <component>
For example:
v To bounce the HAM with the identifier HAM.dcsol1b.1, run:
dccmd bounce ham.dcsol1b.1
v To bounce all HAMs in the topology, run:
dccmd bounce ham.*.*
v To bounce the FTE for collector 1.1 that is managed by a HAM, run:
dccmd bounce fte.1.1
You do not need to bounce the HAM that the FTE and collector are in.
For information on using dccmd, see Tivoli Netcool Performance Manager
Command Line Interface.
5. Bounce the Channel Manager. For instructions, see Step 15.
Procedure
1. Right-click the collector process or managed definition to view.
2. Select Show from the menu.
The Show Collector Process... or Show Managed Definition... dialog is
displayed. The following sections describe the contents of these dialogs.
The following figure shows a collector process configured with three managed
definitions.
The Show Managed Definition... dialog contains the resource pool for a particular
managed definition.
This dialog contains the same information that appears in the Show Collector
Process... dialog, but for multiple hosts instead of just one. As such, this dialog
gives you a broader view of the cluster's configuration than a Show Managed
Definition... dialog.
The following figure shows a managed definition's resource pool configured with
four hosts:
When you perform an uninstall, the "uninstaller" is the same deployer used to
install Tivoli Netcool Performance Manager.
You might have a situation where you have modified a topology by both adding
new components and removing components (marking them "To Be Removed").
However, the deployer can work in only one mode at a time - installation mode or
uninstallation mode. In this situation, first run the deployer in uninstallation mode,
then run it again in installation mode, except when you are uninstalling dataload.
See, “Restrictions and behavior” for Dataload restrictions.
Note: After the deployer has completed an uninstall, you must open the topology
(loaded from the database) in the Topology Editor before performing any
additional operations.
Note: After the components are marked for deletion, the topology must be
consumed by the deployer to propagate the required changes and load the
updated file in the database. When you open the database version of the topology,
the "removed" component disappears from the topology.
To remove one or more components from the topology where the host system no
longer exists or is unreachable on the network, do the following steps:
Note: After the manual uninstall is completed, the DataView instance remains
in the topology; this is not usually the case for uninstalled components.
DataLoad restrictions
v If you are removing SNMP or SNMP Spare collectors from a host, and they are
not the last such collectors on the host, then after you make changes to the
topology, save it and always use the Run Deployer for Installation option.
v If you are removing the last SNMP or SNMP Spare collector, or all of the
SNMP and SNMP Spare collectors on a host, then after you make changes to the
topology, save it and always use the Run Deployer for Uninstallation option.
v You must not remove all SNMP/Spare collectors on a host and add
SNMP/Spare collectors to the same host in one step. If you want to remove and
add collectors on a host, first remove all collectors on the host, and then save
the topology. Run the deployer for uninstallation and reopen the topology from
the database. Add collectors on that host, save the topology, and then run the
deployer for installation.
DataChannel restrictions
v You can remove the DataChannel Administrative Component only after all the
DataChannels have been removed.
v If you are uninstalling a DataChannel component, the component must first be
stopped. If you are uninstalling all DataChannel components on a host, then you
must remove the DataChannel entries from the crontab.
v If you delete a DataChannel or collector, the working directories (such as the
FTE and CME) are not removed; you must delete these directories manually.
v When a Cross-Collector CME (CC-CME) is installed on the system and formulas
are applied against it, removing collectors that the CC-CME depends on is not
supported. This is an exceptional case; if you have not installed a CC-CME,
collectors can be removed.
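The crontab cleanup called out in the DataChannel restrictions above can be sketched as follows. The example works on a saved copy of the crontab for illustration; the /opt/datachannel match string is an assumption, so review the cleaned entries before reinstalling the crontab.

```shell
# Simulated crontab contents; on a real host use: crontab -l > /tmp/crontab.txt
cat > /tmp/crontab.txt <<'EOF'
0 * * * * /opt/datachannel/bin/dc_cleanup.sh
30 2 * * * /usr/bin/backup.sh
EOF
# Drop every DataChannel entry (the match string is an assumption).
grep -v '/opt/datachannel' /tmp/crontab.txt > /tmp/crontab.clean
cat /tmp/crontab.clean
# crontab /tmp/crontab.clean   # uncomment to install the cleaned crontab
```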
DataView restrictions
Uninstall DataView manually if other products are installed in the same Dashboard
Application Services Hub instance. Use the following procedure to uninstall a
DataView component:
1. Run the uninstall command:
/opt/IBM/JazzSM/products/tnpm/dataview/bin/uninstall.sh /opt/IBM/JazzSM
<DASH_administrator_username> <DASH_administrator_password>
2. Remove the DataView directory:
rm -rf /opt/IBM/JazzSM/products/tnpm/dataview
Important: Delete all the existing Tivoli Netcool/OMNIbus Web GUI integration
dashboard pages before you run the uninstallation script. Otherwise, the
uninstallation might fail with the following error message:
GYMVC0001E: Error occurred: com.ibm.ws.scripting.ScriptingException:
com.ibm.ws.scripting.ScriptingException: WASX7418E: Application update for isc failed:
see previous messages for details.
Removing a component
To remove a component from the topology.
Procedure
1. If it is not already open, open the Topology Editor (see “Starting the Topology
Editor” on page 154).
2. Open the existing topology (see “Opening a deployed topology” on page 187).
3. In the Logical view of the Topology Editor, right-click the component you want
to delete and select Remove from the pop-up menu.
4. The editor marks the component as To Be Removed and removes it from the
display.
5. Save the updated topology.
6. Run the deployer (see “Starting the Deployer” on page 172).
Note: If you have not saved the modified topology, the deployer prompts you
to save it first.
The deployer can determine that most of the components described in the
topology file are already installed, and removes the component that is no
longer part of the topology.
7. The deployer displays the installation steps page, which lists the steps required
to remove the component. Note that the name of the component to be removed
includes the suffix "R" (for "Remove"). For example, if you are deleting a
DataChannel, the listed component is DCR.
8. Click Run All to run the steps needed to delete the component.
9. When the installation ends successfully, the deployer uploads the updated
topology file into the database. Click Done to close the wizard.
Note: If you remove a component and redeploy the file, the Topology Editor
view is not refreshed automatically. Reload the topology file from the database
to view the updated topology.
What to do next
Procedure
1. If it is not already open, open the Topology Editor (see “Starting the Topology
Editor” on page 154).
2. Open the existing topology (see “Opening a deployed topology” on page 187).
3. In the Logical view of the Topology Editor, right-click the SNMP collector or
spare collector you want to delete and select Remove from the pop-up menu.
4. Repeat step 3 for all SNMP collectors and spare collectors on the host that you
want to remove.
5. Save the updated topology.
If no SNMP or Spare collectors remain on the host:
6. Save the topology, and then click Run > Run Deployer for Uninstallation.
If SNMP or Spare collectors remain installed on the host after you remove
some collectors:
7. Save the topology, and then click Run > Run Deployer for Installation.
To uninstall Tivoli Netcool Performance Manager, you must have the CD or the
original electronic image. The uninstaller will prompt you for the location of the
image.
Order of uninstall
The order in which you must uninstall components.
For all deployments, you must use the Topology Editor to uninstall the Tivoli
Netcool Performance Manager components in the following order:
Procedure
1. DataLoad and DataChannel
When uninstalling DataChannel from a host, you must run ./dccmd stop all,
disable or delete the DataLoad cron processes, and manually stop (kill -9) any
running channel processes (identified by running findvisual). For more
information about the findvisual command, see Appendix B, “DataChannel
architecture,” on page 239.
2. DataMart
3. DataChannel Administrative Components and any remaining DataChannel
components.
4. DataView
5. Tivoli Netcool Performance Manager Database (remove only after all the other
components have been removed). The database determines the operating
platform of the Tivoli Netcool Performance Manager environment.
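Step 1 can be sketched as a pair of shell helpers. This is a hedged sketch: the /opt/datachannel default path and the crontab filter are assumptions for a default installation, while the _visual process-name convention is taken from the watchdog table in this guide.

```shell
#!/bin/sh
# Sketch of step 1: stop DataChannel before uninstalling.
# DCHOME and the crontab filter are assumptions for a default
# /opt/datachannel installation.
DCHOME=${DCHOME:-/opt/datachannel}

# List PIDs of running channel processes (the ones findvisual reports).
list_channel_pids() {
    ps -ef | awk '/_visual/ && !/awk/ {print $2}'
}

stop_datachannel() {
    # Stop all channel applications through the channel manager.
    "$DCHOME/bin/dccmd" stop all
    # Drop the DataChannel watchdog entries from the pvuser crontab.
    crontab -l | grep -v "$DCHOME" | crontab -
    # Kill any channel processes that are still running.
    for pid in $(list_channel_pids); do
        kill -9 "$pid"
    done
}
```

The helpers are meant to be run as pvuser on each DataChannel host before the deployer is run for Uninstallation.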
Note: An error message might be displayed when you click Done to close the
deployer wizard. You can ignore the error; click OK to close the deployer
wizard.
Note: When you reboot your server, the contents of /tmp might get cleaned out.
v When you run the uninstaller, it finds the components that are marked as
"Installed", marks them as "To Be Removed", then deletes them in order. The
deployer is able to determine the correct steps to be performed. However, if the
component is not in the Installed state (for example, the component was not
started), the Topology Editor deletes the component from the topology - not the
uninstaller.
v When the uninstallation is complete, some data files still remain on the disk. You
must remove these files manually. See “Removing residual files” on page 231 for
the list of files that must be deleted manually.
v If you are not removing the last collector on a host, it is recommended that you
first remove the other components on the host, save the topology, and run the
deployer for Uninstallation. To remove collectors, see “Removing multiple
collectors” on page 228.
v /opt/oracle/product/12.1.0-client32/jdbc/lib
v /opt/db2/product/10.1.0/java
Or click Choose to browse to another directory. Click Next to continue. A
dialog opens, asking whether you want to load a topology from disk.
4. Click No.
A dialog box opens asking for you to enter the details of the updated topology
file.
5. Enter the name of the topology file you updated as described in the "Before
you begin" section.
6. The database access window prompts for the security credentials. Enter the
host name (for example, delphi) and database administrator password (for
example, PV), and verify the other values (port number, SID or DB_NAME, and
user name). Click Next to continue. The topology as stored in the database is
then compared with the topology loaded from the file.
7. The uninstaller displays several status messages, and then displays a message
stating that the environment status was successfully downloaded and saved to
the file /tmp/ProvisoConsumer/Discovery.xml. Click Next to continue.
8. Repeat the process on each system in the deployment.
Note: After the removal of each component by using the Topology Editor, you
must reload the topology from the Database.
Procedure
1. Log in as root.
2. Set and export your DISPLAY variable (see “Setting up a remote X Window
display” on page 70).
3. Change directory to the install_dir/uninstall directory. For example:
# cd /opt/IBM/proviso/uninstall
4. Enter the following command:
#./Uninstall_Topology_Editor
5. The uninstall wizard opens. Click Uninstall to uninstall the Topology Editor.
6. When the script is finished, click Done.
Note: Uninstalling the Topology Editor does not remove the *.txt language files
from /opt/IBM/proviso/license. They must be removed manually.
When you uninstall Tivoli Netcool Performance Manager, some of the files remain
on the disk and must be removed manually. After you exit from the deployer (in
uninstall mode), you must delete these residual files and directories manually.
Procedure
1. Log in as the database instance owner. For example, oracle for Oracle and db2
for DB2.
2. Enter the following command to stop the database:
db2stop force
3. As root, enter the following commands to delete these files and directories:
where:
v $ORACLE_BASE: /opt/oracle
v $DB2_BASE: /opt/db2 and
v $ORACLE_HOME: /opt/oracle/product/12.1.0
v $DB2_HOME: /opt/db2/product/10.1.0
4. Enter the following commands to clear your database mount points and
remove any files in those directories:
Table 21. Commands to clear your Oracle or DB2 mount points
5. Enter the following command to delete the temporary area used by the
deployer:
rm -fr /tmp/ProvisoConsumer
6. Delete the installer file using the following command:
rm /var/.com*
7. Delete the startup file, netpvmd.
Note: The netpvmd startup and stop files are also present in /etc/rc2.d and
/etc/rc3.d as S99netpvmd and K99netpvmd. These files must also be removed.
Note: If you have uninstalled Tivoli Common Reporting on a remote host, the
tcrClean.sh file is sent by using FTP to the remote host for execution.
A remote installation refers to installation on any host that is not the primary
deployer, that is, the host running the Topology Editor. For some systems, security
settings may not allow for components to be installed remotely. Before deploying
on such a system, you must be familiar with the information in this appendix.
A remote host might not support FTP or the remote execution of files.
Procedure
v Option 1:
1. Unselect the Remote Command Execution option during the installation. The
deployer creates and transfers the directory with the required component
package in it.
2. As root, log in to the remote system and manually run the run.sh script.
v Option 2:
Follow the directions outlined in “Installing on a remote host by using a
secondary deployer” on page 236.
For any remote host where neither FTP nor REXEC or RSH is possible, deploy
the required component or components by using the following steps.
Procedure
v Option 1:
1. Unselect the FTP option during the installation. The deployer creates a
directory containing the required component package.
2. Copy the required component directory to the target system.
3. As root, log in to the remote system and manually run the run.sh script.
v Option 2:
Follow the directions outlined in “Installing on a remote host by using a
secondary deployer.”
A secondary deployer is used when the host you want to install on does not
support remote installation.
The following steps describe how to install a Tivoli Netcool Performance Manager
component by using a secondary deployer. For clarity, these steps name the
primary deployer host delphi, and the host on which you install a component by
using the secondary deployer corinth.
Procedure
1. Copy the Tivoli Netcool Performance Manager distribution to the server on
which you would like to set up the secondary deployer, that is, copy the
distribution to corinth. For more information about copying the Tivoli Netcool
Performance Manager distribution to a server, see “Downloading the Tivoli
Netcool Performance Manager distribution to disk” on page 87.
2. Open the Topology Editor on the primary deployer host, that is, on delphi, and
add the remote component to the topology definition.
You normally complete this task when you create your original topology
definition. If you have already added the remote component to your topology
definition, skip to the next step.
3. Deploy the new topology that contains the added component by using the
Topology Editor. You can do this by clicking Run > Run Deployer for
Installation. This pushes the edited topology to the database.
4. Do the following to open the deployer on corinth:
Note: The secondary deployer reads the topology data and knows from step 3
that the required component is still to be installed on corinth.
5. Follow the on-screen instructions to install the required component.
Note: You cannot start the deployer simultaneously from two different hosts.
Only one deployer can be active at one time.
Data collection
DataChannel data collection.
The FTE, DLDR, LDR, and PBL components are assigned to each configured
DataChannel. The FTE and CME components are assigned to one or more
Collector subchannels.
Data is moved from one channel component to another as files. These files are
written to and read from staging directories between each component. Within each
staging directory there are subdirectories named do, output, and done. The do
subdirectory contains files that are waiting to be processed by a channel
component. The output subdirectory stores data for the next channel component to
work on. After files are processed, they are moved to the done directory. All file
movement is accomplished by the FTE component.
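The staging flow described above can be inspected with a small helper script. This is a sketch: the do/output/done names follow the description above, while any concrete component path (such as /opt/datachannel/FTE.1.1) is an installation-specific assumption.

```shell
#!/bin/sh
# Sketch: report how many files sit in each staging subdirectory of
# one channel component directory (do = waiting, output = for the next
# component, done = already processed).
staging_counts() {
    comp_dir=$1
    for sub in do output done; do
        # Count regular files in each staging subdirectory.
        n=$(find "$comp_dir/$sub" -type f 2>/dev/null | wc -l)
        echo "$sub: $((n))"
    done
}
```

For example, `staging_counts /opt/datachannel/FTE.1.1` prints one count per staging subdirectory, which is a quick way to see whether a component is keeping up.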
Data aggregation
A DataChannel aggregates data collected by collectors for eventual use by
DataView reports.
The DataChannel provides online statistical calculations of raw collected data, and
detects real-time threshold violations.
Aggregations include:
v Resource aggregation for every metric and resource
v Group aggregation for every group
v User-defined aggregation computed from raw data
The following table lists the names and corresponding watchdog scripts for the
DataChannel management programs running on different DataChannel hosts.
Component             Program Executable*   Corresponding Watchdog Script   Notes
Channel Name Server   CNS                   cnsw                            Runs on the host running the Channel Manager.
Log Server            LOG                   logw
Channel Manager       CMGR                  cmgrw
Application Manager   AMGR                  amgrw                           One per subchannel host and one on the Channel Manager host.
* The actual component's executable file seen in the output of ps -ef is named
XXX_visual, where XXX is an entry in this column. For example, the file running for
CMGR is seen as CMGR_visual.
The watchdog scripts run every few minutes from cron. Their function is to
monitor their corresponding management component, and to restart it if necessary.
You can add watchdog scripts for the Channel Manager programs to the crontab
for the pvuser on each host on which you installed a DataChannel component.
On such hosts, this is the only line you need to add to the pvuser crontab.
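The watchdog entry itself is generated during installation, so its exact form varies. A typical entry might look like the following (a hypothetical example only: the five-minute schedule and the /opt/datachannel path are illustrative assumptions, not values taken from a generated configuration):

```shell
# pvuser crontab entry (hypothetical example - the real entry is created
# during installation; path and interval vary by deployment):
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /opt/datachannel/bin/amgrw > /dev/null 2>&1
```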
* The actual application's executable file visible in the output of ps -ef is named
XXX_visual, where XXX is an entry in this column.
Note: For historical reasons, the SNMP DataLoad collector is managed by Tivoli
Netcool Performance Manager DataMart, and does not appear in Table 11.
Procedure
v Verify that the DataChannel management programs are running:
1. Log in as pvuser on each DataChannel host.
2. Change to the DataChannel installation's bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Run the findvisual command:
$ ./findvisual
In the resulting output, look for:
– The AMGR process on every DataChannel host
– The CNS, CMGR, LOG, and AMGR processes on the Channel Manager
host
v If the DataChannel management programs are running on all DataChannel
hosts, start the application programs on all DataChannel hosts by following
these steps:
1. Log in as pvuser. Make sure this login occurs on the host running the
Channel Manager programs.
2. Change to the DataChannel installation's bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Run the following command to start all DataChannel applications on all
configured DataChannel hosts:
./dccmd start all
The command shows a success message like the following example.
See the Command Line Interface Reference for information about the dccmd
command.
A Java process is associated with the LOG server; this process must be stopped
if the proviso.log file needs to be re-created:
1. To find this Java process, enter the following:
ps -eaf | grep LOG
The output should be similar to the following:
pvuser 15774 15773 0 Nov 29 ?
7:59 java -Xms256m -Xmx384m com.ibm.tivoli.analytics.Main -a LOG
2. Kill this process using the command:
kill -9 15774
where 15774 is the PID of the Java process discovered by using the grep command.
3. Restart the LOGW process.
After you have started the DataChannel components, check every server that hosts
a DataLoad SNMP collector to make sure the collectors are running. To check
whether a collector is running, run the following command:
ps -ef | grep -i pvmd
If the collector is running, you will see output similar to the following:
pvuser 27118 1 15 10:03:27 pts/4 0:06 /opt/dataload/bin/pvmd -nologo
-noherald /opt/dataload/bin/dc.im -headless -a S
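The check above can be wrapped in a small helper (a sketch; the pvmd process name is taken from the output shown above):

```shell
#!/bin/sh
# Sketch: succeed if a DataLoad SNMP collector (pvmd) is running.
# The [p] bracket trick keeps the grep command itself out of the match.
collector_running() {
    ps -ef | grep -i '[p]vmd' > /dev/null
}

if collector_running; then
    echo "collector is running"
else
    echo "collector is NOT running"
fi
```

The helper exits with status 0 when a matching process exists, so it can also be used directly in monitoring scripts.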
Procedure
1. Log into the server that is running Tivoli Netcool Performance Manager SNMP
DataLoad by entering the username and password you specified when
installing SNMP DataLoad.
2. Source the DataLoad environment file by entering the following command:
. $DLHOME/dataLoad.env
where $DLHOME is the location where SNMP DataLoad is installed on the system
(/opt/dataload, by default).
Note: If DataLoad shares the same server as DataMart, make sure you unset
the environment variable by issuing the following command from a BASH shell
command line:
unset PV_PRODUCT
3. Change to the DataLoad bin directory by entering the following command:
cd $PVMHOME/bin
4. Start the DataLoad SNMP collector using the following command:
pvmdmgr start
The command displays the following message when the SNMP collector has
been successfully started:
PVM Collecting Daemon is running.
Results
The script that controls the starting and stopping of SNMP collectors, pvmdmgr,
prevents multiple collector instances from running simultaneously.
If a user starts a second instance, that second instance will die by itself in under
two minutes without ever contacting or confusing the relevant watchdog script.
Two channels running on the same system share a common Application Manager
(AMGR) that has a watchdog script, amgrw. The AMGR is responsible for starting,
monitoring through watchdog scripts, and gathering status for each application
server process for the system it runs on. Application programs include the FTE,
CME, LDR, and DLDR programs.
Each program has its own set of program and staging directories.
After a manual start, the program's watchdog script restarts the program as
required.
Procedure
v To start the Channel Manager programs manually:
1. Log in as pvuser on the host running the Channel Manager programs.
2. At a shell prompt, change to the dataChannel/bin subdirectory. For example:
$ cd /opt/datachannel/bin
3. Enter the following commands at a shell prompt, in this order:
For the Channel Name Server, enter:
./cnsw
For the Log Server, enter:
./logw
For the Channel Manager, enter:
./cmgrw
For the Application Manager, enter:
./amgrw
v To manually start the DataChannel programs on all hosts in your DataChannel
configuration:
1. Start the Channel Manager programs, as described in the previous section.
2. On each DataChannel host, start the amgrw script.
3. On the Channel Manager host, start the application programs as described in
“Starting the DataChannel management programs” on page 242.
If you add and configure a new remote DataChannel by using the Topology Editor
after the initial deployment of your topology, the system does not pick up these
changes unless you manually stop and start the relevant processes, as explained
in Chapter 8, “Modifying the current deployment,” on page 187.
Note: The DataChannel CMGR, CNS, AMGR, and LOG visual processes must
remain running until you have gathered the DataChannel parameters from your
environment.
Procedure
1. On the DataChannel host, log in as the component user, such as pvuser.
2. Change your working directory to the DataChannel bin directory
(/opt/datachannel/bin by default) by using the following command:
$ cd /opt/datachannel/bin
3. Shut down the DataChannel FTE.
Prior to shutting down all DataChannel components, some DataChannel work
queues must be emptied.
To shut down the DataChannel FTE and empty the work queues:
$ ./dccmd stop FTE.*
4. Let all DataChannel components continue to process until the .../do directories
for the FTE and CME components contain no files.
The .../do directories are located in the subdirectories of $DCHOME (typically,
/opt/datachannel) that contain the DataChannel components - for example,
FTE.1.1, CME.1.1.
5. Shut down all CMEs on the same hour, so that the operator state files remain
in sync with each other. To accomplish this:
a. Identify the leading CME, either by examining the do and done directories
in each CME and the DAT files within them, or by using dccmd status all
to see which CME reports the latest hour in its processing status.
b. Stop all CMEs that are on that hour, and then continue using the same
approach to find the hour being processed, stopping each CME as it
reaches that hour, until all CMEs are stopped. Stop a CME by using the
command:
$ ./dccmd stop CME
6. Use the following dccmd commands to stop the DataChannel applications:
$ ./dccmd stop DLDR
$ ./dccmd stop LDR
$ ./dccmd stop FTE
$ ./dccmd stop DISC
$ ./dccmd stop UBA (if required)
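The drain check in step 4 can be automated with a small polling helper. This is a sketch: FTE.1.1 and CME.1.1 are example component directory names, and the /opt/datachannel default is an assumption; actual names vary by deployment.

```shell
#!/bin/sh
# Sketch: succeed only when the .../do directories of the named
# components contain no files, that is, when they have drained.
DCHOME=${DCHOME:-/opt/datachannel}

do_dirs_empty() {
    for comp in "$@"; do
        # Any remaining file means the component is still draining.
        if [ -n "$(find "$DCHOME/$comp/do" -type f 2>/dev/null | head -1)" ]; then
            return 1
        fi
    done
    return 0
}

# Example: poll every 60 seconds until the FTE and CME dirs are empty.
# while ! do_dirs_empty FTE.1.1 CME.1.1; do sleep 60; done
```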
Note: For details on how to restart a DataChannel, see “Manually starting the
Channel Manager programs” on page 244.
Overview
An aggregation set is a grouping of network management raw data and computed
statistical information stored in the Tivoli Netcool Performance Manager database
for a single timezone.
For example, if your company provides network services to customers in both the
Eastern and Central US timezones, you must configure two aggregation sets.
Because each aggregation set is closely linked with a timezone, aggregation sets are
sometimes referred to as timezones in the Tivoli Netcool Performance Manager
documentation. However, the two concepts are separate.
When you configure an aggregation set, the following information is stored in the
database:
v The timezone ID number associated with this aggregation set.
v The timezone offset from GMT, in seconds.
v Optionally, the dates that Daylight Savings Time (DST) begins and ends in the
associated timezone for each year from the present through 2014. (Or you can
configure an aggregation set to ignore DST transitions.)
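The stored offset is simply the timezone's distance from GMT in hours multiplied by 3600. For example, a zone five hours west of GMT (such as US Eastern Standard Time) works out as:

```shell
# Offset from GMT in seconds for a timezone 5 hours west of GMT
# (for example, US Eastern Standard Time): hours * 3600.
echo $(( -5 * 3600 ))   # prints -18000
```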
Procedure
1. Log in as root. (Remain logged in as root for the remainder of this appendix.)
2. At a shell prompt, change to the directory where Tivoli Netcool Performance
Manager DataMart program files are installed. For example:
# cd /opt/datamart
3. Load the DataMart environment variables into your current shell's
environment using the following command:
# . ./dataMart.env
4. Change to the bin directory:
# cd bin
Menu :
Choice : 1
6. Type 1 at the Choice prompt and press Enter to enter the password for
PV_ADMIN. The script prompts twice for the password you set up for PV_ADMIN.
==> Enter password for PV_ADMIN : PV
==> Re-enter password : PV
Note: The script obtains the DB_USER_ROOT setting from the Tivoli Netcool
Performance Manager database configured in previous chapters, and
constructs the name of the Tivoli Netcool Performance Manager database
administrative login name, PV_ADMIN, from that base. If you set a different
value, the "Database user" entry reflects your setting. For example, if you
previously set DB_USER_ROOT=PROV, this script would generate the
administrative login name PROV_ADMIN.
7. To configure the first aggregation set, type 2 at the Choice prompt and press
Enter twice.
The script shows the current settings for the aggregation set with ID 0
(configured by default):
The following Time Zones are defined into the Database :
___________________________________________________________________________________
id | Date (in GMT) | offset in | Name | Aggsetstatus
| | seconds | |
___________________________________________________________________________________
0 | 1970/01/01 00:00:00 | 0 | GMT | Aggset created
==> Press <Enter> to continue ....
You can use this aggregation set as-is, or modify it to create a new timezone.
8. Press Enter. A list of predefined timezones and their timezone numbers is
displayed:
9. Type the number of the timezone you want to associate with aggregation set
0. For example, type 9 for Eastern Standard Time.
The script prompts:
==> Select an Aggset ID to add/modify (E: Exit) : 0
To associate the specified timezone, EST, with the database's default
aggregation set, type 0.
10. The script asks whether you want your aggregation set to include Daylight
Saving Time (DST) transition dates:
Does your Time Zone manage DST [Y/N] : Y
For most time zones, type Y and press Enter.
11. The script displays the results:
Note: The dates that appear in your output will most likely be different from
the dates that appear in the example.
12. Press Enter to return to the script's main menu.
13. To configure a second aggregation set, type 2 at the Choice prompt and press
Enter three times.
14. Specify the timezone number of your second timezone. For example, type 8 to
specify Central Standard Time.
The script prompts:
==> Select an Aggset ID to add/modify (E: Exit) : 1
If you enter a set number that does not exist in the database, the script creates
a new aggregation set with that number. Type the next available set number,
1.
15. Respond Y to the timezone management query.
The script shows the results of creating the second aggregation set:
____________
The following Time Zone has been modified :
___________________________________________________________________________________
id | Date (in GMT) | offset in | Name | Aggset status
| | seconds | |
___________________________________________________________________________________
1 | 2004/09/29 23:00:00 | -18000 | CST_2004_DST | Aggset created
1 | 2004/10/31 07:00:00 | -21600 | CST_2004 | Aggset created
1 | 2005/04/03 08:00:00 | -18000 | CST_2005_DST | Aggset created
1 | 2005/10/30 07:00:00 | -21600 | CST_2005 | Aggset created
1 | 2006/04/02 08:00:00 | -18000 | CST_2006_DST | Aggset created
1 | 2006/10/29 07:00:00 | -21600 | CST_2006 | Aggset created
1 | 2007/04/01 08:00:00 | -18000 | CST_2007_DST | Aggset created
1 | 2007/10/28 07:00:00 | -21600 | CST_2007 | Aggset created
1 | 2008/04/06 08:00:00 | -18000 | CST_2008_DST | Aggset created
1 | 2008/10/26 07:00:00 | -21600 | CST_2008 | Aggset created
1 | 2009/04/05 08:00:00 | -18000 | CST_2009_DST | Aggset created
1 | 2009/10/25 07:00:00 | -21600 | CST_2009 | Aggset created
1 | 2010/04/04 08:00:00 | -18000 | CST_2010_DST | Aggset created
1 | 2010/10/31 07:00:00 | -21600 | CST_2010 | Aggset created
==> Press <Enter> to continue ....
16. Press Enter to return to the main menu, where you can add more aggregation
sets, or type 0 to exit.
Procedure
1. Make sure your EDITOR environment variable is set.
2. Change to the /opt/Proviso directory:
cd /opt/Proviso
3. Start the setup program:
./setup
The setup program's main menu is displayed:
Tivoli Netcool Performance Manager <version number> - [Main Menu]
1. Install
2. Upgrade
3. Uninstall
0. Exit
Choice [1]> 1
4. Type 1 at the Choice prompt and press Enter. The Install menu is displayed:
Tivoli Netcool Performance Manager <version number> - [Install]
1. Tivoli Netcool Performance Manager Database Configuration
0. Previous Menu
Choice [1]> 1
Procedure
1. Type 1 at the Choice prompt and press Enter. Setup displays the installation
environment menu:
Note: Menu options 2, 3, and 4 are used later in the installation process.
2. Make sure the value for PROVISO_HOME is the same one you used when
you installed the database configuration. If it is not, type 1 at the Choice
prompt and correct the directory location.
3. The script displays the component installation menu:
Tivoli Netcool Performance Manager Database Configuration <version number> -
[component installation]
1. Database
2. Channel
3. Aggregation set
0. Exit
Choice [1]> 3
4. Type 3 at the Choice prompt and press Enter. The script displays the
installation environment menu:
Tivoli Netcool Performance Manager Aggregation Set <version number> -
[installation environment]
1. PROVISO_HOME : /opt/Proviso
2. DATABASE_HOME : /opt/oracle/product/12.1.0 or /opt/db2/product/10.1.0
3. ORACLE_SID : PV
4. DB_USER_ROOT : -
5. Continue
0. Previous Menu
Choice [5]> 4
5. Type 4 at the Choice prompt and press Enter to specify the same value for
DB_USER_ROOT that you specified in previous chapters. This manual's
default value is PV.
Enter value for DB_USER_ROOT [] : PV
6. Make sure that the values for PROVISO_HOME, DATABASE_HOME, and
ORACLE_SID or DB_NAME are the same ones you entered in previous
chapters. Correct the values if necessary.
7. Type 5 at the Choice prompt and press Enter. Setup displays the Aggregation
Set installation options menu:
Tivoli Netcool Performance Manager Aggregation Set <version number> -
[installation options]
1. List of configured aggregation sets
2. List of installed aggregation sets
3. Number of the aggregation set to install : -
4. Channel where to install aggregation set : (all)
5. Start date of aggregation set : <Current Date>
6. Continue
0. Back to options menu
Choice [6]>
Note: Do not change the value for option 4. Retain the default value, "all."
10. Select option 2 to list the aggregation sets already installed. The output is
similar to the following:
============== LIST OF CREATED AGGREGATION SETS ==============
============ X: created ==== #: partially created ============
Channels 0
| 1
AggSets -----------------------------------------------------------------------
| 0 X
Press enter...
13. When all menu parameters are set, type 6 at the Choice prompt and press
Enter.
Procedure
1. The script prompts that it will start the editor specified in the EDITOR
environment variable and open the aggregation set parameters file. Press Enter.
An editing session opens containing the aggsetreg.udef configuration file, as
shown in this example:
#
# Tivoli Netcool Performance Manager Datamart
# <Current Date>
#
#
# Channel C01: GROUPS DAILY aggregates storage
#
[AGGSETREG/C01/1DGA/TABLE/CURRENT]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
[AGGSETREG/C01/1DGA/TABLE/HISTORIC]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
[AGGSETREG/C01/1DGA/TABLESPACE/CURRENT]
CREATION_PATH=/raid_2/oradata
EXTENT_SIZE=64K
SIZE=10M
#
[AGGSETREG/C01/1DGA/TABLESPACE/HISTORIC]
CREATION_PATH=/raid_3/oradata
EXTENT_SIZE=64K
SIZE=10M
#
# Channel C01: RESOURCES DAILY aggregates storage
#
[AGGSETREG/C01/1DRA/TABLE/CURRENT]
PARTITION_EXTENTS=5
PARTITION_SIZE=100K
#
...
2. Do not make changes to this file unless you have explicit instructions from
Professional Services. If Professional Services provided guidelines for advanced
configuration of your aggregation sets, make the suggested edits.
Save and close the file.
3. When you close the configuration file, the script checks the file parameters and
starts installing the aggregation set. The installation takes three to ten minutes,
depending on the speed of your server.
A message like the following is displayed when the installation completes:
You can link a defined timezone to a calendar you create in the DataView GUI, or
the CME Permanent calendar (a 24-hour calendar).
When you link a group to a specific timezone and calendar, all subgroups inherit
the same timezone and calendar.
Procedure
v Best practice:
Use a separate calendar for each timezone. If you link multiple timezones to the
same calendar, a change to one timezone calendar setting will affect all the
timezones linked to that calendar.
v To link a group to a timezone:
1. Create a calendar with the DataView GUI, or use the default CME
Permanent calendar.
2. Create a text file (for example, linkGroupTZ.txt) with the following format:
– Each line has three fields separated by |_|.
– The first field is a DataView group name.
– The second field is a timezone name from the Tivoli Netcool Performance
Manager internal timezone list. See “Configuring aggregation sets” on
page 249 for a list of timezone names.
– The third field is the name of the calendar you create, or CME Permanent.
The following example line demonstrates the file format:
~Group~USEast|_|EST_2005_DST|_|CME Permanent|_|
Enter as many lines as you have timezone entries in your aggregation set
configuration.
3. At a shell prompt, enter a command similar to the following, which uses the
Resource Manager's CLI to link the group to the timezone:
resmgr -import segp -colNames "npath tz.name cal.name" -file linkGroupTZ.txt
v To unlink a timezone:
– Use the resmgr command. For example:
resmgr -delete linkGroupSE_TZC -colNames "npath tz.name cal.name" -file linkGroupTZ.txt
v To review timezone to group associations:
– Use the resmgr command. For example:
resmgr -export segp -colNames "name tz.name cal.name" -file link.txt
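The linkGroupTZ.txt file described above can be generated with a here-document. This is a sketch that reuses the example group, timezone, and calendar names from the text; your own group and timezone names will differ.

```shell
#!/bin/sh
# Sketch: build the three-field link file described above. Fields are
# separated by the literal token |_| and each line also ends with the
# token, matching the example line in the text.
cat > linkGroupTZ.txt <<'EOF'
~Group~USEast|_|EST_2005_DST|_|CME Permanent|_|
EOF

# The file is then passed to the Resource Manager CLI, for example:
# resmgr -import segp -colNames "npath tz.name cal.name" -file linkGroupTZ.txt
```

Add one line per timezone entry in your aggregation set configuration before running the resmgr import.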
To run the deployer from the command line, enter the following command:
# ./deployer.bin [options]
Option Description
-DDB2Client=DB2_client_home
-DDB2ServerHost=hostname
-DDB2ServerPort=port
-DDBName=database name
-DDB2AdminUser=admin_user
-DDB2AdminPassword=
admin_password
You can use the -DTarget option to force an install or uninstall of a component in a
high-availability (HA) environment, or when fixing an incomplete or damaged
installation. The -DTarget option uses the following syntax:
deployer.bin -DTarget=id
If you are using the -DTarget option to force the uninstall of a component, you
must also specify the -Daction=uninstall option when you run the deployer
application. The following example shows how to force the uninstallation of
DataMart on the local system:
deployer.bin -Daction=uninstall -DTarget=DMR
Value Description
DB Instructs the deployer to install the database setup components on the local machine.
DM Instructs the deployer to install the DataMart component on the local machine.
DV Instructs the deployer to install the DataView component on the local machine.
DC Instructs the deployer to install the DataChannel component on the local machine.
DL Instructs the deployer to install the DataLoad component on the local machine.
DBR Instructs the deployer to remove the database setup components from the local machine. Requires the -Daction=uninstall option.
DMR Instructs the deployer to remove the DataMart component from the local machine. Requires the -Daction=uninstall option.
When you run the deployer using the -DTarget option, note the following:
v The deployer does not perform component registration in the versioning tables
of the database.
v The deployer does not upload modified topology information to the database.
v The deployer does not allow you to select nodes other than the local node in
the Node Selection panel.
v In the case of an uninstall, the deployer does not remove the component from
the topology.
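The pairing of -DTarget values with the -Daction=uninstall requirement can be captured in a small wrapper. This is a sketch that only prints the deployer command line it would run; deployer.bin and the target IDs come from the tables above:

```shell
# Compose (but do not run) the deployer command for a forced local action.
# Removal targets (DBR, DMR) require -Daction=uninstall, per the table above.
deployer_cmd() {
    target="$1"
    case "$target" in
        DB|DM|DV|DC|DL) echo "./deployer.bin -DTarget=$target" ;;
        DBR|DMR)        echo "./deployer.bin -Daction=uninstall -DTarget=$target" ;;
        *)              echo "unknown target: $target" >&2; return 1 ;;
    esac
}
deployer_cmd DMR    # prints: ./deployer.bin -Daction=uninstall -DTarget=DMR
```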
Overview
How to install OpenSSH for Secure File Transfer (SFTP) among Tivoli Netcool
Performance Manager components.
This document explains how to install OpenSSH for Secure File Transfer (SFTP)
among Tivoli Netcool Performance Manager components. You must be proficient in
your operating system and have a basic understanding of public/private key
encryption when working with SFTP. For the purposes of this document, an SFTP
"client" is the node that initiates the SFTP connection and login attempt, while the
SFTP "server" is the node that accepts the connection and permits the login
attempt. This distinction is important for generating public/private keys and
authorization, as the SFTP server should have the public key of the SFTP client in
its authorized hosts file. This process is described in more detail later.
For Tivoli Netcool Performance Manager to use SFTP for the remote execution of
components and file transfer, OpenSSH must be configured for key-based
authentication when connecting from the Tivoli Netcool Performance Manager
account on the client (the host running the Tivoli Netcool Performance Manager
process that needs to use SFTP) to the account on the server. In addition, the host
keys must be established such that the host key confirmation prompt is not
displayed during the connection.
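As a sketch of the key-based setup, the client's public key is appended to the account's authorized_keys file on the server. The key material and user name below are placeholders; on a real client the pair would first be generated with ssh-keygen:

```shell
# On the SFTP client, a key pair would be generated once, for example:
#   ssh-keygen -t dsa -N "" -f ~/.ssh/id_dsa
# Here a placeholder public key stands in for the generated id_dsa.pub.
d=$(mktemp -d)
mkdir -p "$d/.ssh"
echo "ssh-dss AAAAB3...placeholder... pvuser@SFTPclient" > "$d/id_dsa.pub"
# On the SFTP server, append the client key to the account's authorized_keys:
cat "$d/id_dsa.pub" >> "$d/.ssh/authorized_keys"
# sshd rejects keys whose files are group- or world-writable:
chmod 700 "$d/.ssh"
chmod 644 "$d/.ssh/authorized_keys"
grep -c '^ssh-' "$d/.ssh/authorized_keys"    # one key line installed
```

On a real deployment the directory is the account's ~/.ssh, and the public key is transferred to the server host rather than copied locally.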
Enabling SFTP
The use of SFTP is supported in Tivoli Netcool Performance Manager.
Tivoli Netcool Performance Manager SFTP can be enabled for a single component,
set of components, or all components as needed. This table shows the Tivoli
Netcool Performance Manager components that support SFTP:
For detailed information about OpenSSH and its command syntax, visit the
following URL:
http://www.openssh.com/manual.html
Installing OpenSSH
This section describes the steps necessary to install OpenSSH on AIX, Solaris and
Linux.
Note: The following sections refer to the earliest supported version of the required
packages. Refer to the OpenSSH documentation for information about updated
versions.
AIX systems
To install OpenSSH on AIX systems you must follow all steps described in this
section.
Procedure
1. In your browser, enter the following URL:
http://www-03.ibm.com/servers/aix/products/aixos/linux/download.html
2. From the AIX Toolbox for Linux Applications page, follow the instructions to
download the following files to each Tivoli Netcool Performance Manager
system where SFTP is to be used:
v prngd - Pseudo Random Number Generation Daemon (prngd-0.9.29-
1.aix5.1.ppc.rpm or later).
v zlib - zlib compression and decompression library (zlib-1.2.2-
4.aix5.1.ppc.rpm or later).
3. From the AIX Toolbox for Linux Applications page, click the AIX Toolbox
Cryptographic Content link.
4. Download the following files to each Tivoli Netcool Performance Manager
system where SFTP is to be used:
openssl-0.9.7g-1.aix5.1.ppc.rpm or later
5. In your browser, enter the following URL:
http://sourceforge.net/projects/openssh-aix
6. From the OpenSSH on AIX page, search for and download the following file to
each Tivoli Netcool Performance Manager system where SFTP is to be used:
openssh-4.1p1_53.tar.Z or later
Procedure
1. Log in to the system as root.
2. Change your working directory to the location where the software packages
have been downloaded by using the following command:
# cd /download/location
3. Run the RPM Packaging Manager for each package, in the specified order,
using the following commands:
# rpm -i zlib
# rpm -i prngd
# rpm -i openssl
4. Uncompress and untar the openssh tar file by entering the following
commands:
$ uncompress openssh-4.1p1_53.tar.Z
$ tar xvf openssh-4.1p1_53.tar
5. Using the System Management Interface Tool (SMIT), install the openssh
package.
6. Exit from SMIT.
Procedure
case "$1" in
'start')
        # Start the ssh daemon
        if [ -x /usr/local/sbin/sshd ]; then
                echo "starting SSHD daemon"
                /usr/local/sbin/sshd &
        fi
        ;;
'stop')
        # Stop the ssh daemon
        kill -9 `ps -eaf | grep /usr/local/sbin/sshd | grep -v grep | awk '{print $2}' | xargs`
        ;;
*)
        echo "usage: sshd {start|stop}"
        ;;
esac
OpenSSH is required for SFTP to work with Tivoli Netcool Performance Manager
on Solaris systems.
The version of SSH installed with the Solaris 10 operating system is not supported.
Note: The following sections refer to the current version of the required packages.
Refer to the OpenSSH documentation for information about updated versions.
To install OpenSSH on Solaris systems, follow all steps described in this section.
Procedure
1. In your browser, enter the following URL: http://www.sunfreeware.com
2. From the Freeware for Solaris page, follow the instructions to download the
following files to each Tivoli Netcool Performance Manager system where SFTP
is to be used. Ensure that you download the correct files for your version of
Solaris.
v gcc - Compiler. Ensure that you download the full Solaris package and not
just the source code (gcc-3.2.3-sol9-sparc-local.gz or later).
v openssh - SSH client (openssh-4.5p1-sol-sparc-local.gz or later).
v openssl - SSL executable files and libraries (openssl-0.9.8d-sol9-sparc-local.gz
or later).
v zlib - zlib compression and decompression library (zlib-1.2.3-sol9-sparc-
local.gz or later).
What to do next
Note: Ensure that libcrypto.so.0.9.8 is available (instead of
libcrypto.so.1.0.0) to use OpenSSH on Solaris.
Procedure
1. Log in to the system as root.
2. Change your working directory to the location where the software packages
have been downloaded using the following command:
# cd /download/location
3. Copy the downloaded software packages to /usr/local/src, or a similar
location, using the following commands:
case "$1" in
'start')
        # Start the ssh daemon
        /usr/local/sbin/sshd &
        ;;
'stop')
        # Stop the ssh daemon
        /usr/bin/pkill -x sshd
        ;;
*)
        echo "usage: /etc/init.d/sshd {start|stop}"
        ;;
esac
2. Check that /etc/rc3.d/S89sshd exists (or any sshd startup script exists) and is
a soft link to /etc/init.d/sshd.
If not, create it using the following command:
ln -s /etc/init.d/sshd /etc/rc3.d/S89sshd
Linux systems
OpenSSH is required for SFTP to work with Tivoli Netcool Performance Manager.
OpenSSH is installed by default on any RHEL system.
Configuring OpenSSH
This section describes how to configure the OpenSSH server and client.
To configure the OpenSSH Server, follow these steps on each Tivoli Netcool
Performance Manager system where SFTP is to be used:
Procedure
1. Log in to the system as root.
2. Change your working directory to the location of the OpenSSH server
configuration file, sshd_config (in /usr/local/etc by default), using the
following command:
# cd /usr/local/etc
3. Using the text editor of your choice, open the sshd_config file. This is an
example of a sshd_config file:
The OpenSSH client requires no configuration if it is used in its default form. The
default location for the OpenSSH client file is /usr/local/etc/ssh_config.
Procedure
1. Log in as pvuser on the node that will be the SFTP client. This node is
referred to as SFTPclient in these instructions, but you must replace
SFTPclient with the name of your node.
Note: The directory that contains the .ssh directory might also need to be
writable by owner.
11. The first time you connect using SSH or SFTP to the other node, it will ask if
the public key fingerprint is correct, and then save that fingerprint in
known_hosts. Optionally, you can manually populate the client's known_hosts
file with the server's public host key (by default, /usr/local/etc/
ssh_host_dsa_key.pub).
For large-scale deployments, a more efficient and reliable procedure is:
a. From one host, ssh to each SFTP server and accept the fingerprint. This
builds a master known_hosts file with all the necessary hosts.
Note: If the known_hosts file has not been populated and secure file
transfer (SFTP) is attempted through Tivoli Netcool Performance Manager,
SFTP fails with vague errors.
For the following tests, the commands normally work without asking for a
password. If you are prompted for a password, public/private key encryption is
not working.
Ensure that you specify the full path to the ssh and sshd binary files. Otherwise,
you might use another previously installed SSH client or server.
Procedure
1. On both nodes, kill any existing sshd processes and start the sshd process from
the packages you installed, by entering the following commands:
pkill -9 sshd;/usr/local/sbin/sshd &
The path can be different depending on the installation.
2. From SFTPclient, run the following command:
/usr/local/bin/ssh SFTPserver
3. From SFTPclient, run the following command:
/usr/local/bin/sftp SFTPserver
4. Optional: If you set up bidirectional SFTP, run the following command from
SFTPserver:
/usr/local/bin/ssh SFTPclient
5. Optional: If you set up bidirectional SFTP, run the following command from
SFTPserver:
/usr/local/bin/sftp SFTPclient
6. If all tests allow you to log in without specifying a password, follow the Tivoli
Netcool Performance Manager instructions on how to enable SFTP in each
Tivoli Netcool Performance Manager component. Make sure to specify the full
path to SSH in the Tivoli Netcool Performance Manager configuration files. In
addition, make sure the user that Tivoli Netcool Performance Manager is run as
is the same as the user that you used to generate keys.
If you find that OpenSSH is not working properly with public keys:
Procedure
1. Check the ~/.ssh/known_hosts file on the node acting as the SSH client and
make sure the server host name, IP, and key information is present and correct.
2. Check the ~/.ssh/authorized_keys file on the node acting as the SSH server
and make sure that the client public key is present and correct. Ensure that
the permissions are -rw-r--r--.
3. Check the ~/.ssh/id_dsa file on the node acting as the SSH client and make
sure that the client's private key is present and correct. Ensure that the
permissions are -rw-------.
4. Check the ~/.ssh directory on both nodes to ensure that the permissions on
the directories are -rwx------.
5. Check for syntax errors (common ones are misspelling authorized_keys and
known_hosts without the "s" at the end). In addition, if you copied and pasted
keys into the known_hosts or authorized_keys files, you might have introduced
line breaks in the middle of what must be a single, very long line.
6. Check the ~ (home directory) permissions to ensure that they are only writable
by owner.
7. If the permissions are correct, kill the sshd process and restart in debug mode
as follows:
pkill -9 sshd; /usr/local/sbin/sshd -d
8. Test SSH again in verbose mode on the other node by entering the following
command:
/usr/local/bin/ssh -v SFTPserver
9. Read the debugging information about both client and server and
troubleshoot from there.
10. Check the log file /var/adm/messages for additional troubleshooting
information.
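Steps 1 through 6 above can be partly automated. The sketch below checks the expected octal modes against a throwaway directory tree that mirrors the ~/.ssh layout described above; stat -c is the GNU form, so adjust the stat invocation on AIX or Solaris:

```shell
# Verify the file permissions listed in the troubleshooting steps above.
check_mode() {
    want="$1"; f="$2"
    got=$(stat -c '%a' "$f" 2>/dev/null) || { echo "missing: $f"; return 1; }
    if [ "$got" = "$want" ]; then
        echo "OK $f ($got)"
    else
        echo "BAD $f (have $got, want $want)"
    fi
}
# Demonstration tree standing in for ~/.ssh on a real node:
d=$(mktemp -d)
touch "$d/authorized_keys" "$d/id_dsa"
chmod 700 "$d"; chmod 644 "$d/authorized_keys"; chmod 600 "$d/id_dsa"
check_mode 700 "$d"                    # directory: -rwx------
check_mode 644 "$d/authorized_keys"    # public keys: -rw-r--r--
check_mode 600 "$d/id_dsa"             # private key: -rw-------
```

On a real node, point the checks at the home directory and its .ssh contents instead of the demonstration tree.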
In the Tivoli Netcool Performance Manager log files, you might see the following
errors:
v [DC10120] FTPERR error: incompatible version, result: sftp status:
SSH2_FX_FAILURE:: incompatible version, log:
This error indicates that the SSH server (sshd) is SSH2 rather than OpenSSH.
OpenSSH is required for Tivoli Netcool Performance Manager to function
correctly.
v [DC10120] FTPERR error: bad version msg, result: sftp status:
SSH2_FX_NO_CONNECTION:: connection not established - check ssh
configuration, log:
LDAP configuration
There are two scenarios for LDAP configuration: configuring LDAP during the
installation of Tivoli Netcool Performance Manager, and configuring LDAP after
the Tivoli Netcool Performance Manager installation is complete.
When the DataView installation is complete, it should have created two users and
two groups in LDAP:
Users:
v tnpm
v tnpmScheduler
Groups:
v tnpmUsers
v tnpmAdministrators
Verify from the UI that the users tnpm and tnpmScheduler are members of the
tnpmAdministrators group.
Procedure
To successfully authenticate your LDAP users, you must assign them to one of the
following roles:
v tnpmUser
v tnpmAdministrator
This can be done by the smadmin user, by navigating to Users and Groups > User
Roles and assigning the correct roles.
Alternatively, tipcli.sh can be used to assign roles to the user.
JazzSM_HOME/JazzSM/profile/bin/tipcli.sh MapRolesToUser --username
<DASH_admin_user> --password <DASH_admin_password> --userID <userUniqueId>
--rolesList <roleName>
Where <userUniqueId> is the concatenation of user name and realm in which user
information is stored.
For example:
JazzSM_HOME/JazzSM/profile/bin/tipcli.sh MapRolesToUser --username
<DASH_admin_user> --password <DASH_admin_password> --userID
uid=<user_name>,dc=<server>,dc=<server>,dc=<company>,dc=com --rolesList
tnpmUser
What to do next
Assign tnpmUser and tnpmAdministrator roles to the LDAP users at the Jazz for
Service Management User Roles page after LDAP integration is done. Otherwise,
LDAP users might not be able to view the DataView reports.
Overview
Interim fix installation overview.
Unlike major, minor, and maintenance releases, which are planned, patch releases
(interim fixes and fix packs) are unscheduled and are delivered under the
following circumstances:
v A customer is experiencing a "blocking" problem and cannot wait for a
scheduled release for the fix.
v The customer's support contract specifies a timeframe for delivering a fix for a
blocking problem and that timeframe does not correspond with a scheduled
release.
v Development determines that a patch is necessary.
Note: Patches are designed to be incorporated into the next scheduled release,
assuming there is adequate time to integrate the code.
Installation rules
Rules that apply to the installation of patches.
The patch installer verifies that your installation conforms to these rules.
If remote installation of a component is not possible, the deployer grays out
any remote component host on the node selection page.
The maintenance deployer must run locally on each DataMart host to apply a
patch.
A patch release updates the file system for the component that the patch is
intended for and updates the versioning information in the database.
To verify that the versioning was updated correctly for the components in the
database, you can run several queries both before and after the installation and
compare the results. For detailed information, see the Tivoli Netcool Performance
Manager Technical Note: Tools for Version Reporting document.
Installing a patch
How to install a patch.
To install a patch:
Procedure
1. You must have received or downloaded the maintenance package from IBM
Support. The maintenance package contains the Maintenance Descriptor File,
an XML file that describes the contents of the fix pack. Follow the instructions
in the README for the fix pack release to obtain the maintenance package
and unzip the files.
Note: For each tar.gz file, you must unzip it and then untar it. For
example:
gunzip filename.tar.gz
tar -xvf filename.tar
2. Log in as root.
3. Set and export your DISPLAY environment variable (see “Setting up a remote
X Window display” on page 70).
4. Start the patch deployer using one of the following methods:
From the launchpad:
a. Click Start Tivoli Netcool Performance Manager Maintenance Deployer
option in the list of tasks.
b. Click the Start Tivoli Netcool Performance Manager Maintenance
Deployer link.
From the command line:
v Run the following command:
# ./deployer.bin -Daction=patch
5. The deployer displays a welcome page. Click Next to continue.
6. Accept the default location of the base installation directory of the database
JDBC driver.
/opt/oracle/product/12.1.0-client32
/opt/db2/product/10.1.0/java
Specifies the SID for the database. The default value is PV.
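Looking back at step 1 of this procedure, unpacking a maintenance package that contains several tar.gz files can be scripted. The sketch below first builds a sample archive so that it is self-contained; real maintenance package file names will differ:

```shell
# Self-contained demo: create a sample tar.gz, then unpack every tar.gz in
# the directory the way step 1 of the patch procedure describes.
d=$(mktemp -d); cd "$d"
mkdir pkg && echo data > pkg/file.txt
tar -cf sample.tar pkg && gzip sample.tar    # stand-in for a downloaded file
rm -r pkg                                    # pretend nothing is unpacked yet
for f in *.tar.gz; do
    gunzip "$f"            # produces sample.tar
    tar -xf "${f%.gz}"     # extracts pkg/file.txt
done
ls pkg/file.txt
```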
Error codes
The following sections describe the error messages generated by the Deployer, the
Topology Editor, and InstallAnywhere.
Deployer messages
The Deployer messages.
Table 13 lists the error messages returned by the Tivoli Netcool Performance
Manager deployer.
DataMart Messages
GYMCI5101E
Message: The DataMart installation failed.
Action: See the DataMart installer logs for details.
Database Configuration Messages
GYMCI5201E
Message: The database installation failed. See the installation log for details.
Action: See the root_install_dir/database/install/log/Oracle_SID/install.log file.
GYMCI5202E
Message: The database uninstallation script failed because of a syntax error. This script must be run as oracle. For example: ./uninstall_db /var/tmp/PvInstall/install.cfg.silent
Action: Check the syntax and run the script again.
GYMCI5204E
Message: The database could not be removed because some Oracle environment variables are not correctly set. Some or all of the Oracle environment variables are not set (for example, ORACLE_HOME, ORACLE_SID, or ORACLE_BASE).
Action: Check that all the required Oracle variables are set and try again.
InstallAnywhere messages
The InstallAnywhere messages.
Table 14 lists the InstallAnywhere error messages. These messages could be
returned by either the deployer or the Topology Editor. See the InstallAnywhere
documentation for more information about these error codes and how to resolve
them.
Several files are used to log errors for the Tivoli Netcool Performance Manager
components and its underlying framework. These log files include:
v “COI log files”
v “Deployer log file”
v “Eclipse log file” on page 303
v “Trace log file” on page 303
See the Technology Pack Installation Guide for information about the technology pack
log files.
The Composite Offering Installer (COI) adds a layer called the COI Plan to the
Tivoli Netcool Performance Manager installation. The COI Plan consists of a set of
COI Machine Plans, one for each machine where Tivoli Netcool Performance
Manager components should be installed. A COI Machine Plan is a collection of
COI Steps to be run on the corresponding machine.
The Eclipse framework logs severe problems in a file under the Topology Editor
installation directory (for example, /opt/IBM/Proviso/topologyEditor/workspace/
.metadata). By default, the Eclipse log file is named .log. You should not need to
look there unless there is a problem with the underlying Eclipse framework.
The trace log file is located in the Topology Editor installation directory (for
example, /opt/IBM/Proviso/topologyEditor). By default, this file is named
topologyEditorTrace and the default trace level is FINE.
Procedure
1. In the Topology Editor, select Window > Preferences. The Log Preferences
window opens.
2. Select the new trace level. If desired, change the name of the log file.
3. Click Apply to apply your changes. To revert back to the default values, click
Restore Defaults.
4. Click OK to close the window.
Deployment problems
A list of deployment problem descriptions and solutions.
Problem: The deployer window does not automatically become the focus window
after launching it from the Topology Editor.
Cause: In some cases (for example, when you export the display on a VNC session
on Linux systems), the deployer window does not get the focus.

Problem:
1. When the user tries to launch the Firefox browser, an error is displayed
regarding the Cairo 1.4.10 package.
2. In the Topology Editor, the minimize and maximize buttons are red in color.
Note: This is an in-built feature in Eclipse 4.2.2. See
http://stackoverflow.com/questions/12245102/eclipse-juno-red-minimize-and-maximize-buttons-on-linux
Cause: Cairo 1.4.10 may not support the requested image format.
User action: Start the VNC server using the following command:
/usr/bin/X11/vncserver -depth 24 -geometry 1280x1024
Problem: In a fresh installation, the database installation step fails.
Cause: You did not perform the necessary preparatory steps.
User action: This step verifies that the Oracle Listener is working properly
before actually creating the Tivoli Netcool Performance Manager database. If the
step fails:
1. Complete the necessary manual steps (see “Configure the Oracle listener” on page 123).
2. Change the status of the step to Ready.
3. Resume the installation.
The step should complete successfully.
User action:
1. Make sure the installation step is really in a hung state. For
example, the Tivoli Netcool Performance Manager
database-related steps might take more than an hour to complete;
other steps complete in far less time.
2. Determine which child process is causing the hang. First, find the
installer process by entering the following command:
ps -ef
User action:
1. Make sure that the CMGR process is running (see “Management
programs and watchdog scripts” on page 240 and “Starting the
DataChannel management programs” on page 242).
2. Restart DataView.
When you install Tivoli Netcool Performance Manager components, the deployer
creates a set of temporary configuration files that are used during the installation
process. These files specify the components that are to be installed on a target
system and the deployment information required to install them. You can use these
configuration files to troubleshoot a Tivoli Netcool Performance Manager
installation.
The temporary configuration files are normally removed from the target system
when the deployer completes the installation process. You can prevent the
deployer from removing the files by editing the installer XML file associated with a
component. This file is named n_comp_name.xml, where n is an index number
generated by the deployer and comp is a string that identifies the component.
Possible values for the comp string include DataMart, DataView, DBChannel, and
DBSetup. Installer XML files are located by default in the /tmp/ProvisoConsumer/
Plan/MachinePlan_hostname directory, where hostname is the host name of the
target system.
To prevent the deployer from removing the temporary files associated with a
component install, open the corresponding install XML file and modify the
following element so that the value of the arg2 property is false:
<equals arg1="${remove.temporary.files}" arg2="true"/>
The following excerpt from the file shows the resulting XML element:
<equals arg1="${remove.temporary.files}" arg2="false"/>
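The edit can also be made non-interactively. This sketch uses GNU sed on a stand-in file (BSD sed needs -i ''); the element is the one shown above:

```shell
# Flip remove.temporary.files from true to false in an installer XML file.
# The file here is a temporary stand-in for n_comp_name.xml.
f=$(mktemp)
echo '<equals arg1="${remove.temporary.files}" arg2="true"/>' > "$f"
sed -i 's/arg2="true"/arg2="false"/' "$f"    # GNU sed; on BSD use: sed -i ''
cat "$f"
```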
When you contact IBM support about a Tivoli Netcool Performance Manager
installation problem, the support staff might ask you for these files. You can create
a tar file or zip archive that contains the entire contents of the
/tmp/ProvisoConsumer directory and send it to the IBM support staff for assistance.
Problem: A Tivoli Netcool Performance Manager component is still listed as
Configured in the Topology Editor even though it is installed.
Cause: The component is installed, but has not been started.
User action: Start the component. Its status changes to Installed.

Problem: A new channel component was deployed, or the channel configuration was
changed, but the change has no effect.
Cause: The channel components need to be bounced.
User action: Bounce the components, as described in Appendix B, “DataChannel
architecture,” on page 239.
Problem: The Topology Editor won't open and the application window shows a Java
exception (core dump).
Cause: You forgot to set and export your DISPLAY variable.
User action: Enter the following commands:
$ DISPLAY=Host_IP_Address:0.0
$ export DISPLAY
The following error might cause the Tivoli Netcool Performance Manager database
schema installation to fail.
SQL0440N No authorized routine named "DB2AUTH.ADD_USER_PL" of type "PROCEDURE"
having compatible arguments was found.
This error indicates that the db2auth related files, which are needed during the
DB2 database installation, are missing. These files are normally copied over during
For example:
# cd <DIST_DIR>/proviso/RHEL/DataBase/RHEL<version_num>/db2/instance
# cp db2authDaemon db2auth.so /opt/db2ns/sqllib/security64/plugin/group
# ls -lrt /opt/db2ns/sqllib/security64/plugin/group
-rwxr-xr-x 1 root root 39041 Sep 28 06:34 db2authDaemon
-rwxr-xr-x 1 root root 39865 Sep 28 06:34 db2auth.so
Telnet problems
A list of Telnet problems and solutions.
Problem: The telnet client fails at the initial connection and reports the
following error: Not enough room in buffer for display location option reply.
This can occur when you start Tivoli Netcool Performance Manager components
from a Solaris 10 system where the user interface is displayed remotely on a
Windows desktop using an X Window tool like Exceed.
Cause: The DISPLAY variable passed via the telnet client is too long (for
example, XYZ-DA03430B70B-009034197130.example.com:0.0).
User action: Set the value of the DISPLAY variable using the IP address of the
local system, or the hostname only without the domain name. Then, reconnect to
the Solaris 10 machine using the telnet client.
Java problems
A list of Java problems and solutions.
Problem: The installer reports a Java Not Found error during installation of
technology packs.
Cause: The installer expected, but did not find, Java executables in the path
reported in the error message. The technology pack installation requires the
correct path in order to function.
User action: Create a symbolic link from the reported directory to the
directory on the system where the Java executables are installed, for example:
ln -s bin_path $JAVA_HOME/bin/java
Procedure
1. Make sure you are logged in as oracle and that the DISPLAY environment
variable is set.
2. Enter the following command:
$ sqlplus system/[email protected]
In this syntax:
v password is the password you set for the Oracle system login name. (The
default password is manager.)
v PV is the TNS name for your Tivoli Netcool Performance Manager database
defined in your Oracle Net configuration.
For example:
$ sqlplus system/[email protected]
3. Output like the following example indicates a successful connection:
SQL*Plus: Release 12.1.0.2.0 - Production on <Current Date>
Connected to:
Oracle12c Enterprise Edition Release 12.1.0.2 - Production
With the Partitioning option
JServer Release 12.1.0.2.0 - Production
SQL>
In the Oracle Net configuration, you set up an Oracle listener to wait for
connections using external procedure calls.
Procedure
1. Make sure you are logged in as oracle and that the DISPLAY environment
variable is set.
2. At a shell prompt, change to the following directory path:
$ cd $ORACLE_BASE/admin/skeleton/bin
3. Run the checkextc script, using the system database login name and password
as a parameter:
$ ./checkextc system/password
For example:
$ ./checkextc system/manager
4. Output like the following example indicates a successful test.
checkextc - Checking the installation of the library libpvmextc.so
2- Check Version
You can copy your custom DataView content, such as JSP pages, CSS and images,
from a remote Dashboard Application Services Hub server to a local Dashboard
Application Services Hub server. All remote content is copied to the local content
directory at JazzSM_HOME/products/tnpm/dataview/legacy/content.
Location
<DASH_location>/products/tnpm/dataview/legacy/bin
Required privileges
Adequate privileges are required to read and write files to the file system. You
must run this command from the UNIX command line as the Tivoli Netcool
Performance Manager UNIX user (by default, pvuser), or a user with similar or
greater privileges.
Syntax
Parameters
<DASH_username>
A Dashboard Application Services Hub user name for the local Dashboard
Application Services Hub.
<DASH_password>
The Dashboard Application Services Hub user password for the local
Dashboard Application Services Hub.
<source_username>
A Dashboard Application Services Hub user name for the remote Dashboard
Application Services Hub.
Optional parameter
<pattern>
The name pattern that identifies the types of files to filter for the
synchronization. Wildcards * and ? are supported. To synchronize all files, omit
the pattern; do not use * on its own to synchronize all files.
Example
The following command synchronizes the DataView custom .jsp file content from
a remote Dashboard Application Services Hub or Tivoli Integrated Portal to a local
Dashboard Application Services Hub:
./synchronize.sh -dashuser smadmin -dashpassword smadmin1 -sourceuser tipadmin -sourcepassword tipadmin -sourceurl http://10.44.240.70:16710/PV -pattern *.jsp
IBM may not offer the products, services, or features discussed in this document in
other countries. Consult your local IBM representative for information on the
products and services currently available in your area. Any reference to an IBM
product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product,
program, or service that does not infringe any IBM intellectual property right may
be used instead. However, it is the user's responsibility to evaluate and verify the
operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter
described in this document. The furnishing of this document does not grant you
any license to these patents. You can send license inquiries, in writing, to:
IBM may use or distribute any of the information you provide in any way it
believes appropriate without incurring any obligation to you.
Licensees of this program who wish to have information about it for the purpose
of enabling: (i) the exchange of information between independently created
programs and other programs (including this one) and (ii) the mutual use of the
information which has been exchanged, should contact:
The licensed program described in this document and all licensed material
available for it are provided by IBM under terms of the IBM Customer Agreement,
IBM International Program License Agreement or any equivalent agreement
between us.
The client examples cited are presented for illustrative purposes only. Actual
performance results may vary depending on specific configurations and operating
conditions.
All IBM prices shown are IBM's suggested retail prices, are current and are subject
to change without notice. Dealer prices may vary.
This information is for planning purposes only. The information herein is subject to
change before the products described become available.
This information contains examples of data and reports used in daily business
operations. To illustrate them as completely as possible, the examples include the
names of individuals, companies, brands, and products. All of these names are
fictitious and any similarity to actual people or business enterprises is entirely
coincidental.
COPYRIGHT LICENSE:
Each copy or any portion of these sample programs or any derivative work must
include a copyright notice as follows:
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of
International Business Machines Corp., registered in many jurisdictions worldwide.
Other product and service names might be trademarks of IBM or other companies.
A current list of IBM trademarks is available on the web at "Copyright and
trademark information" at www.ibm.com/legal/copytrade.shtml.
Adobe, Acrobat, PostScript and all Adobe-based trademarks are either registered
trademarks or trademarks of Adobe Systems Incorporated in the United States,
other countries, or both.
Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo,
Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or
registered trademarks of Intel Corporation or its subsidiaries in the United States
and other countries.
UNIX is a registered trademark of The Open Group in the United States and other
countries.
Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates.
Linear Tape-Open, LTO, the LTO Logo, Ultrium, and the Ultrium logo are
trademarks of HP, IBM Corp. and Quantum in the U.S. and other countries.
Terms and conditions for product documentation
Permissions for the use of these publications are granted subject to the following terms and conditions.
Applicability
These terms and conditions are in addition to any terms of use for the IBM
website.
Personal use
You may reproduce these publications for your personal, noncommercial use
provided that all proprietary notices are preserved. You may not distribute, display
or make derivative work of these publications, or any portion thereof, without the
express consent of IBM.
Commercial use
You may reproduce, distribute and display these publications solely within your
enterprise provided that all proprietary notices are preserved. You may not make
derivative works of these publications, or reproduce, distribute or display these
publications or any portion thereof outside your enterprise, without the express
consent of IBM.
Rights
IBM reserves the right to withdraw the permissions granted herein whenever, in its
discretion, the use of the publications is detrimental to its interest or, as
determined by IBM, the above instructions are not being properly followed.
You may not download, export or re-export this information except in full
compliance with all applicable laws and regulations, including all United States
export laws and regulations.
IBM®
Printed in USA