Problem: Phase 1 - Checking System Requirements For New Node
How to add a new node into an existing Storage Foundation for Oracle RAC (SFORARAC)
cluster
Solution
This is the recommended method for adding a new node to an existing SFORARAC cluster.
Make sure that the new systems you add to the cluster meet all of the requirements for installing
and using Storage Foundation for Oracle RAC.
✔ The new system must run the same operating system and patch level as the existing
systems.
✔ Remove any temporary or demo Storage Foundation for Oracle RAC licenses on the new node
and install a permanent license.
✔ Make sure you use a text window of at least 80 columns by 24 lines; 80 columns by 24 lines
is the recommended size for optimum display of the installsfrac script.
✔ Make sure that the file /etc/fstab contains only valid entries, each of which specifies a file
system that can be mounted.
Phase 3 consists of checking the readiness for installing Storage Foundation for Oracle RAC and
performing the installation.
2. Insert the VERITAS software disc containing the product into the new system's CD-ROM drive.
The Solaris volume-management utility automatically mounts the CD.
# cd /cdrom/storage_foundation_for_oracle_rac
4. Run the installsfrac script with the precheck option to verify that the current operating system level,
patch level, licenses, and disk space are adequate to enable a successful installation:
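For example, assuming the new node is named saturn (as in the rest of this procedure) and that installsfrac accepts the node name with its -precheck option:
# ./installsfrac -precheck saturn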
The utility's precheck function proceeds non-interactively. When complete, the utility displays the
results of the check and saves them in a log file.
5. If the check is successful, proceed to run installsfrac with the -installonly option. If the precheck
function indicates that licensing is required, you may proceed and add the license when running the
installation utility. For any other problems, the precheck function indicates the required corrective actions.
1. On the new system, use the -installonly option, which enables you to install Storage Foundation
for Oracle RAC on the new system without performing configuration; the configuration from the
existing cluster systems is used instead.
# ./installsfrac -installonly
1. To start VERITAS Volume Manager on the new node, use the vxinstall utility. As you run the
utility, answer "N" to prompts about licensing because you installed the appropriate license when
you ran the installsfrac utility:
# vxinstall
VxVM uses license keys to control access. If you have a SPARC storage Array (SSA) controller or
a Sun Enterprise Network Array (SENA) controller attached to your system, then VxVM will
grant you a limited use license automatically. The SSA and/or SENA license grants you
unrestricted use of disks attached to an SSA or SENA controller, but disallows striping, RAID-5,
and Dmp on non-SSA and non-SENA disks. If you are not running an SSA or ENA controller,
then you must obtain a license key to operate.
Licensing information:
System host ID: 80c44b4c
Host type: SUNW,Ultra-250
SPARCstorage Array or Sun Enterprise Network Array: No arrays found
Some licenses are already installed. Do you wish to review them[y,n,q] (default: y) n
Do you wish to enter another license key [y,n,q] (default: n)
2. Answer "N" when prompted to select enclosure-based naming for all disks.
Do you want to use enclosure based names for all disks ?[y,n,q,?] (default: n)
Sep 24 14:22:06 thor181 vxdmp: NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS,
datype = Disk
Starting the cache deamon, vxcached.
1. On the new system, modify the file /etc/system to set the shared memory parameter. The new
value of the shared memory parameter takes effect when the system restarts.
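For example, the entry might resemble the following; the value shown is only illustrative, so copy the actual value from /etc/system on one of the existing nodes:
set shmsys:shminfo_shmmax=4294967295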
2. Edit the file /etc/llthosts on the two existing systems. Using vi or another text editor, add the
line for the new node to the file. The file should resemble:
0 galaxy
1 nebula
2 saturn
3. On the new system, use vi or another text editor to create the file /etc/llthosts. The file must be
identical to the modified file in step 2.
4. On the new system, use vi or another text editor to create the file /etc/llttab. It should resemble
the following example:
set-node saturn
set-cluster 7
link qfe:0 /dev/qfe:0 - ether --
link qfe:1 /dev/qfe:1 - ether --
Except for the first line that refers to the system, the file resembles the /etc/llttab files on the other
two nodes. The second line must be the same on all of the nodes.
5. Use vi or another text editor to create the file /etc/gabtab on the new system. It should resemble
the following example:
/sbin/gabconfig -c -nN
Where N represents the number of systems in the cluster. For a three-system cluster, N would
equal 3.
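For this three-node cluster, the file on every node would therefore contain:
/sbin/gabconfig -c -n3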
6. Edit the /etc/gabtab file on each of the existing systems, changing the content to match the file
on the new system.
7. Set up the /etc/vcsmmtab and /etc/vxfendg files on the new system by copying them from one
of the other existing nodes:
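For example, assuming rcp access between the nodes is enabled and galaxy is one of the existing nodes:
# rcp galaxy:/etc/vcsmmtab /etc/vcsmmtab
# rcp galaxy:/etc/vxfendg /etc/vxfendg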
8. Run the following commands to start LLT and GAB on the new node:
# /etc/init.d/llt.rc start
# /etc/init.d/gab start
9. On the new node, start the VXFEN, VCSMM, and LMX drivers. Use the following commands
in the order shown:
# /etc/init.d/vxfen start
# /etc/init.d/vcsmm start
# /etc/init.d/lmx start
10. On the new node, unmount and restart ODM:
# umount /dev/odm
# /etc/init.d/odm start
11. On the new node, verify that the GAB port memberships are a, b, d, and o. Run the command:
# /sbin/gabconfig -a
GAB Port Memberships
===============================================================
Port a gen 4a1c0001 membership 012
Port b gen 4de40001 membership 012
Port d gen 40100001 membership 012
Port o gen c34ecd03 membership 012
1. On one of the existing nodes, check the groups dependent on the CVM service group:
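For example, assuming the service group is named cvm, as it is in the steps that follow:
# hagrp -dep cvm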
# haconf -makerw
4. If the ClusterService service group is configured, add the new system to its system list and
specify a failover priority of 2:
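For example, assuming the new node is saturn:
# hagrp -modify ClusterService SystemList -add saturn 2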
5. If the ClusterService service group is configured, add the new system to the service group's
AutoStartList:
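For example, again with saturn as the new node:
# hagrp -modify ClusterService AutoStartList -add saturn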
6. Add the new system to the cvm service group system list and specify a failover priority of 2:
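For example, with saturn as the new node and a failover priority of 2:
# hagrp -modify cvm SystemList -add saturn 2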
8. Add the new system and its node ID (refer to the /etc/llthosts changes in step 2 above) to
the cvm_cluster resource:
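For example, assuming the node ID is held in the CVMCluster agent's CVMNodeId attribute and saturn's node ID is 2, as set in /etc/llthosts:
# hares -modify cvm_cluster CVMNodeId -add saturn 2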
9. If the IP resource is not part of the cvm service group, skip to the next step. If the IP resource
is part of the cvm service group, add the new system to the IP resource:
# hares -modify listener_ip Address 10.182.2.130 -sys saturn
10. If the listener name is the default, skip this step. Otherwise, add the local listener name to the
Netlsnr resource:
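For example, assuming the Netlsnr resource is named listener (a hypothetical name) and the local listener on the new node is LISTENER_saturn:
# hares -modify listener Listener LISTENER_saturn -sys saturn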
12. On each of the existing nodes, run the following command to enable them to recognize the
new node:
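This is typically done with vxclustadm; a sketch, assuming the installed VxVM version supports the reinit keyword:
# /etc/vx/bin/vxclustadm -m vcs -t gab reinit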
1. On the new cluster system, create a local group and local user for Oracle. Be sure to assign the
same group ID, user ID, and home directory as exist on the two current cluster systems. For
example, enter:
# passwd oracle
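The oracle group and user would be created before setting the password; a sketch with hypothetical group and user IDs and home directory (use the same values as on the existing nodes):
# groupadd -g 1000 dba
# useradd -g dba -u 1001 -d /oracle -m oracle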
While installing Oracle on the new system, make sure that the file /var/opt/oracle/srvConfig.loc is
identical, is in the same location, and has the same permissions and owner as on the existing
systems. If necessary, copy the file from one of the other systems.
Edit the listener.ora file on the new system to specify the IP address (or the virtual IP address) for
the new system.
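For example, the relevant entry might resemble the following sketch; the listener name and port are illustrative, and 10.182.2.130 is the address used for the listener_ip resource earlier:
LISTENER_saturn =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.182.2.130)(PORT = 1521))
  )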
If Oracle system binaries are installed on a cluster file system, set up the new system for Oracle.
On the new system, log in as root and copy the directory /var/opt/oracle from one of the existing
systems into place:
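For example, assuming galaxy is one of the existing nodes and rcp access is enabled:
# rcp -r galaxy:/var/opt/oracle /var/opt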
As the new node boots, the VCS configuration is propagated from the existing cluster nodes to the
new node. All the configuration files located in the /etc/VRTSvcs/conf/config directory, including
main.cf, CVMTypes.cf, CFSTypes.cf and OracleTypes.cf, are identical on each node.
5. Run the following command to verify that the CVM group is configured and online on each
node, including the new node:
# hastatus -sum
6. On one of the existing nodes, run the following command to ensure the new node is recognized:
# /etc/vx/bin/vxclustadm nidmap
a. If SRVM is in a separate cluster file system, create the mount point for it, set the permissions,
and mount it. For example:
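A sketch, with a hypothetical mount point, disk group, and volume name (use the names configured on the existing nodes):
# mkdir /orasrvm
# chown oracle:dba /orasrvm
# mount -F vxfs -o cluster /dev/vx/dsk/orasrvmdg/srvmvol /orasrvm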
8. Whether you installed Oracle9i locally or on shared storage, run the global services daemon
(gsd) in the background on the new system as user oracle.
For example, for Oracle9i Release 1, where $ORACLE_HOME equals /oracle/VRT, enter:
$ /oracle/VRT/bin/gsd.sh &
For Oracle9i Release 2, where $ORACLE_HOME equals /oracle/VRT, for example, enter:
$ /oracle/VRT/bin/gsdctl start
2. If you use in-depth monitoring for the database, create the table for the database instance.
Create the table on the new system. Refer to the VERITAS Cluster Server Enterprise Agent for
Oracle, Installation and Configuration Guide for instructions on creating the table.
b. Mount the ODM directory. Re-mounting the ODM directory configures port d and re-links the
ODM libraries with SFO-RAC:
# mount /dev/odm
4. Create a mount point directory for the database file system. For example:
# mkdir /oradb1
6. Mount the file system with the database on the new system:
# mount -F vxfs -o cluster /dev/vx/dsk/oradbdg1/oradb1vol /oradb1
7. Log in as the Oracle user and attempt to manually start the new instance; the following example is
for a third system:
$ export ORACLE_SID=rac3
$ sqlplus '/as sysdba'
sqlplus> startup pfile=/oracle/orahome/dbs/initrac3.ora
8. After the new Oracle instance is brought up manually on the new system, place the instance
under VCS control.
a. Add the new system to the SystemList. For example, where the existing nodes, galaxy and
nebula, are nodes 0 and 1, saturn, the new node, would be node 2:
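Assuming the database service group is named oradb1_grp (a hypothetical name):
# hagrp -modify oradb1_grp SystemList -add saturn 2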
c. Modify the Sid (system ID) and Pfile (parameter file location) attributes of the Oracle resource.
For example:
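Assuming the Oracle resource is named ora1 (a hypothetical name), and using the SID and pfile from step 7:
# hares -modify ora1 Sid rac3 -sys saturn
# hares -modify ora1 Pfile /oracle/orahome/dbs/initrac3.ora -sys saturn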
d. If you have created a table for in-depth monitoring, modify the Table attribute of the Oracle
resource. For example:
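Again assuming the Oracle resource is named ora1, with a hypothetical table name:
# hares -modify ora1 Table vcstable_saturn -sys saturn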
9. To verify the configuration, enter the following command on the new system:
# hastop -local
10. Verify that all resources come online after starting VCS on the new system:
# hastart