
Problem

How to add a new node into an existing Storage Foundation for Oracle RAC (SFORARAC)
cluster

Solution

This is the recommended method for adding a new node to an existing SFORARAC cluster.

Phase 1 - Checking System Requirements for New Node

Make sure that the new systems you add to the cluster meet all of the requirements for installing
and using Storage Foundation for Oracle RAC.
✔ The new system must have the identical operating system and patch level as the existing
systems.
✔ Remove any temporary or demo Storage Foundation for Oracle RAC license on the new node and
use a permanent license.
✔ Make sure you use a text window of 80 columns minimum by 24 lines minimum; 80 columns
by 24 lines is the recommended size for the optimum display of the installsfrac script.
✔ Make sure that the file /etc/vfstab contains only valid entries, each of which specifies a file
system that can be mounted.
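
As a quick sanity check (a sketch, not part of the official procedure), compare the output of the
following commands on an existing node and on the new node; they cover the operating system
release, the installed patch list, and the installed VERITAS licenses (vxlicrep is available only if
the VERITAS licensing package is already present on the node):

# uname -r
# showrev -p
# vxlicrep

Any demo or temporary key reported by vxlicrep on the new node should be replaced with a
permanent license before installation.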

Phase 2 - Physically Adding New System to the Cluster


The new system must have the identical operating system and patch level as the existing systems.
When you physically add the new system to the cluster, it must have private network connections
to two independent switches used by the cluster and be connected to the same shared storage
devices as the existing nodes.
After installing Storage Foundation for Oracle RAC on the new system and starting VxVM, the
new system can access the same shared storage devices. It's important that the shared storage
devices, including coordinator disks, are exactly the same among all nodes. If the new node does
not see the same disks as the existing nodes, it will be unable to join the cluster, as indicated by an
error from CVM on the console.
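
After Phase 4, when VxVM is running on the new node, one hedged way to confirm that it sees the
same shared disks as the existing nodes is to capture the disk list on each node and compare the
two by hand; the file names below are only examples.

On an existing node:

# vxdisk -o alldgs list > /tmp/disks.galaxy

On the new node:

# vxdisk -o alldgs list > /tmp/disks.saturn

Copy one file across and compare:

# rcp saturn:/tmp/disks.saturn /tmp
# diff /tmp/disks.galaxy /tmp/disks.saturn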

Phase 3 - Installing Storage Foundation for Oracle RAC on New System

Phase 3 consists of checking the readiness for installing Storage Foundation for Oracle RAC and
performing the installation.

Checking the New System for Installation Readiness


1. Log in as root to the new system. For example, the new system is saturn.

2. Insert the VERITAS software disc containing the product into the new system's CD ROM drive.
The Solaris volume-management utility automatically mounts the CD.

3. Change to the directory containing the installsfrac program:

# cd /cdrom/storage_foundation_for_oracle_rac

4. Run the installsfrac script with the -precheck option to verify that the current operating system
level, patch level, licenses, and disk space are adequate for a successful installation:

# ./installsfrac -precheck saturn

The precheck function proceeds non-interactively. When complete, the utility displays the results
of the check and saves them in a log file.

5. If the check is successful, proceed to run installsfrac with the -installonly option. If the precheck
function indicates that licensing is required, you can add the license when you run the installation
utility. For any other issues, the precheck output indicates the required corrective actions.

Installing Storage Foundation for Oracle RAC Without Configuration

1. On the new system, use the -installonly option, which enables you to install Storage Foundation
for Oracle RAC without performing configuration; the configuration from the existing cluster
systems will be used instead.

# ./installsfrac -installonly

Phase 4 - Running vxinstall

1. To start VERITAS Volume Manager on the new node, use the vxinstall utility. As you run the
utility, answer "N" to prompts about licensing, because you installed the appropriate license when
you ran the installsfrac utility:

# vxinstall

VxVM uses license keys to control access. If you have a SPARCstorage Array (SSA) controller or
a Sun Enterprise Network Array (SENA) controller attached to your system, then VxVM will
grant you a limited use license automatically. The SSA and/or SENA license grants you
unrestricted use of disks attached to an SSA or SENA controller, but disallows striping, RAID-5,
and DMP on non-SSA and non-SENA disks. If you are not running an SSA or SENA controller,
then you must obtain a license key to operate.

Licensing information:
  System host ID: 80c44b4c
  Host type: SUNW,Ultra-250
  SPARCstorage Array or Sun Enterprise Network Array: No arrays found

Some licenses are already installed. Do you wish to review them [y,n,q] (default: y) n
Do you wish to enter another license key [y,n,q] (default: n)

2. Answer "n" when prompted whether to use enclosure-based names for all disks.

Do you want to use enclosure based names for all disks? [y,n,q,?] (default: n)
Sep 24 14:22:06 thor181 vxdmp: NOTICE: VxVM vxdmp V-5-0-34 added disk array DISKS,
datype = Disk
Starting the cache daemon, vxcached.

3. Answer "n" when prompted whether to set up a system-wide default disk group.

Do you want to setup a system wide default disk group?[y,n,q,?] (default: y) n


The installation is successfully completed.

4. Verify that the VxVM daemons are up and running by entering the command:

# vxdisk list

The output should display the shared disks without errors.
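
A further quick check (not part of the original steps) is to confirm that the VxVM configuration
daemon is enabled:

# vxdctl mode

The command should report "mode: enabled".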

Phase 5 - Configuring LLT, GAB, VCSMM, and VXFEN Drivers

1. On the new system, modify the file /etc/system to set the shared memory parameter. The new
value of the shared memory parameter takes effect when the system restarts.
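
For example, a shared memory entry in /etc/system typically looks like the following; the value
shown is only a placeholder and should be copied from an existing cluster node so that all nodes
match:

* shared memory setting for Oracle (copy the value from an existing node)
set shmsys:shminfo_shmmax=0x7fffffff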

2. Edit the file /etc/llthosts on the two existing systems. Using vi or another text editor, add the
line for the new node to the file. The file should resemble:

0 galaxy
1 nebula
2 saturn

3. On the new system, use vi or another text editor to create the file /etc/llthosts. The file must be
identical to the modified file in step 2.

4. Create an /etc/llttab file on the new system. For example:

set-node saturn
set-cluster 7
link qfe:0 /dev/qfe:0 - ether - -
link qfe:1 /dev/qfe:1 - ether - -

Except for the first line that refers to the system, the file resembles the /etc/llttab files on the other
two nodes. The second line must be the same on all of the nodes.

5. Use vi or another text editor to create the file /etc/gabtab on the new system. It should resemble
the following example:

/sbin/gabconfig -c -nN

Where N represents the number of systems in the cluster. For a three-system cluster, N would
equal 3.
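
For the three-node cluster in this example (galaxy, nebula, and saturn), the file on every node
would therefore contain:

/sbin/gabconfig -c -n3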

6. Edit the /etc/gabtab file on each of the existing systems, changing the content to match the file
on the new system.

7. Set up the /etc/vcsmmtab and /etc/vxfendg files on the new system by copying them from one
of the other existing nodes:

# rcp galaxy:/etc/vcsmmtab /etc


# rcp galaxy:/etc/vxfendg /etc

8. Run the following commands to start LLT and GAB on the new node:

# /etc/init.d/llt.rc start
# /etc/init.d/gab start

9. On the new node, start the VXFEN, VCSMM, and LMX drivers. Use the following commands
in the order shown:

# /etc/init.d/vxfen start
# /etc/init.d/vcsmm start
# /etc/init.d/lmx start

10. Unmount the ODM directory to unconfigure it, then restart ODM:

# umount /dev/odm
# /etc/init.d/odm start

11. On the new node, verify that the GAB port memberships are a, b, d, and o. Run the command:

# /sbin/gabconfig -a

GAB Port Memberships
===============================================================
Port a gen 4a1c0001 membership 012
Port b gen 4de40001 membership 012
Port d gen 40100001 membership 012
Port o gen c34ecd03 membership 012

Phase 6 - Configuring CVM


You can modify the VCS configuration using one of three possible methods. You can edit
/etc/VRTSvcs/conf/config/main.cf (the VCS configuration file) directly, you can use the VCS GUI
(Cluster Manager), or you can use the command line, as illustrated in the following example.

1. On one of the existing nodes, check the groups dependent on the CVM service group:

# hagrp -dep cvm


Parent Child Relationship
oradb1_grp cvm online local firm
oradb2_grp cvm online local firm

2. Make the VCS configuration writeable:

# haconf -makerw

3. Add the new system to the cluster:

# hasys -add saturn

4. If the ClusterService service group is configured, add the new system to its system list and
specify a failover priority of 2:

# hagrp -modify ClusterService SystemList -add saturn 2

5. If the ClusterService service group is configured, add the new system to the service group's
AutoStartList:

# hagrp -modify ClusterService AutoStartList galaxy nebula saturn

6. Add the new system to the cvm service group system list and specify a failover priority of 2:

# hagrp -modify cvm SystemList -add saturn 2

7. Add the new system to the cvm service group AutoStartList:

# hagrp -modify cvm AutoStartList galaxy nebula saturn

8. Add the new system and its node ID (refer to the /etc/llthosts changes in step 2 of Phase 5) to
the cvm_clus resource:

# hares -modify cvm_clus CVMNodeId -add saturn 2
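
As an optional check (not part of the original procedure), display the attribute to confirm that all
three nodes and their node IDs are now listed:

# hares -value cvm_clus CVMNodeId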

9. If the IP resource is not part of the cvm service group, skip to the next step. If the IP resource is
part of the cvm service group, add the new system to the IP resource:

# hares -modify listener_ip Address 10.182.2.130 -sys saturn

10. If the listener name is the default, skip this step. Otherwise, add the local listener name to the
Netlsnr resource:

# hares -modify LISTENER Listener listener_saturn -sys saturn

11. Save the new configuration to disk:

# haconf -dump -makero

12. On each of the existing nodes, run the following command to enable them to recognize the
new node:

# /etc/vx/bin/vxclustadm -m vcs -t gab reinit

Phase 7 - Setting Up/Installing Oracle

1. On the new cluster system, create a local group and local user for Oracle. Be sure to assign the
same group ID, user ID, and home directory as exist on the two current cluster systems. For
example, enter:

# groupadd -g 99 dba
# useradd -g dba -u 999 -d /oracle oracle

Create a password for the user oracle:

# passwd oracle
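
To confirm that the IDs match (a quick check, not part of the original steps), compare the output
of the following command on the new node and on an existing node; with the example values
above it should report uid=999(oracle) gid=99(dba) on every node:

# id oracle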

2. While installing Oracle on the new system, make sure that the file /var/opt/oracle/srvConfig.loc
is identical to, is in the same location as, and has the same permissions and owner as the copy on
the existing systems. If necessary, copy the file from one of the other systems. Also edit the
listener.ora file on the new system to specify the IP address (or the virtual IP address) for the new
system.
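
For example, to copy the srvConfig.loc file from an existing node with its permissions preserved
(a sketch; galaxy is the example source node used throughout this document):

# rcp -p galaxy:/var/opt/oracle/srvConfig.loc /var/opt/oracle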

3. If Oracle system binaries are installed on a cluster file system, set up the new system for Oracle:
log in as root on the new system and copy the directory /var/opt/oracle from one of the existing
systems into place:

# rcp -rp galaxy:/var/opt/oracle /var/opt

4. Restart the new node:


# shutdown -y -i6

As the new node boots, the VCS configuration is propagated from the existing cluster nodes to the
new node. All the configuration files located in the /etc/VRTSvcs/conf/config directory, including
main.cf, CVMTypes.cf, CFSTypes.cf and OracleTypes.cf, are identical on each node.
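
One hedged way to confirm the propagation (not part of the original procedure) is to compare a
checksum of main.cf across the nodes; the value should be identical everywhere:

# cksum /etc/VRTSvcs/conf/config/main.cf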

5. Run the following command to verify that the CVM group is configured and online on each
node, including the new node:

# hastatus -sum

6. On one of the existing nodes, run the following command to ensure the new node is recognized:

# /etc/vx/bin/vxclustadm nidmap

7. On the new system, if SRVM is configured, do the following:

a. If SRVM is located in $ORACLE_HOME on shared storage or in a separate shared raw
volume, go to step 8.

b. If SRVM is in a separate cluster file system, create the mount point for it, set the permissions,
and mount it. For example:

# mkdir /orasrv
# chown oracle:dba /orasrv
# mount -F vxfs -o cluster /dev/vx/dsk/orasrv_dg/srvm_vol /orasrv

8. Whether you installed Oracle9i locally or on shared storage, run the global services daemon
(gsd) in the background on the new system as user oracle.

For example, for Oracle9i Release 1, where $ORACLE_HOME equals /oracle/VRT, enter:

$ /oracle/VRT/bin/gsd.sh &

For Oracle9i Release 2, where $ORACLE_HOME equals /oracle/VRT, enter:

$ /oracle/VRT/bin/gsdctl start

Phase 8 - Configuring New Oracle Instance


1. On an existing system, add a new instance. Refer to the Oracle9i Installation Guide. Highlights
of the steps to add a new instance include:

◆ Log in as the user oracle and connect to the instance.


◆ Create a new "undotbs" tablespace for the new instance; for example, if the tablespace is for
the third instance, name it "undotbs3." If the database uses raw volumes, create the volume first.
Use the same size used by the existing "undotbs" volumes. (See the sketch after this list.)
◆ Create two new "redo" log groups for the new instance; for example, if the log groups are for
the third instance, name them "redo3_1" and "redo3_2." If the database uses raw volumes, create
the volumes for the redo logs first. Use the same size used by the existing redo volumes.
◆ Enable "thread 3," where 3 is the number of the new instance.
◆ Prepare the init{SID}.ora file for the new instance on the new node.
◆ If Oracle is installed locally on the new system, prepare the directories bdump, cdump,
udump, and pfile.
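
The following SQL*Plus sketch illustrates the tablespace, redo log, and thread items above for a
third instance. It is only an illustrative example: the data file and log file paths, sizes, and group
numbers are placeholders and must be adapted to the existing database layout (raw volumes or
cluster file system).

$ sqlplus '/ as sysdba'
SQL> CREATE UNDO TABLESPACE undotbs3
  2  DATAFILE '/oradb1/undotbs3.dbf' SIZE 500M;
SQL> ALTER DATABASE ADD LOGFILE THREAD 3
  2  GROUP 5 ('/oradb1/redo3_1.log') SIZE 100M,
  3  GROUP 6 ('/oradb1/redo3_2.log') SIZE 100M;
SQL> ALTER DATABASE ENABLE PUBLIC THREAD 3;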

2. If you use in-depth monitoring for the database, create the table for the database instance.
Create the table on the new system. Refer to the VERITAS Cluster Server Enterprise Agent for
Oracle, Installation and Configuration Guide for instructions on creating the table.

3. Configure the ODM port on the new node.


a. Unmount the ODM directory to unconfigure port d. ODM is automatically mounted on the new
system during installation, but not linked.
# umount /dev/odm

b. Mount the ODM directory. Re-mounting the ODM directory configures port d and re-links the
ODM libraries with SFO-RAC:
# mount /dev/odm

4. Create a mount point directory for the database file system. For example:
# mkdir /oradb1

5. Set the permissions for the mount point:


# chown oracle:dba /oradb1

6. Mount the file system with the database on the new system:
# mount -F vxfs -o cluster /dev/vx/dsk/oradbdg1/oradb1vol /oradb1

7. Log in as the oracle user and attempt to start the new instance manually; the following example
is for a third system:

$ export ORACLE_SID=rac3
$ sqlplus '/ as sysdba'
SQL> startup pfile=/oracle/orahome/dbs/initrac3.ora

8. After the new Oracle instance is brought up manually on the new system, place the instance
under VCS control.
a. Add the new system to the SystemList. For example, where the existing nodes galaxy and
nebula are nodes 0 and 1, saturn, the new node, would be node 2:

# haconf -makerw
# hagrp -modify oradb1_grp SystemList -add saturn 2

b. Add the new system to the AutoStartList for oradb1_grp:


# hagrp -modify oradb1_grp AutoStartList galaxy nebula saturn

c. Modify the Sid (system ID) and Pfile (parameter file location) attributes of the Oracle resource.
For example:

# hares -modify VRTdb Sid rac3 -sys saturn


# hares -modify VRTdb Pfile /oracle/orahome/dbs/initrac3.ora -sys saturn

d. If you have created a table for in-depth monitoring, modify the Table attribute of the Oracle
resource. For example:

# hares -modify VRTdb Table vcstable_saturn -sys saturn

e. Close and save the configuration:

# haconf -dump -makero

9. To verify the configuration, enter the following command on the new system:

# hastop -local

All resources should come offline on the new system.

10. Verify that all resources come online after starting VCS on the new system:

# hastart
